
AI Engineer Salary: The Lucrative World of AI Engineering – Simplilearn

A few decades ago, the term Artificial Intelligence was reserved for scientific circles and tech enthusiasts who wanted to sound cool. But ever since the term was coined in 1955, AI has only grown in popularity. Today, you wouldn't find a technology magazine that doesn't mention artificial intelligence in every other paragraph.


An AI Engineer is a professional skilled in developing, programming, and implementing artificial intelligence (AI) systems and applications. Their expertise lies in utilizing algorithms, data sets, and machine learning (ML) principles to create intelligent systems that perform tasks typically requiring human intelligence. These tasks may include problem-solving, decision-making, natural language processing, and understanding human speech.

AI Engineers work across various stages of AI project development, from conceptualizing and designing AI models to deploying and maintaining these systems in production environments. Their responsibilities often span this entire lifecycle.

AI Engineers typically have a strong foundation in computer science, mathematics, and statistics, with specialized knowledge in machine learning, deep learning, natural language processing, and computer vision. They must also be proficient in programming languages commonly used in AI, such as Python, and tools and frameworks like TensorFlow, PyTorch, and Keras.

Due to the interdisciplinary nature of AI, engineers often collaborate with data scientists, software engineers, and domain experts to develop solutions tailored to specific business needs or research objectives. The role requires continuous learning to keep up with the rapidly evolving field of artificial intelligence.

Before getting to the question at hand, we need to know the top AI engineer job roles. Machine Learning (ML) Engineer, Data Scientist, Data Analyst, Computer Vision Engineer, Business Intelligence Developer, and Algorithm Engineer are just some of the many positions that come under the umbrella of AI engineering. Each of these positions entails a different job profile but, generally speaking, most AI engineers deal with designing and creating AI models. Everything from maintenance to performance supervision of the model is the responsibility of the AI engineer.

Most AI engineers come from a computer science background and have strong programming skills, which is a non-negotiable part of an AI engineer's position. Proficiency in Python and Object-Oriented Programming is highly desirable. But for an AI engineer, what is even more important than programming languages is programming aptitude. Since the whole point of an AI system is to work without human supervision, AI algorithms are very different from traditional code, so the AI engineer must be able to design algorithms that are adaptable and capable of evolving.

Other than programming, an AI engineer needs to be conversant in an assortment of disciplines like robotics, physics, and mathematics. Mathematical knowledge is especially crucial as linear algebra and statistics play a vital role in designing AI models.

Read More: Gaurav Tyagi's love for learning inspired him to upskill with our AI For Decision Making: Business Strategies And Applications. Read about his journey and his experience with our course in his Simplilearn AI Program Review.

At the moment, AI engineering is one of the most lucrative career paths in the world. The AI job market has been growing at a phenomenal rate for some time now. The entry-level annual average AI engineer salary in India is around 10 lakhs, which is significantly higher than the average salary of any other engineering graduate. At high-level positions, the AI engineer salary can be as high as 50 lakhs.

AI engineers earn an average salary of well over $100,000 annually. According to Glassdoor, the average national salary is over $110,000, and salaries at the high end reach $150,000.

However, you must note that these figures can vary significantly based on several factors like:

Companies Hiring for Artificial Intelligence Engineers:

Companies and startups hiring in AI right now include IBM, Fractal.ai, JPMorgan, Intel, Oracle, and Microsoft.

City (India) | Average Salary (Annual)
Bangalore | ₹12,00,000
Hyderabad | ₹10,00,000
Mumbai | ₹15,00,000
Chennai | ₹8,00,000
Delhi | ₹12,00,000

The salary for AI professionals in India can vary based on a variety of factors, including experience, job role, industry, and location. However, here's an estimate of the AI salary based on experience in India:

It's important to note that these figures are just estimates and can vary based on individual circumstances. Additionally, the industry and location play a role in determining AI salaries, with industries such as finance, healthcare, and technology, and cities such as Bangalore, Mumbai, and Delhi, generally paying more than others in India.

If you're interested in pursuing a career in Artificial Intelligence (AI), here are some steps that can help you get started:

By following these steps, you can build a successful career in AI and become a valuable contributor to the field.

The top 7 countries with the maximum opportunities for Artificial Intelligence (AI) Professionals are:

There are various positions that an AI engineer can take up. An AI engineer's salary depends on the market demand for his/her job profile. Presently, ML engineers are in greater demand and hence command a relatively higher package than other AI engineers. Similarly, the greater the experience in artificial intelligence, the higher the salary companies will offer. Although you can become an AI engineer without a Master's degree, it is imperative that you keep updating and growing your skill set to remain competitive in the ever-evolving world of AI engineering.

There are a number of exciting and in-demand jobs in the field of artificial intelligence (AI). Here are some of the top AI jobs that you may want to consider:

As a machine learning engineer, you will be responsible for developing and implementing algorithms that enable computers to learn from data. This includes working with large data sets, designing and testing machine learning models, and tuning algorithms for efficient execution.

Data scientists use their expertise in statistics, mathematics, and computer science to analyze complex data sets. They work with organizations to gain insights that can be used to improve decision-making.

As an AI researcher, you will be responsible for investigating and developing new artificial intelligence algorithms and applications. This includes conducting research, writing papers, and presenting your findings at conferences.

Software engineers develop the software that enables computers to function. This includes creating algorithms, testing code, and debugging programs.

Systems engineers design and oversee the implementation of complex systems. This includes planning and coordinating system development, ensuring compatibility between components, and troubleshooting issues.

Hardware engineers design and oversee the manufacture of computer hardware components. This includes circuit boards, processors, and memory devices.

Network engineers design and implement computer networks. This includes configuring networking equipment, developing network architectures, and troubleshooting network problems.

Database administrators maintain databases and ensure that data is stored securely and efficiently. This includes designing database structures, implementing security measures, and backing up data.

Information security analysts plan and implement security measures to protect computer networks and systems. This includes researching security threats, assessing risks, and developing countermeasures.

User experience designers create user interfaces that are both effective and efficient. This includes developing navigation schemes, designing graphical elements, and testing prototypes.

These are just a few of the many exciting and in-demand jobs in the field of artificial intelligence. With the right skills and experience, you can find a position that matches your interests and abilities.

Just as AI is transforming the business landscape, it is also opening up new opportunities in the recruiting sphere. Here are some of the top companies and recruiters who are hiring for AI roles:

These are just some of the top companies and recruiters who are hiring for AI roles. If you have the right skills and experience, don't hesitate to apply!

There are a few key things you can do to help boost your AI salary. First, focus on acquiring in-demand skills. One of the best ways to do this is to enroll in a top-rated certification program. Second, keep up with the latest industry trends and developments. Finally, consider pursuing management or leadership roles within your organization. By taking these steps, you can position yourself for success and earn a higher salary in the AI field.

Supercharge your career in AI and ML with Simplilearn's comprehensive courses. Gain the skills and knowledge to transform industries and unleash your true potential. Enroll now and unlock limitless possibilities!

Even as you read this article, the demand for AI is booming across the globe. AI engineer salaries will keep rising as industries like tech, financial services, and medical research turn to artificial intelligence. As global brands like Google and Nvidia dive deeper into Artificial Intelligence (AI), the demand for AI engineers, and their salaries, will only go upwards in 2024 and the decades to follow. Government agencies in many developed and developing nations will also open up AI engineer positions as they realize the enormous impact AI can have on the defense and governance sectors.

Given the current pandemic scenario, the job hunt may be better left until the dawn of next year. The time you have right now will be far better utilized in upgrading your AI repertoire.

Unlike most other fields, the AI of tomorrow will look nothing like the AI of today. It is evolving at a breathtaking speed, and to ensure your Artificial Intelligence (AI) skills stay relevant to current market needs, you had better keep upgrading them. If you wish to get a step closer to these lucrative salaries, sharpen your AI skills with the world-class Artificial Intelligence Engineer program, and, before you know it, you will be standing in the world of AI engineers!

The salary of an AI Engineer in India can range from 8 lakhs to 50 lakhs annually.

The starting salary for an AI Engineer in India is around 8 lakhs annually.

50 lakhs is the highest salary for an AI Engineer in India.

As experience and seniority increase, so does the salary.

IT is one of the highest-paying industries for AI Engineers.

Popular skills for AI Engineers to have are programming languages, data engineering, exploratory data analysis, model deployment, modelling, and security.

The average Artificial Intelligence Engineer salary in the US is around $100k annually.

Top 5 Artificial Intelligence Jobs in the US are Machine Learning Engineer, Data Scientist, Business Intelligence Developer, Research Scientist, and Big Data Engineer/Architect.

The lowest salary for an AI Engineer in the US is around $100k annually.

The highest salaries can go from $150k to over $200k annually.


Downscaling a Satellite Thermal Image from 1000 m to 10 m (Python) – Towards Data Science

Thermal sharpening of Sentinel-3 images: from 1 km to 10 m using Python in Google Colab

Downscaling the thermal imagery captured by satellites has been extensively studied due to the tradeoff between the spatial and temporal resolution of satellites that provide thermal images. For example, the revisit cycle of Landsat-8 is 16 days, with an original thermal resolution of 100 meters. In contrast, Sentinel-3 can provide daily thermal images, but at a spatial resolution of 1000 meters.

One approach to addressing the coarse resolution of thermal images could be launching more satellites equipped with thermal sensors, such as NASA's Landsat-9, launched in September 2021. In this case, the combined temporal resolution of Landsat-8 and Landsat-9 is 8 days (instead of 16 days with one satellite), assuming clear skies.

However, as you can guess, this approach requires a multimillion-dollar investment and several years of effort. Instead, researchers have focused on statistical methods, correlating the visible/near-infrared (VNIR) bands from satellites with higher spatial resolution (but lower temporal resolution) with thermal images from satellites with lower spatial resolution (but higher temporal resolution). For example, studies have shown that the Normalized Difference Vegetation Index (NDVI) calculated from the VNIR bands of Sentinel-2 (10 m, every 5 days) can be inversely correlated with thermal images from Sentinel-3 (1000 m, daily).
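
To make the VNIR side concrete, here is a minimal sketch of computing NDVI from Sentinel-2's red (B04) and near-infrared (B08) bands; the file names are hypothetical placeholders, not actual Sentinel-2 product names.

```python
import numpy as np
import rasterio

# Hypothetical file names; real Sentinel-2 products use their own naming scheme.
with rasterio.open("S2_B04_red_10m.tif") as src:
    red = src.read(1).astype("float32")
with rasterio.open("S2_B08_nir_10m.tif") as src:
    nir = src.read(1).astype("float32")

# NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero.
denom = nir + red
ndvi = np.where(denom == 0, 0.0, (nir - red) / denom)
```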

But how can we use this


NVIDIA and HP Speed Up Data Science and AI on PCs – Analytics Insight

NVIDIA and HP Inc. have announced the integration of NVIDIA CUDA-X data processing libraries with HP AI workstation solutions. This will accelerate data preparation and processing for generative AI development. CUDA-X libraries, built on the NVIDIA CUDA computing platform, enhance data processing across diverse data types such as tables, text, images, and video. The NVIDIA RAPIDS cuDF library significantly accelerates the work of nearly 10 million data scientists who rely on pandas software. By leveraging an NVIDIA RTX 6000 Ada Generation GPU instead of a CPU-only system, performance gains of up to 110x are achieved, all without requiring any code modifications.

RAPIDS cuDF and other NVIDIA software will be offered as part of Z by HP AI Studio on HP AI workstations, offering a full-stack development solution that accelerates data science workflows. "Pandas is the essential tool for millions of data scientists processing and preparing data for generative AI," said Jensen Huang, NVIDIA's founder and CEO. "Accelerating pandas with no code modifications will be a significant step forward. Data scientists can handle data in minutes rather than hours and use orders of magnitude more data to train generative artificial intelligence models."

"Data science sets the groundwork for AI, and developers require quick access to software and systems to fuel this critical work," said Enrique Lores, president and CEO of HP Inc. "With the integration of NVIDIA AI software and accelerated GPU compute, HP AI workstations provide a powerful solution for our customers."

Pandas has a robust data format called DataFrames, which allows developers to quickly edit, clean, and analyze tabular data. The NVIDIA RAPIDS cuDF package accelerates pandas, allowing it to run on GPUs with no code modifications rather than on CPUs, which can slow workloads as data sizes increase. RAPIDS cuDF works with third-party libraries and combines GPU and CPU operations, allowing data scientists to design, test, and execute models in production effortlessly.
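
The "no code modifications" workflow relies on cuDF's pandas accelerator mode, which is enabled before pandas is imported; a minimal sketch, assuming a CUDA-capable GPU and the cudf package installed:

```python
# Enable RAPIDS cuDF's pandas accelerator mode before importing pandas.
# (In a notebook this is the `%load_ext cudf.pandas` magic; from a shell,
# `python -m cudf.pandas script.py` does the same.)
import cudf.pandas
cudf.pandas.install()

import pandas as pd  # unchanged pandas code now runs on the GPU where supported

df = pd.DataFrame({"sensor": ["a", "b", "a", "b"], "value": [1.0, 2.5, 3.1, 0.4]})
print(df.groupby("sensor")["value"].mean())  # executed by cuDF, with CPU fallback
```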

As datasets expand, RTX 6000 Ada Generation GPUs with 48 GB of memory per GPU can handle massive data science and AI tasks on Z by HP workstations. With up to four RTX 6000 GPUs, the HP Z8 Fury is one of the world's most powerful workstations for AI development. HP and NVIDIA's strong partnership enables data scientists to speed up development by working on local machines capable of processing huge generative AI workloads.

NVIDIA RAPIDS cuDF, which greatly accelerates pandas operations (up to nearly 150 times faster), now works smoothly with HP AI workstation systems. Users can use NVIDIA RTX and GeForce RTX GPUs to process data. Furthermore, HP AI Studio will include cuDF later this year, improving efficiency and performance.



Structure and Relationships: Graph Neural Networks and a Pytorch Implementation – Towards Data Science

Let's implement a regression example where the aim is to train a network to predict the value of a node given the values of all other nodes, i.e. each node has a single feature (which is a scalar value). The aim of this example is to leverage the inherent relational information encoded in the graph to accurately predict numerical values for each node. The key thing to note is that we input the numerical values for all nodes except the target node (we mask the target node's value with 0) and then predict the target node's value. For each data point, we repeat the process for all nodes. Perhaps this might come across as a bizarre task, but let's see if we can predict the expected value of any node given the values of the other nodes. The data used is simulation data corresponding to a series of sensors from industry, and the graph structure I have chosen in the example below is based on the actual process structure. I have provided comments in the code to make it easy to follow. You can find a copy of the dataset here (Note: this is my own data, generated from simulations).

This code and training procedure are far from optimised; the aim is to illustrate the implementation of GNNs and build intuition for how they work. An issue with the current approach, which should definitely not be replicated beyond learning purposes, is the masking of a node's feature value and predicting it from its neighbours' features: currently you'd have to loop over each node (not very efficient). A much better way is to stop the model from including its own features in the aggregation step, so you wouldn't need to process one node at a time, but I thought it is easier to build intuition for the model with the current method :)

Preprocessing Data

Importing the necessary libraries and the sensor data from a CSV file, and normalising all data to the range of 0 to 1.
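
A minimal sketch of this step, assuming the readings live in a hypothetical sensor_data.csv with one column per sensor and one row per observation:

```python
import pandas as pd
import torch
from sklearn.preprocessing import MinMaxScaler

# Hypothetical file name; each row is one observation, each column one sensor.
df = pd.read_csv("sensor_data.csv")

# Normalise every sensor reading into the range [0, 1].
scaler = MinMaxScaler()
values = scaler.fit_transform(df.values)
```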

Defining the connectivity (edge index) between nodes in the graph using a PyTorch tensor, i.e. providing the system's graph topology.
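
In PyTorch Geometric convention, edge_index is a 2 x num_edges tensor whose columns are (source, target) index pairs; the topology below is an illustrative placeholder, not the actual process structure:

```python
import torch

# Each column is one directed edge (source -> target). Listing every pair in
# both directions makes the graph effectively undirected.
edge_index = torch.tensor(
    [[0, 1, 1, 2, 2, 3],   # source nodes
     [1, 0, 2, 1, 3, 2]],  # target nodes
    dtype=torch.long,
)
```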

The data imported from the CSV has a tabular structure, but to use it in GNNs it must be transformed into a graph structure. Each row of data (one observation) is represented as one graph, so we iterate through each row to create the graph representation of the data.

A mask is created for each node/sensor to indicate the presence (1) or absence (0) of data; in most systems there may be items with no data available, hence the need for this flexibility in handling missing data. Finally, split the data into training and testing sets, as sketched below.
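
Continuing the sketch above, one plausible way to wrap each row in a Data object, attach the presence/absence mask (the zero-value heuristic here is an assumption), and split the result:

```python
from sklearn.model_selection import train_test_split
from torch_geometric.data import Data

graphs = []
for row in values:  # one observation -> one graph
    x = torch.tensor(row, dtype=torch.float).unsqueeze(1)  # one scalar feature per node
    mask = (x != 0).float()  # 1 = data present, 0 = absent (heuristic assumption)
    graphs.append(Data(x=x, edge_index=edge_index, mask=mask))

train_graphs, test_graphs = train_test_split(graphs, test_size=0.2, random_state=42)
```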

Graph Visualisation

The graph structure created above using the edge indices can be visualised using networkx.
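
For example, converting one of the PyTorch Geometric graphs built above to a networkx graph purely for plotting:

```python
import matplotlib.pyplot as plt
import networkx as nx
from torch_geometric.utils import to_networkx

# Draw the shared topology of the graphs built above.
g = to_networkx(graphs[0], to_undirected=True)
nx.draw(g, with_labels=True, node_color="lightblue")
plt.show()
```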

Model Definition

Let's define the model. The model incorporates two GAT convolutional layers. The first layer transforms the node features into an 8-dimensional space, and the second GAT layer produces a further 8-dimensional representation.

GNNs are highly susceptible to overfitting, so regularisation (dropout) is applied after each GAT layer with a user-defined probability. The dropout layer essentially randomly zeroes some of the elements of the input tensor during training.

The output of the GAT convolution layers is passed through a fully connected (linear) layer to map the 8-dimensional output to the final node feature, which in this case is a scalar value per node.

Masking the value of the target node: as mentioned earlier, the aim of this task is to regress the value of the target node based on the values of its neighbours, which is why the target node's value is masked/replaced with zero. A sketch of the full model follows.
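
Putting these pieces together, a sketch of the model (layer sizes follow the description above; the activation and dropout rate are my assumptions):

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv

class NodeRegressor(torch.nn.Module):
    """Two GAT layers with dropout, then a linear head giving one scalar per node."""

    def __init__(self, in_dim: int = 1, hidden_dim: int = 8, dropout: float = 0.5):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden_dim)
        self.gat2 = GATConv(hidden_dim, hidden_dim)
        self.head = torch.nn.Linear(hidden_dim, 1)
        self.dropout = dropout

    def forward(self, x, edge_index):
        x = F.elu(self.gat1(x, edge_index))
        x = F.dropout(x, p=self.dropout, training=self.training)
        x = F.elu(self.gat2(x, edge_index))
        x = F.dropout(x, p=self.dropout, training=self.training)
        return self.head(x)  # shape: [num_nodes, 1]
```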

Training the model

Initialising the model and defining the optimiser, loss function, and hyperparameters, including the learning rate, weight decay (for regularisation), batch size, and number of epochs.

The training process is fairly standard: each graph (one data point) is passed through the forward pass of the model, iterating over each node and predicting the target node's value. The loss from the predictions is accumulated over the defined batch size before updating the GNN through backpropagation.
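
A sketch of that loop, masking the target node's value with zero before each forward pass and accumulating the loss over a batch of graphs (the hyperparameter values are placeholders):

```python
model = NodeRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
loss_fn = torch.nn.MSELoss()
batch_size, num_epochs = 8, 100

model.train()
for epoch in range(num_epochs):
    optimizer.zero_grad()
    batch_loss = torch.tensor(0.0)
    for i, graph in enumerate(train_graphs):
        for node in range(graph.num_nodes):
            x = graph.x.clone()
            target = x[node].clone()
            x[node] = 0.0  # mask the target node's value
            pred = model(x, graph.edge_index)[node]
            batch_loss = batch_loss + loss_fn(pred, target)
        if (i + 1) % batch_size == 0:  # update once per batch of graphs
            batch_loss.backward()
            optimizer.step()
            optimizer.zero_grad()
            batch_loss = torch.tensor(0.0)
```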

Testing the trained model

Using the test dataset, pass each graph through the forward pass of the trained model and predict each node's value based on its neighbours' values.
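
The evaluation mirrors the training loop, but with dropout disabled via model.eval() and no gradients tracked:

```python
model.eval()
preds, truths = [], []
with torch.no_grad():
    for graph in test_graphs:
        for node in range(graph.num_nodes):
            x = graph.x.clone()
            truths.append(x[node].item())
            x[node] = 0.0  # mask the target node, exactly as during training
            preds.append(model(x, graph.edge_index)[node].item())
```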

Visualising the test results

Using iplot we can visualise the predicted values of nodes against the ground truth values.
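
For instance, overlaying the two series in a notebook, using the preds and truths lists from the evaluation sketch above:

```python
import plotly.graph_objs as go
from plotly.offline import iplot

iplot(go.Figure(data=[
    go.Scatter(y=truths, mode="lines", name="ground truth"),
    go.Scatter(y=preds, mode="lines", name="predicted"),
]))
```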

Despite the lack of fine-tuning of the model architecture or hyperparameters, it has actually done a decent job; we could tune the model further to get improved accuracy.

This brings us to the end of this article. GNNs are newer than other branches of machine learning, and it will be very exciting to see not only the developments in this field but also its application to different problems. Finally, thank you for taking the time to read this article; I hope you found it useful in your understanding of GNNs or their mathematical background.

Unless otherwise noted, all images are by the author


Seeing Our Reflection in LLMs. When LLMs give us outputs that reveal | by Stephanie Kirmer | Mar, 2024 – Towards Data Science


By now, I'm sure most of you have heard the news about Google's new LLM, Gemini, generating pictures of racially diverse people in Nazi uniforms. This little news blip reminded me of something that I've been meaning to discuss, which is when models have blind spots, so we apply expert rules to the predictions they generate to avoid returning something wildly outlandish to the user.

This sort of thing is not that uncommon in machine learning, in my experience, especially when you have flawed or limited training data. A good example of this that I remember from my own work was predicting when a package was going to be delivered to a business office. Mathematically, our model would be very good at estimating exactly when the package would get physically near the office, but sometimes, truck drivers arrive at destinations late at night and then rest in their truck or in a hotel until morning. Why? Because no one's in the office to receive/sign for the package outside of business hours.

Teaching a model about the idea of business hours can be very difficult, and the much easier solution was just to say, "If the model says the delivery will arrive outside business hours, add enough time to the prediction that it changes to the next hour the office is listed as open." Simple! It solves the problem and it reflects the actual circumstances on the ground. We're just giving the model a little boost to help its results work better.
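
A rule like that is typically a thin post-processing wrapper around the model's output, kept separate from the raw prediction; a hypothetical sketch (the office hours and function name are mine, not from the original system):

```python
from datetime import datetime, timedelta

OPEN_HOUR, CLOSE_HOUR = 9, 17  # hypothetical business hours

def adjust_eta(raw_eta: datetime) -> datetime:
    """Push an out-of-hours ETA to the next opening time.

    The raw model prediction is kept separately for monitoring and metrics.
    """
    if OPEN_HOUR <= raw_eta.hour < CLOSE_HOUR:
        return raw_eta
    next_open = raw_eta.replace(hour=OPEN_HOUR, minute=0, second=0, microsecond=0)
    if raw_eta.hour >= CLOSE_HOUR:
        next_open += timedelta(days=1)
    return next_open
```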

However, this does cause some issues. For one thing, now we have two different model predictions to manage. We can't just throw away the original model prediction, because that's what we use for model performance monitoring and metrics. You can't assess a model on predictions after humans got their paws in there; that's not mathematically sound. But to get a clear sense of the real-world model impact, you do want to look at the post-rule prediction, because that's what the customer actually experienced/saw in your application. In ML, we're used to a very simple framing, where every time you run a model you get one result or set of results, and that's that, but when you start tweaking the results before you let them go, then you need to think at a different scale.

I kind of suspect that this is a form of what's going on with LLMs like Gemini. However, instead of a post-prediction rule, it appears that the smart money says Gemini and other models are applying secret prompt augmentations to try and change the results the LLMs produce.

In essence, without this nudging, the model will produce results that are reflective of the content it has been trained on. That is to say, the content produced by real people. Our social media posts, our history books, our museum paintings, our popular songs, our Hollywood movies, etc. The model takes in all that stuff, and it learns the underlying patterns in it, whether they are things we're proud of or not. A model given all the media available in our contemporary society is going to get a whole lot of exposure to racism, sexism, and myriad other forms of discrimination and inequality, to say nothing of violence, war, and other horrors. While the model is learning what people look like, and how they sound, and what they say, and how they move, it's learning the warts-and-all version.

Our social media posts, our history books, our museum paintings, our popular songs, our Hollywood movies, etc. The model takes in all that stuff, and it learns the underlying patterns in it, whether they are things we're proud of or not.

This means that if you ask the underlying model to show you a doctor, it's probably going to be a white guy in a lab coat. This isn't just random; it's because in our modern society white men have disproportionate access to high-status professions like being doctors, because they on average have access to more and better education, financial resources, mentorship, social privilege, and so on. The model is reflecting back at us an image that may make us uncomfortable, because we don't like to think about that reality.

The obvious argument is, "Well, we don't want the model to reinforce the biases our society already has; we want it to improve representation of underrepresented populations." I sympathize with this argument, quite a lot, and I care about representation in our media. However, there's a problem.

It's very unlikely that applying these tweaks is going to be a sustainable solution. Recall the story I started with about Gemini. It's like playing whac-a-mole, because the work never stops: now we've got people of color being shown in Nazi uniforms, and this is understandably deeply offensive to lots of folks. So, maybe where we started by randomly applying "as a black person" or "as an indigenous person" to our prompts, we have to add something more to make it exclude cases where it's inappropriate. But how do you phrase that, in a way an LLM can understand? We probably have to go back to the beginning, and think about how the original fix works, and revisit the whole approach. In the best case, applying a tweak like this fixes one narrow issue with outputs, while potentially creating more.

Let's play out another very real example. What if we add to the prompt, "Never use explicit or profane language in your replies, including [list of bad words here]"? Maybe that works for a lot of cases, and the model will refuse to say bad words that a 13-year-old boy is requesting to be funny. But sooner or later, this has unexpected additional side effects. What about if someone's looking for the history of Sussex, England? Alternately, someone's going to come up with a bad word you left out of the list, so that's going to be constant work to maintain. What about bad words in other languages? Who judges what goes on the list? I have a headache just thinking about it.

These are just two examples, and I'm sure you can think of more such scenarios. It's like putting band-aid patches on a leaky pipe: every time you patch one spot, another leak springs up.

So, what is it we actually want from LLMs? Do we want them to generate a highly realistic mirror image of what human beings are actually like and how our human society actually looks from the perspective of our media? Or do we want a sanitized version that cleans up the edges?

Honestly, I think we probably need something in the middle, and we have to continue to renegotiate the boundaries, even though it's hard. We don't want LLMs to reflect the real horrors and sewers of violence, hate, and more that human society contains; that is a part of our world that should not be amplified even slightly. Zero content moderation is not the answer. Fortunately, this motivation aligns with the desires of the large corporate entities running these models to be popular with the public and make lots of money.

we have to continue to renegotiate the boundaries, even though it's hard. We don't want LLMs to reflect the real horrors and sewers of violence, hate, and more that human society contains; that is a part of our world that should not be amplified even slightly. Zero content moderation is not the answer.

However, I do want to continue to make a gentle case for the fact that we can also learn something from this dilemma in the world of LLMs. Instead of simply being offended and blaming the technology when a model generates a bunch of pictures of a white male doctor, we should pause to understand why that's what we received from the model. And then we should debate thoughtfully about whether the response from the model should be allowed, and make a decision that is founded in our values and principles, and try to carry it out to the best of our ability.

As I've said before, an LLM isn't an alien from another universe; it's us. It's trained on the things we wrote/said/filmed/recorded/did. If we want our model to show us doctors of various sexes, genders, races, etc., we need to make a society that enables all those different kinds of people to have access to that profession and the education it requires. If we're worrying about how the model mirrors us, but not taking to heart the fact that it's us that needs to be better, not just the model, then we're missing the point.

If we want our model to show us doctors of various sexes, genders, races, etc, we need to make a society that enables all those different kinds of people to have access to that profession and the education it requires.


Ethereum Recovers From Dip: ETH Hits $3900 For The First Time In Two Years – TradingView

After Bitcoin (BTC) recorded a new all-time high (ATH), Ethereum (ETH) rallied above $3,800 before the price crashed over 10%. The second-largest cryptocurrency has recovered from the dip and reached $3,900 momentarily for the first time in over two years.

Ethereum Recovers And Rallies to $3,900

On Thursday, Bitcoin reached a crucial milestone after breaking above $69,000 and recording a new all-time high (ATH). Before the euphoria was over, the flagship cryptocurrency's price started to drop, trading as low as $60,000. Since then, BTC's price has recovered to hover within the $66,000-$67,000 range.

Fueled by the bullish sentiment, Ethereum rallied above $3,800 before suffering a considerable price drop. The king of altcoins lost momentum and shed about 12% of its price to trade as low as $3,360, according to CoinMarketCap data.

After the dip was done, ETH started to recover alongside Bitcoin. As reported by NewsBTC, a crucial resistance level to clear during this recovery was $3,600. Ethereum surpassed this level and has maintained its price above the $3,800 range during the last 4 hours.

Ethereum reached the $3,800 support level twice in the last 24 hours. This price range had not been seen since January 2022, and the regained bullish momentum propelled the token's price to a higher milestone.

Ethereum hit $3,900 for the first time since December 2021. The biggest altcoin briefly soared to $3,901 before falling back to the $3,850 price range.

At the time of writing, ETH is trading at $3,834, representing a 1.6% price drop in the last hour and a 2% increase from 24 hours ago. Similarly, the token exhibits green numbers on longer timeframes.

Ethereum's price has surged almost 16% in the past week, 65% in the last month, and an impressive 145% in one year.

ETH's market capitalization increased 1.55% to $459.7 billion on the last day. Its daily trading volume has increased by 58%, with $52.16 billion in market activity in the previous 24 hours.

What's Next For ETH's Price?

Many analysts have forecast that ETH's rally is far from over. Analyst Altcoin Sherpa predicted that Ethereum could reach $4,000 when it breaks through the $3,000 price barrier.

Ethereum's rally seems to be fueled not only by Bitcoin's momentum but also by general market dynamics. The date for the Dencun upgrade is approaching, and this update is expected to bring several technical improvements to Ethereum's infrastructure.

Moreover, the possibility of Ether-based spot exchange-traded funds (ETFs) being approved by the US Securities and Exchange Commission (SEC) in May has built expectations for Ether and the blockchain's ecosystem.

Pseudonymous trader Ash Crypto suggested to his Telegram subscribers that the price correction experienced after Bitcoin's new ATH was not a reason to panic.

Related Reading: Ethereum Price Follows Bitcoin Surge, Why $4K Is Just A Matter of Time

The trader considers that the late long flush to cut all the leverage was expected and that a soon-to-come stabilization in BTC's price will propel the run of ETH and all altcoins. Similarly, he announced the incoming alt season after the price of ETH hit $3,900 and suggested that Ethereum's next support level will be $4,200.


3 reasons why Ethereum (ETH) price could hit $4K in the short-term – Cointelegraph

Ether (ETH) rallied to a new year-to-date high of $3,822 on March 5 after climbing 8% over the last 24 hours. The second-largest cryptocurrency by market capitalization is up 15% over the last seven days and 132% over the last six months.

Data from Cointelegraph Markets Pro and TradingView show Ether's price was hovering around $3,796, about 28% shy of its all-time high of $4,891 set on Nov. 26, 2021.

Accompanying ETH's rally is a 68% leap in daily trading volume, currently at $33.29 billion. With a market capitalization of $453 billion, Ether cements its position as the second most valuable cryptocurrency, according to CoinMarketCap.

Apart from the uptrend in the wider crypto market, fueled by increased inflows into spot Bitcoin ETFs and the upcoming Bitcoin supply halving, other fundamental factors and on-chain metrics back Ethereum's uptrend.

One factor supporting Ether's upside is its shrinking supply on exchanges. Data from on-chain market intelligence firm Glassnode shows the ETH balance on exchanges reached a 20-month low of 13.14 million ETH after dropping 7.7% over the last 90 days.

The total balance between inflows and outflows across all known exchange wallets shows a steep decline since October 2023, when withdrawals from the trading platforms began to surge. This drop accompanies a 130% rise in Ether's price over the same period.

Decreasing ETH balances on exchanges simply means investors could be withdrawing their tokens into self-custody wallets, indicating a lack of intention to sell in anticipation of a price increase in the future.

This is explained by a spike in accumulation by large holders over the last few weeks. More data from Glassnode shows that wallets holding $100,000 or more worth of ETH have been on the rise since the start of February.

The chart above shows that wallets holding $100,000 or more have increased from 94,620 on Jan. 1 to 141,406 on March 4. This means that whales have not sold into the latest rally in ETH but have continued to accumulate, suggesting most want to position themselves for more gains.

Also contributing to the decreasing amount of ETH available for trade is the increasing amount of Ether staked on the Beacon Chain. According to data from Dune Analytics, over 31.58 million ETH, worth $119.8 billion at current rates, is now staked on Ethereum's proof-of-stake layer.

This means 26.3% of the ETH supply has been staked and is unavailable in the market, with over 987,000 individual validators involved.

Related: Bitcoin price hits a new all-time high

Staking on Ethereum has been further facilitated by liquid staking solutions like Lido, Rocket Pool and EtherFi, which allow for the staking of amounts less than 32 ETH and enable the use of staked assets as collateral in DeFi.

According to data from BlockBeats, the total value locked on EtherFi has crossed the $2 billion mark, highlighting the growing popularity and adoption of Ethereum liquidity protocols.

Increased demand for leverage resulted in a surge in ETH futures open interest (OI), which sat around $11.98 billion, edging closer to the $13 billion peak witnessed on Nov. 9, 2021.

Data from Coinglass shows that Ether futures OI broke above $8 billion on Feb. 12 after being pinned under this level for more than two years. From there, the OI has jumped nearly 50% in less than two weeks, suggesting increased demand for leveraged ETH positions.

Currently, Ethereum's on-chain and derivatives markets reflect investors' optimism and expectations of a spot Ether ETF approval. The upcoming Dencun upgrade could also be lending some bullish tailwinds to the ETH price.

This article does not contain investment advice or recommendations. Every investment and trading move involves risk, and readers should conduct their own research when making a decision.


‘The Polyverse Testnet’ is live, bringing IBC to Ethereum – Blockworks

Polymer, an Ethereum rollup that hopes to become Ethereum's interoperability hub, has launched the Polyverse Testnet, becoming the latest team hoping to tackle blockchain interoperability.

The testnet will be launched in three phases, dubbed Basecamp, Into the Unknown, and Discovery. The first phase, Basecamp, will be live starting today and is designed to incentivize developers to bring liquidity onto the testnet from other rollups.

Phase 2, Into the Unknown, will commence the following week, in which Polymer will select a handful of decentralized apps to promote to end users, who will also be able to receive rewards. Then the final phase, Discovery, will focus on refining and optimizing incentive mechanisms to drive participation.

Like many cross-chain messaging and bridging protocols today, Polymer was created to solve the issue of blockchain interoperability.

Read more: Interoperability isnt just a buzzword

Blockchain ecosystems today remain relatively isolated from one another, meaning they cannot communicate or interact with each other, creating terrible user experiences for their customers.

An example of this in Web2 would be being unable to send emails from your Gmail account to an Outlook account.

To address the communication barrier, cross-chain messaging protocols and other interoperability solutions have sprung into life as a means to enable blockchains to safely transfer valuable information to each other.

This type of infrastructure is critical to blockchain scaling, as evidenced by the attention and interest it has received from investors.

Wormhole, one of the largest cross-chain messaging solutions today, secured $225 million in a private token sale, which saw interest from Brevan Howard, Coinbase Ventures and Multicoin Capital late last year.

Similarly, LayerZero locked in a $120 million Series B fundraise, with investors including a16z, OKX Ventures and Sequoia Capital backing the protocol to expand its operations.

Polymer also recently revealed that it acquired $23 million to bring the Cosmos SDK's inter-blockchain communication (IBC) protocol to Ethereum.

Read More: Polymer Labs secures $23M to bring IBC to Ethereum

Unlike many interoperability protocols today, Polymer is not designed as a third-party bridge but rather as a layer-2 Ethereum rollup solution that serves a similar purpose to the interoperability hub on Cosmos. It aims to provide IBC to Ethereum and connect with other layer-2 solutions.

"IBC, unlike many other interoperability solutions today, is not a bridge application but a network standard," Devain Pal Bansal, a product analyst at Polymer Labs, told Blockworks.

"The biggest benefit of introducing it to Ethereum, particularly Ethereum rollups, is that it extends the capabilities of how a rollup settles on Ethereum via the native bridge and extends it cross rollups without a third party required to attest to data or its validity by simply using the shared source of truth for all rollups: Ethereum," Bansal said.

Tommy O'Connell, a senior product manager at Polymer, explained to Blockworks that applications can build their own bridges and control inbound and outbound messages using a layer-1 trust layer. This eliminates the need for an additional trust assumption of a third party.

"This also allows us to be focused on enabling chains to join Polymer's ecosystem of chains with just a SINGLE connection to the hub, mitigating Polymer being a blocker for growth," O'Connell said.

This differs from Wormhole, for example, which relies on a 13 of 19 supermajority to attest to a message before it is produced or sent. It is also different from Axelar, which relies on validators for attestations.

It is important to note, however, that Polymer's minimum viable product (MVP) will be limited to Base and Optimism at the testnet launch.

Though this is the case, O'Connell notes that there are immediate plans to grow to other OP stack chains and, soon after, to other chains such as those in the Cosmos ecosystem.

"The primary benefit for OP stack rollups is that we have built an IBC client for OP geth, which enables us to extend the capabilities of the native L1<>L2 bridge across rollups. It is particularly appealing because we can unlock other chains built on the OP stack with minimal expansion effort," O'Connell said.



Ethereum is set to outperform Bitcoin as the spot ETH ETF narrative comes into play, analysts say – crypto.news

Analysts at QCP Capital note that despite Bitcoin's quick drop to $59,000, funding is back to sensible levels, and the more likely scenario is the outperformance of the ETH/BTC pair.

In a volatile overnight session, Bitcoin (BTC) quickly set a new all-time high of $69,400, only to undergo a rapid decline, plummeting to $59,200 within a matter of hours. This steep downturn resulted in the liquidation of over $1 billion worth of leveraged long positions on Binance alone.

However, as noted by analysts at QCP Capital, the market quickly rebounded as the dip was aggressively bought up, with the $60,000 level demonstrating robust support. According to the analysts, funding rates have returned to sensible levels, hovering around 30% annually on Binance. Against this backdrop, analysts at QCP Capital anticipate Ethereum (ETH) will outperform Bitcoin as the narrative surrounding a spot Ethereum exchange-traded fund (ETF) gains traction.

[...] the likely scenario is the outperformance of ETH/BTC as the ETH spot ETF narrative comes into play.

QCP Capital

Despite the leverage unwinding, term futures are still trading at a significant premium to spot prices, analysts emphasized, adding that there has been a surge in client activity aimed at selling the spot-forward spread, particularly for contracts expiring between September and December this year, allowing investors to secure risk-free yields for the year.

As of press time, Ethereum remains significantly further from its all-time high than Bitcoin, suggesting potential for rapid value appreciation. While Bitcoin trails only 4.3% from its historical peak reached on Mar. 5, Ethereum lags its 2021 record by over 20%, according to CoinGecko data.

As the crypto landscape evolves, Wall Street behemoths are intensifying efforts to introduce more spot crypto ETFs, following the U.S. Securities and Exchange Commission's (SEC) approval of all applications for spot Bitcoin ETFs earlier in January. Reports indicate ongoing discussions between the SEC and Ethereum ETF applicants, with decisions on spot Ethereum ETFs delayed until May at the earliest. VanEck's filing, in particular, awaits a response by May 23, alongside applications from BlackRock, Franklin Templeton, Grayscale, and Invesco Galaxy.


Ethereum Posts New 2024 High As Bitcoin Rebounds From Violent Pullback – The Defiant – DeFi News

ETH rallied to $3,900 while BTC bounced back to $67,370.

ETH posted a new 2024 high today while Bitcoin recovered from yesterday's violent pull-back.

The price of Ether tagged $3,900 for the first time in over two years on March 6, with investors turning their attention to the No. 2 cryptocurrency by market cap after Bitcoin tumbled from its new all-time high yesterday.

BTC last changed hands at $67,150, having mostly recovered from a violent shake-out that saw Bitcoin fall below $61,000 after posting a new all-time high above $68,750, according to aggregated price data from CoinGecko.

Bitcoin's pull-back drove $1.2 billion worth of liquidations across the cryptocurrency sector in 24 hours, according to CoinGlass. The move coincided with new records for daily spot Bitcoin ETF volume and inflows, at $10 billion and $648.4 million respectively.

ETH is now up 10.5% over the past seven days, while BTC's weekly gain sits at 7%. Other notable gainers among leading digital assets for the week include Solana (SOL) with 11%, Polkadot (DOT) with 16.5%, and Uniswap (UNI) with 42.7%.

Memecoin mania is also raging on, with Dogwifhat (WIF) rallying 62% in 24 hours to surpass a $2 billion market cap for the first time. SHIB, WIF, FLOKI, PEPE, and BONK rank as the five strongest performing top 100 markets of the past seven days with astonishing gains ranging from 93% to 198%.
