Category Archives: Data Science

AI Engineer Salary: The Lucrative World of AI Engineering – Simplilearn

A few decades ago, the term Artificial Intelligence was reserved for scientific circles and tech enthusiasts who wanted to sound cool. But ever since its coining in 1955, AI has only grown in popularity. Today, you wouldn't find a technology magazine that doesn't mention artificial intelligence in every other paragraph.

Here's a quick video explaining the rise in demand for AI engineers and trends in an AI engineer's salary worldwide.

An AI Engineer is a professional skilled in developing, programming, and implementing artificial intelligence (AI) systems and applications. Their expertise lies in utilizing algorithms, data sets, and machine learning (ML) principles to create intelligent systems performing tasks that typically require human intelligence. These tasks may include problem-solving, decision-making, natural language processing, and understanding human speech.

AI Engineers work across various stages of AI project development, from conceptualizing and designing AI models to deploying and maintaining these systems in production environments. Their responsibilities often encompass:

AI Engineers typically have a strong foundation in computer science, mathematics, and statistics, with specialized knowledge in machine learning, deep learning, natural language processing, and computer vision. They must also be proficient in programming languages commonly used in AI, such as Python, and tools and frameworks like TensorFlow, PyTorch, and Keras.

Due to the interdisciplinary nature of AI, engineers often collaborate with data scientists, software engineers, and domain experts to develop solutions tailored to specific business needs or research objectives. The role requires continuous learning to keep up with the rapidly evolving field of artificial intelligence.

Before getting to the question at hand, we need to know the top AI engineer job roles. Machine Learning (ML) Engineers, Data Scientists, Data Analysts, Computer Vision Engineers, Business Intelligence Developers, and Algorithm Engineers are just some of the many different positions that come under the umbrella of AI engineering. Each of these positions entails a different job profile, but, generally speaking, most AI engineers deal with designing and creating AI models. Everything from maintenance to performance supervision of the model is the responsibility of the AI engineer.

Most AI engineers come from a computer science background and have strong programming skills, which is a non-negotiable part of an AI engineer's position. Proficiency in Python and Object-Oriented Programming is highly desirable. But for an AI engineer, what is even more important than programming languages is programming aptitude. Since the whole point of an AI system is to work without human supervision, AI algorithms are very different from traditional code. So, the AI engineer must be able to design algorithms that are adaptable and capable of evolving.

Other than programming, an AI engineer needs to be conversant in an assortment of disciplines like robotics, physics, and mathematics. Mathematical knowledge is especially crucial as linear algebra and statistics play a vital role in designing AI models.

Read More: Gaurav Tyagi's love for learning inspired him to upskill with our AI For Decision Making: Business Strategies And Applications. Read about his journey and his experience with our course in his Simplilearn AI Program Review.

At the moment, AI engineering is one of the most lucrative career paths in the world. The AI job market has been growing at a phenomenal rate for some time now. The entry-level annual average AI engineer salary in India is around 10 lakhs, which is significantly higher than the average salary of any other engineering graduate. At high-level positions, the AI engineer salary can be as high as 50 lakhs.

AI engineers earn an average salary of well over $100,000 annually. According to Glassdoor, the average national salary is over $110,000, and the high end of the range is around $150,000.

However, you must note that these figures can vary significantly based on several factors like:

Companies Hiring for Artificial Intelligence Engineers:

Companies and startups hiring in AI right now include IBM, Fractal.ai, JPMorgan, Intel, Oracle, and Microsoft, among others.

City (India)    Average Salary (Annual)
Bangalore       12,00,000
Hyderabad       10,00,000
Mumbai          15,00,000
Chennai         8,00,000
Delhi           12,00,000

The salary for AI professionals in India can vary based on a variety of factors, including experience, job role, industry, and location. However, here's an estimate of the AI salary based on experience in India:

It's important to note that these figures are just estimates and can vary based on individual circumstances. Additionally, the industry and location can also play a role in determining AI salaries, with industries such as finance, healthcare, and technology typically paying higher salaries and cities such as Bangalore, Mumbai, and Delhi generally paying higher salaries than other cities in India.

If you're interested in pursuing a career in Artificial Intelligence (AI), here are some steps that can help you get started:

By following these steps, you can build a successful career in AI and become a valuable contributor to the field.

The top 7 countries with the maximum opportunities for Artificial Intelligence (AI) Professionals are:

There are various positions that an AI engineer can take up. An AI engineer's salary depends on the market demand for his/her job profile. Presently, ML engineers are in greater demand and hence bag a relatively higher package than other AI engineers. Similarly, the greater the experience in artificial intelligence, the higher the salary companies will offer. Although you can become an AI engineer without a Master's degree, it is imperative that you keep updating and growing your skillset to remain competitive in the ever-evolving world of AI engineering.

There are a number of exciting and in-demand jobs in the field of artificial intelligence (AI). Here are some of the top AI jobs that you may want to consider:

As a machine learning engineer, you will be responsible for developing and implementing algorithms that enable computers to learn from data. This includes working with large data sets, designing and testing machine learning models, and tuning algorithms for efficient execution.

Data scientists use their expertise in statistics, mathematics, and computer science to analyze complex data sets. They work with organizations to gain insights that can be used to improve decision-making.

As an AI researcher, you will be responsible for investigating and developing new artificial intelligence algorithms and applications. This includes conducting research, writing papers, and presenting your findings at conferences.

Software engineers develop the software that enables computers to function. This includes creating algorithms, testing code, and debugging programs.

Systems engineers design and oversee the implementation of complex systems. This includes planning and coordinating system development, ensuring compatibility between components, and troubleshooting issues.

Hardware engineers design and oversee the manufacture of computer hardware components. This includes circuit boards, processors, and memory devices.

Network engineers design and implement computer networks. This includes configuring networking equipment, developing network architectures, and troubleshooting network problems.

Database administrators maintain databases and ensure that data is stored securely and efficiently. This includes designing database structures, implementing security measures, and backing up data.

Information security analysts plan and implement security measures to protect computer networks and systems. This includes researching security threats, assessing risks, and developing countermeasures.

User experience designers create user interfaces that are both effective and efficient. This includes developing navigation schemes, designing graphical elements, and testing prototypes.

These are just a few of the many exciting and in-demand jobs in the field of artificial intelligence. With the right skills and experience, you can find a position that matches your interests and abilities.

Just as AI is transforming the business landscape, it is also opening up new opportunities in the recruiting sphere. Here are some of the top companies and recruiters who are hiring for AI roles:

These are just some of the top companies and recruiters who are hiring for AI roles. If you have the right skills and experience, don't hesitate to apply!

There are a few key things you can do to help boost your AI salary. First, focus on acquiring in-demand skills. One of the best ways to do this is to enroll in a top-rated certification program. Second, keep up with the latest industry trends and developments. Finally, consider pursuing management or leadership roles within your organization. By taking these steps, you can position yourself for success and earn a higher salary in the AI field.

Supercharge your career in AI and ML with Simplilearn's comprehensive courses. Gain the skills and knowledge to transform industries and unleash your true potential. Enroll now and unlock limitless possibilities!

Even as you read this article, the demand for AI is booming across the globe. AI engineer salaries will keep rising as industries like tech, financial services, and medical research turn to artificial intelligence. As more global brands like Google and Nvidia dive deeper into Artificial Intelligence (AI), the demand and the salaries for AI engineers will only go upwards in 2024 and the decades to follow. Even government agencies in many developed and developing nations will open up AI engineer positions as they realize the enormous impact AI can have on the defense and governance sector.

Looking at the current pandemic scenario, job hunting may be better left until the dawn of next year. The time you have right now will be far better utilized in upgrading your AI repertoire.

Unlike most other fields, the AI of tomorrow will look nothing like the AI of today. It is evolving at a breathtaking speed, and to ensure your Artificial Intelligence (AI) skills stay relevant to current market needs, you need to keep upgrading them. If you wish to get a step closer to these lucrative salaries, sharpen your AI skills with the world-class Artificial Intelligence Engineer program, and, before you know it, you will be standing in the world of AI engineers!

The salary of an AI Engineer in India can range from 8 lakhs to 50 lakhs annually.

The starting salary for an AI Engineer in India can be around 8 lakhs annually.

50 lakhs is the highest salary for an AI Engineer in India.

As experience and position increase, the salary also increases.

IT is one of the highest-paying industries for AI Engineers.

Popular skills for AI Engineers to have are programming languages, data engineering, exploratory data analysis, deploying, modelling, and security.

The average Artificial Intelligence Engineer salary in the US is around $100k annually.

Top 5 Artificial Intelligence Jobs in the US are Machine Learning Engineer, Data Scientist, Business Intelligence Developer, Research Scientist, and Big Data Engineer/Architect.

The lowest salary for an AI Engineer in the US is around $100k annually.

The highest salary can go from $150k to over $200k annually.

Follow this link:

AI Engineer Salary: The Lucrative World of AI Engineering - Simplilearn

NVIDIA and HP Speed Up Data Science and AI on PCs – Analytics Insight

NVIDIA and HP Inc. have announced the integration of NVIDIA CUDA-X data processing libraries with HP AI workstation solutions. This will accelerate data preparation and processing for generative AI development. CUDA-X libraries, built on the NVIDIA CUDA computing platform, enhance data processing across diverse data types such as tables, text, images, and video. The NVIDIA RAPIDS cuDF library significantly accelerates the work of nearly 10 million data scientists who rely on pandas software. By leveraging an NVIDIA RTX 6000 Ada Generation GPU instead of a CPU-only system, performance gains of up to 110x are achieved, all without requiring any code modifications.

RAPIDS cuDF and other NVIDIA software will be offered as part of Z by HP AI Studio on HP AI workstations, offering a full-stack development solution that accelerates data science workflows. "Pandas is the essential tool for millions of data scientists processing and preparing data for generative AI," said Jensen Huang, NVIDIA's founder and CEO. "Accelerating Pandas with no code modifications will be a significant step forward. Data scientists can handle data in minutes rather than hours and use orders of magnitude more data to train generative artificial intelligence models."

"Data science sets the groundwork for AI, and developers require quick access to software and systems to fuel this critical work," said Enrique Lores, president and CEO of HP Inc. "With the integration of NVIDIA AI software and accelerated GPU compute, HP AI workstations provide a powerful solution for our customers."

Pandas has a robust data format called DataFrames, which allows developers to quickly edit, clean, and analyze tabular data. The NVIDIA RAPIDS cuDF package speeds up pandas, allowing it to operate on GPUs with no code modifications instead of CPUs, which can slow workloads as data size increases. RAPIDS cuDF works with third-party libraries and combines GPU and CPU operations, allowing data scientists to design, test, and execute models in production effortlessly.
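As a rough illustration of what "no code modifications" means in practice, this is how cuDF's pandas accelerator mode is typically enabled on a machine with a supported NVIDIA GPU and RAPIDS installed; the file and column names below are hypothetical examples, not anything referenced by the announcement.

```python
# Enable the cuDF pandas accelerator before importing pandas.
import cudf.pandas
cudf.pandas.install()   # patches pandas so supported operations run on the GPU

import pandas as pd     # existing pandas code continues to work unchanged

df = pd.read_csv("sensor_readings.csv")             # hypothetical file
summary = df.groupby("sensor_id")["value"].mean()   # runs on the GPU where supported
print(summary.head())
```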

As datasets expand, RTX 6000 Ada Generation GPUs with 48GB of RAM per GPU can handle massive data science and AI tasks on Z by HP workstations. With up to four RTX 6000 GPUs, the HP Z8 Fury is one of the world's most powerful workstations for AI development. HP and NVIDIA's strong partnership enables data scientists to speed development by working on local computers capable of processing huge generative AI workloads.

NVIDIA RAPIDS cuDF, which greatly accelerates pandas operations (almost 150 times quicker), now works smoothly with HP AI workstation systems. Users may use NVIDIA RTX and GeForce RTX GPUs to process data. Furthermore, HP AI Studio will include cuDF later this year, improving efficiency and performance.


More here:

NVIDIA and HP Speed Up Data Science and AI on PCs - Analytics Insight

Structure and Relationships: Graph Neural Networks and a Pytorch Implementation – Towards Data Science

Let's implement a regression example where the aim is to train a network to predict the value of a node given the values of all other nodes, i.e. each node has a single feature (which is a scalar value). The aim of this example is to leverage the inherent relational information encoded in the graph to accurately predict numerical values for each node. The key thing to note is that we input the numerical value for all nodes except the target node (we mask the target node's value with 0) and then predict the target node's value. For each data point, we repeat the process for all nodes. Perhaps this might come across as a bizarre task, but let's see if we can predict the expected value of any node given the values of the other nodes. The data used is simulation data corresponding to a series of sensors from industry, and the graph structure I have chosen in the example below is based on the actual process structure. I have provided comments in the code to make it easy to follow. You can find a copy of the dataset here (note: this is my own data, generated from simulations).

This code and training procedure are far from optimised, but the aim is to illustrate the implementation of GNNs and build an intuition for how they work. One issue with the current approach, which should definitely not be copied beyond learning purposes, is masking a node's feature value and predicting it from its neighbours' features: you would have to loop over each node, which is not very efficient. A much better way is to stop the model from including a node's own features in the aggregation step, so you wouldn't need to handle one node at a time. But I thought it is easier to build intuition for the model with the current method. :)

Preprocessing Data

Import the necessary libraries and the sensor data from the CSV file, and normalise all data into the range of 0 to 1.
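A minimal sketch of this step, assuming the sensor readings live in a CSV file (here called sensors.csv, a hypothetical name) with one column per sensor:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Load the sensor simulation data; the file name and layout are assumptions.
df = pd.read_csv("sensors.csv")

# Scale every sensor column into the range [0, 1].
scaler = MinMaxScaler()
df_scaled = pd.DataFrame(scaler.fit_transform(df), columns=df.columns)
```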

Define the connectivity (edge index) between nodes in the graph using a PyTorch tensor, i.e. this provides the system's graph topology.
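For example, an edge index in PyTorch Geometric's COO format might look like the following; the actual connections in the article come from the plant's process structure, so the node pairs below are placeholders:

```python
import torch

# Each column is one directed edge (source -> target); placeholder connections.
edge_index = torch.tensor(
    [[0, 1, 1, 2, 2, 3],   # source nodes
     [1, 0, 2, 1, 3, 2]],  # target nodes
    dtype=torch.long,
)
```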

The data imported from the CSV has a tabular structure, but to use it in GNNs it must be transformed into a graph structure. Each row of data (one observation) is represented as one graph, so we iterate through each row to create a graph representation of the data.
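A sketch of that conversion using PyTorch Geometric's Data object; storing the unmasked values in y as the regression target is an assumption made here to support the later training step:

```python
from torch_geometric.data import Data

graphs = []
for _, row in df_scaled.iterrows():
    # One scalar feature per node, shape [num_nodes, 1].
    x = torch.tensor(row.values, dtype=torch.float).unsqueeze(1)
    graphs.append(Data(x=x, edge_index=edge_index, y=x.clone()))
```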

A mask is created for each node/sensor to indicate the presence (1) or absence (0) of data, allowing for flexibility in handling missing data. In most systems, there may be items with no data available, hence the need for flexibility in handling missing data. Finally, split the data into training and testing sets.
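A sketch of the masking and the split, assuming missing readings appear as NaN in the loaded data; the split ratio is an arbitrary choice:

```python
from sklearn.model_selection import train_test_split

for g in graphs:
    # 1 where a sensor reading is present, 0 where it is missing (NaN).
    g.mask = (~torch.isnan(g.x)).float()
    g.x = torch.nan_to_num(g.x)   # treat missing readings as 0
    g.y = torch.nan_to_num(g.y)

train_graphs, test_graphs = train_test_split(graphs, test_size=0.2, random_state=42)
```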

Graph Visualisation

The graph structure created above using the edge indices can be visualised using networkx.
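For instance, one graph can be converted and drawn roughly like this; the layout and styling are arbitrary choices:

```python
import networkx as nx
import matplotlib.pyplot as plt
from torch_geometric.utils import to_networkx

# Convert a single PyTorch Geometric graph to networkx and draw it.
g_nx = to_networkx(graphs[0], to_undirected=True)
nx.draw(g_nx, with_labels=True, node_color="lightblue")
plt.show()
```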

Model Definition

Let's define the model. The model incorporates two GAT convolutional layers. The first layer transforms the node features into an 8-dimensional space, and the second GAT layer produces a further 8-dimensional representation.

GNNs are highly susceptible to overfitting, so regularisation (dropout) is applied after each GAT layer with a user-defined probability to prevent overfitting. The dropout layer essentially randomly zeroes some of the elements of the input tensor during training.

The GAT convolution layer outputs are passed through a fully connected (linear) layer to map the 8-dimensional output to the final node feature, which in this case is a scalar value per node.

Masking the value of the target node: as mentioned earlier, the aim of this task is to regress the value of the target node based on the values of its neighbours. This is the reason behind masking/replacing the target node's value with zero.
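Putting the pieces from the last few paragraphs together, here is a minimal sketch of such a model in PyTorch Geometric. The layer sizes, dropout probability, and placing the target-node masking inside forward() are assumptions based on the description above, not the author's exact code.

```python
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GATConv

class GNNModel(nn.Module):
    def __init__(self, num_node_features=1, hidden_dim=8, dropout=0.2):
        super().__init__()
        self.gat1 = GATConv(num_node_features, hidden_dim)  # 1 -> 8 dims
        self.gat2 = GATConv(hidden_dim, hidden_dim)         # 8 -> 8 dims
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(hidden_dim, 1)                  # scalar output per node

    def forward(self, x, edge_index, target_node):
        # Mask the target node's value so it is predicted only from its neighbours.
        x = x.clone()
        x[target_node] = 0.0
        x = F.elu(self.gat1(x, edge_index))
        x = self.dropout(x)
        x = F.elu(self.gat2(x, edge_index))
        x = self.dropout(x)
        return self.fc(x)   # shape [num_nodes, 1]
```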

Training the model

Initialise the model and define the optimiser, loss function, and hyperparameters, including learning rate, weight decay (for regularisation), batch size, and number of epochs.
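A minimal initialisation sketch consistent with the description above; the specific hyperparameter values are placeholders, not the ones used in the article.

```python
model = GNNModel(num_node_features=1, hidden_dim=8, dropout=0.2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
loss_fn = nn.MSELoss()
batch_size, num_epochs = 8, 50   # placeholder hyperparameter values
```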

The training process is fairly standard: each graph (one data point) is passed through the forward pass of the model, iterating over each node and predicting the target node. The loss from the prediction is accumulated over the defined batch size before updating the GNN through backpropagation.
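A sketch of such a training loop, continuing from the initialisation above. Accumulating the loss over batch_size node predictions before each optimiser step is one plausible reading of the description, not necessarily the author's exact procedure.

```python
model.train()
for epoch in range(num_epochs):
    optimizer.zero_grad()
    batch_loss, seen = 0.0, 0
    for graph in train_graphs:
        for node in range(graph.x.size(0)):             # one target node at a time
            out = model(graph.x, graph.edge_index, node)
            loss = loss_fn(out[node], graph.y[node])    # compare only the masked node
            batch_loss = batch_loss + loss
            seen += 1
            if seen % batch_size == 0:                  # update after each mini-batch
                (batch_loss / batch_size).backward()
                optimizer.step()
                optimizer.zero_grad()
                batch_loss = 0.0
```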

Testing the trained model

Using the test dataset, pass each graph through the forward pass of the trained model and predict each node's value based on its neighbours' values.
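A matching evaluation sketch under the same assumptions as the training loop:

```python
model.eval()
predictions, actuals = [], []
with torch.no_grad():
    for graph in test_graphs:
        for node in range(graph.x.size(0)):
            out = model(graph.x, graph.edge_index, node)
            predictions.append(out[node].item())
            actuals.append(graph.y[node].item())
```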

Visualising the test results

Using iplot, we can visualise the predicted values of the nodes against the ground-truth values.
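A minimal plotting sketch using plotly's iplot over the predictions and actuals collected above; the trace names and layout are illustrative choices.

```python
import plotly.graph_objs as go
from plotly.offline import iplot

trace_pred = go.Scatter(y=predictions, mode="lines", name="Predicted")
trace_true = go.Scatter(y=actuals, mode="lines", name="Ground truth")
fig = go.Figure(data=[trace_pred, trace_true],
                layout=go.Layout(title="Predicted vs ground-truth node values"))
iplot(fig)
```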

Despite a lack of fine-tuning of the model architecture or hyperparameters, it has actually done a decent job; we could tune the model further to get improved accuracy.

This brings us to the end of this article. GNNs are relatively newer than other branches of machine learning, and it will be very exciting to see the developments of this field, as well as its application to different problems. Finally, thank you for taking the time to read this article; I hope you found it useful in your understanding of GNNs or their mathematical background.

Unless otherwise noted, all images are by the author

Read more here:

Structure and Relationships: Graph Neural Networks and a Pytorch Implementation - Towards Data Science

Seeing Our Reflection in LLMs. When LLMs give us outputs that reveal | by Stephanie Kirmer | Mar, 2024 – Towards Data Science

Photo by Vince Fleming on Unsplash

By now, I'm sure most of you have heard the news about Google's new LLM*, Gemini, generating pictures of racially diverse people in Nazi uniforms. This little news blip reminded me of something that I've been meaning to discuss, which is when models have blind spots, so we apply expert rules to the predictions they generate to avoid returning something wildly outlandish to the user.

This sort of thing is not that uncommon in machine learning, in my experience, especially when you have flawed or limited training data. A good example of this that I remember from my own work was predicting when a package was going to be delivered to a business office. Mathematically, our model would be very good at estimating exactly when the package would get physically near the office, but sometimes, truck drivers arrive at destinations late at night and then rest in their truck or in a hotel until morning. Why? Because no one's in the office to receive/sign for the package outside of business hours.

Teaching a model about the idea of business hours can be very difficult, and the much easier solution was just to say, "If the model says the delivery will arrive outside business hours, add enough time to the prediction that it changes to the next hour the office is listed as open." Simple! It solves the problem and it reflects the actual circumstances on the ground. We're just giving the model a little boost to help its results work better.

However, this does cause some issues. For one thing, now we have two different model predictions to manage. We can't just throw away the original model prediction, because that's what we use for model performance monitoring and metrics. You can't assess a model on predictions after humans got their paws in there; that's not mathematically sound. But to get a clear sense of the real-world model impact, you do want to look at the post-rule prediction, because that's what the customer actually experienced/saw in your application. In ML, we're used to a very simple framing, where every time you run a model you get one result or set of results, and that's that, but when you start tweaking the results before you let them go, then you need to think at a different scale.

I kind of suspect that this is a form of what's going on with LLMs like Gemini. However, instead of a post-prediction rule, it appears that the smart money says Gemini and other models are applying secret prompt augmentations to try and change the results the LLMs produce.

In essence, without this nudging, the model will produce results that are reflective of the content it has been trained on. That is to say, the content produced by real people. Our social media posts, our history books, our museum paintings, our popular songs, our Hollywood movies, etc. The model takes in all that stuff, and it learns the underlying patterns in it, whether they are things we're proud of or not. A model given all the media available in our contemporary society is going to get a whole lot of exposure to racism, sexism, and myriad other forms of discrimination and inequality, to say nothing of violence, war, and other horrors. While the model is learning what people look like, and how they sound, and what they say, and how they move, it's learning the warts-and-all version.


This means that if you ask the underlying model to show you a doctor, it's going to probably be a white guy in a lab coat. This isn't just random; it's because in our modern society white men have disproportionate access to high-status professions like being doctors, because they on average have access to more and better education, financial resources, mentorship, social privilege, and so on. The model is reflecting back at us an image that may make us uncomfortable because we don't like to think about that reality.

The obvious argument is, "Well, we don't want the model to reinforce the biases our society already has; we want it to improve representation of underrepresented populations." I sympathize with this argument, quite a lot, and I care about representation in our media. However, there's a problem.

It's very unlikely that applying these tweaks is going to be a sustainable solution. Recall back to the story I started with about Gemini. It's like playing whac-a-mole, because the work never stops: now we've got people of color being shown in Nazi uniforms, and this is understandably deeply offensive to lots of folks. So, maybe where we started by randomly applying "as a black person" or "as an indigenous person" to our prompts, we have to add something more to make it exclude cases where it's inappropriate, but how do you phrase that, in a way an LLM can understand? We probably have to go back to the beginning, and think about how the original fix works, and revisit the whole approach. In the best case, applying a tweak like this fixes one narrow issue with outputs, while potentially creating more.

Let's play out another very real example. What if we add to the prompt, "Never use explicit or profane language in your replies, including [list of bad words here]"? Maybe that works for a lot of cases, and the model will refuse to say bad words that a 13-year-old boy is requesting to be funny. But sooner or later, this has unexpected additional side effects. What about if someone's looking for the history of Sussex, England? Alternately, someone's going to come up with a bad word you left out of the list, so that's going to be constant work to maintain. What about bad words in other languages? Who judges what goes on the list? I have a headache just thinking about it.

This is just two examples, and I'm sure you can think of more such scenarios. It's like putting band-aid patches on a leaky pipe, and every time you patch one spot, another leak springs up.

So, what is it we actually want from LLMs? Do we want them to generate a highly realistic mirror image of what human beings are actually like and how our human society actually looks from the perspective of our media? Or do we want a sanitized version that cleans up the edges?

Honestly, I think we probably need something in the middle, and we have to continue to renegotiate the boundaries, even though it's hard. We don't want LLMs to reflect the real horrors and sewers of violence, hate, and more that human society contains; that is a part of our world that should not be amplified even slightly. Zero content moderation is not the answer. Fortunately, this motivation aligns with the desires of large corporate entities running these models to be popular with the public and make lots of money.


However, I do want to continue to make a gentle case for the fact that we can also learn something from this dilemma in the world of LLMs. Instead of simply being offended and blaming the technology when a model generates a bunch of pictures of a white male doctor, we should pause to understand why that's what we received from the model. And then we should debate thoughtfully about whether the response from the model should be allowed, and make a decision that is founded in our values and principles, and try to carry it out to the best of our ability.

As I've said before, an LLM isn't an alien from another universe; it's us. It's trained on the things we wrote/said/filmed/recorded/did. If we want our model to show us doctors of various sexes, genders, races, etc., we need to make a society that enables all those different kinds of people to have access to that profession and the education it requires. If we're worrying about how the model mirrors us, but not taking to heart the fact that it's us that needs to be better, not just the model, then we're missing the point.


View original post here:

Seeing Our Reflection in LLMs. When LLMs give us outputs that reveal | by Stephanie Kirmer | Mar, 2024 - Towards Data Science

How Google Used Your Data to Improve their Music AI – Towards Data Science

MusicLM fine-tuned on user preferences

MusicLM, Google's flagship text-to-music AI, was originally published in early 2023. Even in its basic version, it represented a major breakthrough and caught the music industry by surprise. However, a few weeks ago, MusicLM received a significant update. Here's a side-by-side comparison for two selected prompts:

Prompt: Dance music with a melodic synth line and arpeggiation:

Prompt: a nostalgic tune played by accordion band

This increase in quality can be attributed to a new paper by Google Research titled "MusicRL: Aligning Music Generation to Human Preferences." Apparently, this upgrade was considered so significant that they decided to rename the model. However, under the hood, MusicRL is identical to MusicLM in its key architecture. The only difference: finetuning.

When building an AI model from scratch, it starts with zero knowledge and essentially does random guessing. The model then extracts useful patterns through training on data and starts displaying increasingly intelligent behavior as training progresses. One downside to this approach is that training from scratch requires a lot of data. Finetuning is the idea that an existing model is used and adapted to a new task, or adapted to approach the same task differently. Because the model already has learned the most important patterns, much less data is required.

For example, a powerful open-source LLM like Mistral7B can be trained from scratch by anyone, in principle. However, the amount of data required to produce even remotely useful outputs is gigantic. Instead, companies use the existing Mistral7B model and feed it a small amount of proprietary data to make it solve new tasks, whether that is writing SQL queries or classifying emails.

The key takeaway is that finetuning does not change the fundamental structure of the model. It only adapts its internal logic slightly to perform better on a specific task. Now, let's use this knowledge to understand how Google finetuned MusicLM on user data.

A few months after the MusicLM paper, a public demo was released as part of Google's AI Test Kitchen. There, users could experiment with the text-to-music model for free. However, you might know the saying: if the product is free, YOU are the product. Unsurprisingly, Google is no exception to this rule. When using MusicLM's public demo, you were occasionally confronted with two generated outputs and asked to state which one you prefer. Through this method, Google was able to gather 300,000 user preferences within a couple of months.

As you can see from the screenshot, users were not explicitly informed that their preferences would be used for machine learning. While that may feel unfair, it is important to note that many of our actions on the internet are being used for ML training, whether it is our Google search history, our Instagram likes, or our private Spotify playlists. In comparison to these rather personal and sensitive cases, music preferences on the MusicLM playground seem negligible.

It is good to be aware that user data collection for machine learning is happening all the time and usually without explicit consent. If you are on LinkedIn, you might have been invited to contribute to so-called collaborative articles. Essentially, users are invited to provide tips on questions in their domain of expertise. Here is an example of a collaborative article on how to write a successful folk song (something I didn't know I needed).

Users are incentivized to contribute, earning them a Top Voice badge on the platform. However, my impression is that no one actually reads these articles. This leads me to believe that these thousands of question-answer pairs are being used by Microsoft (owner of LinkedIn) to train an expert AI system on these data. If my suspicion is accurate, I would find this example much more problematic than Google asking users for their favorite track.

But back to MusicLM!

The next question is how Google was able to use this massive collection of user preferences to finetune MusicLM. The secret lies in a technique called Reinforcement Learning from Human Feedback (RLHF) which was one of the key breakthroughs of ChatGPT back in 2022. In RLHF, human preferences are used to train an AI model that learns to imitate human preference decisions, resulting in an artificial human rater. Once this so-called reward model is trained, it can take in any two tracks and predict which one would most likely be preferred by human raters.

With the reward model set up, MusicLM could be finetuned to maximize the predicted user preference of its outputs. This means that the text-to-music model generated thousands of tracks, each track receiving a rating from the reward model. Through the iterative adaptation of the model weights, MusicLM learned to generate music that the artificial human rater likes.
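As a rough, generic illustration (not the actual MusicRL implementation, which the paper does not spell out at this level), a pairwise reward model of this kind is often trained with a Bradley-Terry style loss that pushes the score of the human-preferred track above the score of the rejected one:

```python
import torch
import torch.nn.functional as F

def reward_model_loss(score_preferred: torch.Tensor,
                      score_rejected: torch.Tensor) -> torch.Tensor:
    # Pairwise preference loss: maximise the margin between the two scores.
    return -F.logsigmoid(score_preferred - score_rejected).mean()

# Hypothetical usage: scores produced by the reward model for two tracks
# generated from the same prompt, where raters preferred the first track.
loss = reward_model_loss(torch.tensor([1.8]), torch.tensor([0.4]))
```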

In addition to the finetuning on user preferences, MusicLM was also finetuned concerning two other criteria: 1. Prompt Adherence: MuLan, Google's proprietary text-to-audio embedding model, was used to calculate the similarity between the user prompt and the generated audio. During finetuning, this adherence score was maximized. 2. Audio Quality: Google trained another reward model on user data to evaluate the subjective audio quality of its generated outputs. These user data seem to have been collected in separate surveys, not in MusicLM's public demo.

The new, finetuned model seems to reliably outperform the old MusicLM, listen to the samples provided on the demo page. Of course, a selected public demo can be deceiving, as the authors are incentivized to showcase examples that make their new model look as good as possible. Hopefully, we will get to test out MusicRL in a public playground, soon.

However, the paper also provides a quantitative assessment of subjective quality. For this, Google conducted a study and asked users to compare two tracks generated for the same prompt, giving each track a score from 1 to 5. Using this metric with the fancy-sounding name Mean Opinion Score (MOS), we can compare not only the number of direct comparison wins for each model, but also calculate the average rater score (MOS).

Here, MusicLM represents the original MusicLM model. MusicRL-R was only finetuned for audio quality and prompt adherence. MusicRL-U was finetuned solely on human feedback (the reward model). Finally, MusicRL-RU was finetuned on all three objectives. Unsurprisingly, MusicRL-RU beats all other models in direct comparison as well as on the average ratings.

The paper also reports that MusicRL-RU, the fully finetuned model, beat MusicLM in 87% of direct comparisons. The importance of RLHF can be shown by analyzing the direct comparisons between MusicRL-R and MusicRL-RU. Here, the latter had a 66% win rate, reliably outperforming its competitor.

Although the difference in output quality is noticeable, qualitatively as well as quantitatively, the new MusicLM is still quite far from human-level outputs in most cases. Even on the public demo page, many generated outputs sound rhythmically odd, fail to capture key elements of the prompt, or suffer from unnatural-sounding instruments.

In my opinion, this paper is still significant, as it is the first attempt at using RLHF for music generation. RLHF has been used extensively in text generation for more than one year. But why has this taken so long? I suspect that collecting user feedback and finetuning the model is quite costly. Google likely released the public MusicLM demo with the primary intention of collecting user feedback. This was a smart move and gave them an edge over Meta, which has equally capable models, but no open platform to collect user data on.

All in all, Google has pushed itself ahead of the competition by leveraging proven finetuning methods borrowed from ChatGPT. While even with RLHF, the new MusicLM has still not reached human-level quality, Google can now maintain and update its reward model, improving future generations of text-to-music models with the same finetuning procedure.

It will be interesting to see if and when other competitors like Meta or Stability AI will be catching up. For us as users, all of this is just great news! We get free public demos and more capable models.

For musicians, the pace of the current developments may feel a little threatening, and for good reason. I expect to see human-level text-to-music generation in the next 1-3 years. By that, I mean text-to-music AI that is at least as capable at producing music as ChatGPT was at writing texts when it was released. Musicians must learn about AI and how it can already support them in their everyday work. As the music industry is being disrupted once again, curiosity and flexibility will be the primary key to success.

Read the rest here:

How Google Used Your Data to Improve their Music AI - Towards Data Science

David Mongeau to step down, interim director for country’s only HSI data science school announced – The University of Texas at San Antonio

A nationally recognized leader in the data science and artificial intelligence community, Mongeau brought to UTSA a distinguished record in leading research institutes and training programs, as well as in developing partnerships across government, industry, academia and the philanthropic community.

Under his leadership, the School of Data Science has recorded numerous achievements, including receiving $1.2 million in gift funding for data science, AI and machine learning student training and research programs. In addition to the undergraduate and graduate degree and certificate programs comprising the School of Data Science, school leaders now are developing a new certificate program in data engineering.

In partnership with the Association of Computing Machinery at UTSA and the National Security Agency, the School of Data Science in 2022 launched the annual Rowdy Datathon competition.

In April 2023, the school hosted the inaugural UTSA Draper Data Science Business Plan Competition, which highlights data science applications and student entrepreneurship; the second annual competition will be held at San Pedro I later this spring.

Also in 2023, the school hosted its inaugural Los Datos Conference. The school also now serves as administrative host to the university's annual RowdyHacks competition; more than 500 students from across Texas participated in the 9th annual RowdyHacks at San Pedro I last weekend.

Mongeau has been driven to increase the reach and reputation of the School of Data Science from and back to San Antonio. In October 2023, the School of Data Science hosted the annual meeting of the Academic Data Science Alliance, bringing together more than 200 data science practitioners, researchers and educators from across the country to UTSA. The school also invested nearly $400,000 to create opportunities for UTSA students and faculty to pursue projects and participate in national data science and AI experiences at, for example, University of Chicago, University of Michigan, University of Washington, and the U.S. Census Bureau.

Through a collaboration with San Antonio-based start-up Skew the Script, the school has reached 20,000 high school teachers and 400,000 high school students with open-source training in statistics and math, which are core to success in data science and AI.

"I consider myself so fortunate to have been part of the creation of the School of Data Science at UTSA," said Mongeau. "I thank the school's dedicated staff and core faculty for their commitment to the school, which is having an enduring impact on our students, the next generation of diverse data scientists who have embraced the school's vision to make our world more equitable, informed and secure. These Roadrunners are destined to become industry leaders and continue to advance the frontiers of data science and AI."

Immediately prior to joining UTSA, Mongeau served as executive director of the Berkeley Institute for Data Science at the University of California, Berkeley. As executive director, he set the strategic direction for the institute, expanded industry and foundation engagement, and applied data science and AI in health care, climate change, and criminal justice.

Notably, he also initiated three data science fellowship programs and forged partnerships to enhance opportunities for legal immigrants and refugees in data science careers.

Visit link:

David Mongeau to step down, interim director for country's only HSI data science school announced - The University of Texas at San Antonio

4 Emerging Strategies to Advance Big Data Analytics in Healthcare – HealthITAnalytics.com

February 28, 2024 - While the potential for big data analytics in healthcare has been a hot topic in recent years, the possible risks of using these tools have received just as much attention.

Big data analytics technologies have demonstrated their promise in enhancing multiple areas of care, from medical imaging and chronic disease management to population health and precision medicine. These algorithms could increase the efficiency of care delivery, reduce administrative burdens, and accelerate disease diagnosis.

But despite all the good these tools could achieve, the harm these algorithms could cause is nearly as significant.

Concerns about data access and collection, implicit and explicit bias, and issues with patient and provider trust in analytics technologies have hindered the use of these tools in everyday healthcare delivery.

Healthcare researchers and provider organizations are working to solve these issues, facilitating the use of big data analytics in clinical care for better quality and outcomes.


In this primer, HealthITAnalytics will explore how improving data quality, addressing bias, prioritizing data privacy, and building providers' trust in analytics tools can advance the four types of big data analytics in healthcare.

In healthcare, it's widely understood that the success of big data analytics tools depends on the value of the information used to train them. Algorithms trained on inaccurate, poor-quality data can yield erroneous results, leading to inadequate care delivery.

However, obtaining quality training data is complex and time-intensive, leaving many organizations without the resources to build effective models.

Researchers across the industry are working to overcome this challenge.

In 2019, a team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) developed an automated system to gather more data from images to train machine learning models, synthesizing a massive dataset of distinct training examples.


This approach is beneficial for use cases in which high-quality images are available, but there are too few to develop a robust dataset. The synthesized dataset can be used to improve the training of machine learning models, enabling them to detect anatomical structures in new scans.

This image segmentation approach helps address one of the major data quality issues: insufficient data points.

But what about cases with a wealth of relevant data but varying quality, or challenges synthesizing the data?

In these cases, it's useful to begin by defining and exploring some common healthcare analytics concepts.

Data quality, as the name suggests, is a way to measure the reliability and accuracy of the data. Addressing quality is critical to healthcare data generation, collection, and processing.


If the data collection process yielded a sufficient number of data points but there is a question of quality, stakeholders can look at the data's structure and identify whether converting the structure of the datasets into a common format is appropriate. This is known as data standardization, and it can help ensure that the data are consistent, which is necessary for effective analysis.

Data cleaning (flagging and addressing data abnormalities) and data normalization (the process of organizing data) can take standardization even further.
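As a small illustration of these three ideas, here is a sketch in pandas on a hypothetical patient-vitals table; the column names, unit conversion, and imputation choice are invented for the example, not drawn from the article.

```python
import pandas as pd

# Hypothetical table with mixed temperature units and a missing reading.
df = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "temp": [98.6, 37.2, None],
    "temp_unit": ["F", "C", "F"],
})

# Standardization: convert everything to a single unit (Celsius here).
df["temp_c"] = df.apply(
    lambda r: (r["temp"] - 32) * 5 / 9 if r["temp_unit"] == "F" else r["temp"],
    axis=1,
)

# Cleaning: flag abnormalities such as missing readings, then address them.
df["temp_missing"] = df["temp_c"].isna()
df["temp_c"] = df["temp_c"].fillna(df["temp_c"].median())

# Normalization: rescale the cleaned column onto a common 0-1 range.
df["temp_norm"] = (df["temp_c"] - df["temp_c"].min()) / (df["temp_c"].max() - df["temp_c"].min())
```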

Tools like the United States Core Data for Interoperability (USCDI) and USCDI+ can help in cases where a healthcare organization doesn't have enough high-quality data.

In scenarios with a large amount of data, synthesizing the data for analysis creates another potential hurdle.

As seen throughout the COVID-19 pandemic, when data related to the virus became available globally, healthcare leaders faced the challenge of creating high-quality datasets to help researchers answer vital questions about the virus.

In 2020, the White House Office of Science and Technology Policy issued a call to action for experts to synthesize an artificial intelligence (AI) algorithm-friendly COVID-19 dataset to bolster these efforts.

The dataset represents an extensive machine-readable coronavirus literature collection, including over 29,000 articles at the time of creation, designed to help researchers sift through and analyze the data more quickly.

By promoting collaboration among researchers, healthcare institutions, and other stakeholders, initiatives like this can support the efficient synthesis of large-scale, high-quality datasets.

As healthcare organizations become increasingly reliant on analytics algorithms to help them make care decisions, bias is a major hurdle to the safe and effective deployment of these tools.

Tackling algorithmic bias requires stakeholders to be aware of how biases are introduced and reproduced at every stage of algorithm development and deployment. In many algorithms, bias can be baked in almost immediately if the developers rely on biased data.

The US Department of Health and Human Services (HHS) Office of Minority Health (OMH) indicates that lack of diversity in an algorithm's training data is a significant source of bias. Further, bias can be coded into algorithms based on developers' beliefs or assumptions, including implicit and explicit biases.

If, for example, a developer incorrectly assumes that symptoms of a particular condition are more common or severe in one population than another, the resulting algorithm could be biased and perpetuate health disparities.

Some have suggested that bringing awareness to potential biases can remedy the issue of algorithmic bias, but research suggests that a more robust approach is required. One study published in the Future Healthcare Journal in 2021 demonstrated that while bias training can help individuals recognize biases in themselves and others, it is not an effective debiasing strategy.

The OMH recommends best practices beyond bias training, encouraging developers to work with diverse stakeholders to ensure that algorithms are adequately developed, validated, and reviewed to maximize utility and minimize harm.

In scenarios where diverse training data for algorithms is unavailable, techniques like synthetic data can help minimize potential biases.

In terms of algorithm deployment and monitoring, the OMH suggests that the tools should be implemented gradually and that users should have a way to provide feedback to the developers for future algorithm improvement.

To this end, developers can work with experts and end-users to understand what clinical measures are important to providers, according to researchers from the University of Massachusetts Amherst.

In recent years, healthcare stakeholders have increasingly developed frameworks and best practices to minimize bias in clinical algorithms.

A panel of experts convened by the Agency for Healthcare Research and Quality (AHRQ) and the National Institute on Minority Health and Health Disparities (NIMHD) published a special communications article in the December 2023 issue of JAMA Network Open outlining five principles to address the impact of algorithm bias on racial and ethnic disparities in healthcare.

The framework guides healthcare stakeholders to mitigate and prevent bias at each stage of an algorithm's life cycle by promoting health equity, ensuring algorithm transparency, earning trust by engaging patients and communities, explicitly identifying fairness issues, and establishing accountability for equity and fairness in outcomes from algorithms.

When trained using high-quality data and deployed in settings that will be monitored and adjusted to minimize biases, algorithms can help address disparities in maternal health, preterm births, and social determinants of health (SDOH).

In algorithm development, data privacy and security are high on the list of concerns. Legal, privacy, and cultural obstacles can keep researchers from accessing the large, diverse data sets needed to train analytics technologies.

Over the years, experts have worked to craft approaches that can balance the need for data access against the need to protect patient privacy.

In 2020, a team from the University of Iowa (UI) set out to develop a solution to this problem. With a $1 million grant from the National Science Foundation (NSF), UI researchers created a machine learning platform to train algorithms with data from around the world.

The tool is a decentralized, asynchronous solution called ImagiQ, and it relies on an ecosystem of machine learning models so that institutions can select models that work best for their populations. Using the platform, organizations can upload and share the models, but not patient data, with each other.

The researchers indicated that traditional machine learning methods require a centralized database where patient data can be directly accessed for use in model training, but these approaches are often limited by practical issues like information security, patient privacy, data ownership, and the burden on health systems tasked with creating and maintaining those centralized databases.

ImagiQ helps overcome some of these challenges, but it is not the only framework to do so.

Researchers from the University of Pittsburgh Swanson School of Engineering were awarded $1.7 million from the National Institutes of Health (NIH) in 2022 to advance their efforts to develop a federated learning (FL)-based approach to achieve fairness in AI-assisted medical screening tools.

FL is a privacy-protection method that enables researchers to train AI models across multiple decentralized devices or servers holding local data samples without exchanging them.

The approach is useful for improving model performance without compromising data privacy, as AI trained on one institution's data typically does not generalize well on data from another.
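As a rough illustration of the core idea (not the ImagiQ or University of Pittsburgh implementations), federated averaging combines locally trained model weights into a shared global model without moving the underlying patient records:

```python
import copy
import torch

def federated_average(local_models):
    """Average the parameters of models trained locally at each institution.
    Only model weights are shared; the patient data stays where it was collected."""
    global_state = copy.deepcopy(local_models[0].state_dict())
    for key in global_state:
        global_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in local_models]
        ).mean(dim=0)
    return global_state
```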

However, FL is not a perfect solution, as experts from the University of Southern California (USC) Viterbi School of Engineering pointed out at the 2023 International Workshop on Health Intelligence. They posited that FL brings forth multiple concerns, such as its ability to make predictions based only on what it has learned from its training data and the hurdles presented by missing data and the data harmonization process.

The research team presented a framework for addressing these challenges, but there are other tools healthcare stakeholders can use to prioritize data privacy, such as confidential computing or blockchain. These tools center on making the data largely inaccessible and resistant to tampering by unauthorized parties.

Alternatives that do not require significant investments in cloud computing or blockchain are also available to stakeholders through privacy-enhancing technologies (PETs), three of which are particularly suited to healthcare use cases.

Algorithmic PETs like encryption, differential privacy, and zero-knowledge proofs protect data privacy by altering how the information is represented while ensuring it is usable. Often, this involves modifying the changeability or traceability of healthcare data.
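As a tiny illustration of one algorithmic PET, the Laplace mechanism from differential privacy adds calibrated noise to a query result so that no single patient's presence can be inferred; the cohort count below is a made-up example.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    # Laplace mechanism for a counting query: a count has sensitivity 1, so
    # noise drawn from Laplace(0, 1/epsilon) yields epsilon-differential privacy.
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical usage: release how many patients in a cohort have a condition
# without revealing whether any single patient is in that group.
noisy_count = dp_count(true_count=128, epsilon=0.5)
```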

In contrast, architectural PETs focus on the structure of data or computation environments, rather than how those data are represented, to enable users to exchange information without exchanging any underlying data. Federated learning, secure multi-party computation, and blockchain fall into this PET category.

Augmentation PETs, as the name suggests, augment existing data sources or create fully synthetic ones. This approach can help enhance the availability and utility of data used in healthcare analytics projects. Digital twins and generative adversarial networks are commonly used for this purpose.

But even the most robust data privacy infrastructure cannot compensate for a lack of trust in big data analytics tools.

Just as patients need to trust that analytics algorithms can keep their data safe, providers must trust that these tools can deliver information in a functional, reliable way.

The issue of trustworthy analytics tools has recently taken center stage in conversations around how Americans interact with AI knowingly and unknowingly in their daily lives. Healthcare is one of the industries where advanced technologies present the most significant potential for harm, leading the federal government to begin taking steps to guide the deployment and use of algorithms.

In October 2023, President Joe Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which outlines safety, security, privacy, equity, and other standards for how industry and government should approach AI innovation.

The order's directives are broad, as they are designed to apply to all US industries, but it does lay out some industry-specific directives for those looking at how it will impact healthcare. Primarily, the executive order provides a framework for creating standards, laws, and regulations around AI and establishes a roadmap of subsequent actions that government agencies, like HHS, must take to build such a framework.

However, this process will take months, and more robust regulation of healthcare algorithms could take even longer, leading industry stakeholders to develop their own best practices for using analytics technologies in healthcare.

One such effort is the National Academy of Medicine (NAM) Artificial Intelligence Code of Conduct (AICC), which represents a collaborative effort among healthcare, research, and patient advocacy groups to create a national architecture for responsible AI use in healthcare.

In a 2024 interview with HealthITAnalytics, NAM leadership emphasized that this governance infrastructure is necessary to gain trust and improve healthcare as advanced technologies become more ubiquitous in care settings.

However, governance structure must be paired with education and clinician support to obtain buy-in from providers.

Some of this can start early, as evidenced by recent work from the University of Texas (UT) health system to incorporate AI training into medical school curriculum. Having staff members dedicated to spearheading analytics initiatives, such as a chief analytics officer, is another approach that healthcare organizations can use to make providers feel more comfortable with these tools.

These staff can also work to bolster trust at the enterprise level by focusing on creating a healthcare data culture, gaining provider buy-in from the top down, and having strategies to address concerns about clinician overreliance on analytics technologies.

With healthcare organizations increasingly leveraging big data analytics tools for enhanced insights and streamlined care processes, overcoming data quality, bias, privacy, and security issues and fostering user trust will be critical for successfully using these models in clinical care.

As research evolves around AI, machine learning, and other analytics algorithms, the industry will keep refining these tools for improved patient care.

Follow this link:

4 Emerging Strategies to Advance Big Data Analytics in Healthcare - HealthITAnalytics.com

Computer scientist traces her trajectory from stunt flying to a startup – GeekWire

Computer scientist Cecilia Aragon tells her life story at the Women's Leadership Conference, presented by the Bellevue Chamber. (GeekWire Photo / Alan Boyle)

BELLEVUE, Wash. - Three decades ago, Cecilia Aragon made aviation history as the first Latina to earn a place on the U.S. Unlimited Aerobatic Team. She went on to write a book about it, titled "Flying Free."

Today, she's still flying free, as a professor and data scientist at the University of Washington and as the co-founder of a Seattle startup that aims to commercialize her research.

Aragon recounted her personal journey today during a talk at the Women's Leadership Conference, presented by the Bellevue Chamber. The conference brought nearly 400 attendees to Bellevue's Meydenbauer Center to hear about topics ranging from financial literacy to sports management.

Aragon's aerobatic days began in 1985, when she accepted an invitation from a co-worker to take a ride in his flying club's Piper Cherokee airplane.

"The first thing I thought was, 'I'm the person who's scared of climbing a stepladder. I'm scared of going in an elevator,'" she recalled.

But then she thought of her Chilean-born father. "I heard my father's voice, saying, 'What is stopping you from doing whatever you want?'" she said. She swallowed her fears, climbed into the plane, and was instantly hooked.

"It's so gorgeous to fly out into the water and see the sun glinting up on the water, like a million gold coins," she said. "And when we got down to the ground, I said, 'I want to take flying lessons. I want to be the pilot of my own life.'"

Aragon said she went through three flight instructors, but gradually overcame her fears. "I learned to turn fear into excitement," she said. The excitement reached its peak in 1991 when she was named to the U.S. aerobatic team and went on to win bronze medals at the U.S. national and world aerobatic championships.

That wasn't the only dream that Aragon has turned into reality. After leaving the aerobatic team, she worked as a computer scientist at NASA's Ames Research Center in Silicon Valley, earned her Ph.D. at Berkeley and became a staff scientist at Lawrence Berkeley National Laboratory. Aragon joined UW's faculty in 2010 and is now the director of the university's Human-Centered Data Science Lab.

"I love it," she said. "My students amaze me and excite me every single day."

Aragon's research focuses on how people make sense of vast data sets, using computer algorithms and visualizations. She holds several patents relating to visual representations of travel data, and with the help of UW's CoMotion Labs and Mobility Innovation Center, Aragon and her teammates have turned that data science into a startup called Traffigram.

For the past year, Traffigram's small team has been working in semi-stealth mode to develop software that can analyze multiple travel routes, determine the quickest way to get from Point A to Point B, and present the information in an easy-to-digest format. Aragon is the venture's chief scientist, and her son, Ken Aragon, is co-founder and CEO.

"It's a family business," she told GeekWire. "We've gotten a great response from potential customers so far, and we've raised some money."

So how does creating a startup compare with aerobatic stunt flying?

"I think there are a lot of similarities, because it's very risky," Aragon said. "As they have told me many times, most startup businesses fail. You know, that's just like what they told me with aerobatics, that very few people make the U.S. aerobatic team, and it's probably not going to happen. I said, 'Yeah, but I'm going to enjoy the path I believe in.' So I believe in the mission we have, to make transportation more accessible to everyone."

Originally posted here:

Computer scientist traces her trajectory from stunt flying to a startup - GeekWire

Data Science Market: Unleashing Insights with AI and Machine Learning, Embracing a 31.0% CAGR and to Grow USD … – GlobeNewswire

Covina, Feb. 28, 2024 (GLOBE NEWSWIRE) -- According to a recent research study, the Data Science Market was valued at about USD 80.5 Billion in 2024 and is expected to grow at a CAGR of 31.0% to reach a value of USD 941.8 Billion by 2034.

What is Data Science?

Market Overview:

Data science is a multidisciplinary field that involves extracting insights and knowledge from data using various scientific methods, algorithms, processes, and systems. It combines aspects of statistics, mathematics, computer science, and domain expertise to analyze complex data sets and solve intricate problems.

The primary goal of data science is to extract valuable insights, patterns, trends, and knowledge from structured and unstructured data. This process typically involves:
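In broad terms, that process spans data collection, cleaning and preparation, exploration, modeling, and evaluation. As a loose, hypothetical illustration (not drawn from the report), a minimal Python sketch of such a workflow using pandas and scikit-learn on synthetic data might look like this:

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Data collection (synthetic data standing in for a real source)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(18, 80, size=500),
    "income": rng.normal(50_000, 15_000, size=500),
    "churned": rng.integers(0, 2, size=500),
})

# 2. Cleaning / preparation
df = df.dropna()
df["income"] = df["income"].clip(lower=0)

# 3. Exploration
print(df.describe())

# 4. Modeling
X_train, X_test, y_train, y_test = train_test_split(
    df[["age", "income"]], df["churned"], test_size=0.2, random_state=0
)
model = LogisticRegression().fit(X_train, y_train)

# 5. Evaluation
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

Real projects replace the synthetic data with domain-specific sources and add steps such as feature engineering, deployment, and monitoring, but the overall loop of collecting, preparing, modeling, and evaluating data is the same.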

Get Access to Free Sample Research Report with Latest Industry Insights:

https://www.prophecymarketinsights.com/market_insight/Insight/request-sample/1148

*Note: PMI Sample Report includes,

Top Leading Players in Data Science Market:

Market Dynamics:

Driving Factors:

Restrain Factors:

Emerging Trends and Opportunities in Data Science Market:

Download PDF Brochure:

https://www.prophecymarketinsights.com/market_insight/Insight/request-pdf/1148

Challenges of Data Science Market:

Detailed Segmentation:

Data Science Market, By Type:

Data Science Market, By End-User:

Data Science Market, By Region:

Regional Analysis:

Regional insights highlight the diverse market dynamics, regulatory landscapes, and growth drivers shaping the Data Science Market across different geographic areas. Understanding regional nuances and market trends is essential for stakeholders to capitalize on emerging opportunities and drive market expansion in the Data Science sector.

North America market is estimated to witness the fastest share over the forecast period the adoption of cloud computing services, including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS), has accelerated in North America. Cloud providers such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform offer scalable, cost-effective solutions for data storage, processing, and analytics, driving adoption among enterprises.

Report scope:

By End-User: Banking and Financial Institutions (BFSI), Telecommunication, Transportation and Logistics, Healthcare, and Manufacturing

Europe - UK, Germany, Spain, France, Italy, Russia, Rest of Europe

Asia Pacific - Japan, India, China, South Korea, Australia, Rest of Asia-Pacific

Latin America - Brazil, Mexico, Argentina, Rest of Latin America

Middle East & Africa - South Africa, Saudi Arabia, UAE, Rest of Middle East & Africa

Key highlights of the Data Science Market:

Any query or customization before buying:

https://www.prophecymarketinsights.com/market_insight/Insight/request-customization/1148

Explore More Insights:

Blog: http://www.prophecyjournals.com


Go here to read the rest:

Data Science Market: Unleashing Insights with AI and Machine Learning, Embracing a 31.0% CAGR and to Grow USD ... - GlobeNewswire

Why LLMs are not Good for Coding. Challenges of Using LLMs for Coding | by Andrea Valenzuela | Feb, 2024 – Towards Data Science


Over the past year, Large Language Models (LLMs) have demonstrated astonishing capabilities thanks to their natural language understanding. These advanced models have not only redefined the standards in Natural Language Processing but have also made their way into a wide range of applications and services.

There has been rapidly growing interest in using LLMs for coding, with some companies striving to extend natural language understanding into code understanding and generation. This effort has already highlighted several challenges that remain to be addressed when using LLMs for coding. Despite these obstacles, the trend has led to the development of AI code generator products.

Have you ever used ChatGPT for coding?

While it can be helpful in some instances, it often struggles to generate efficient and high-quality code. In this article, we will explore three reasons why LLMs are not inherently proficient at coding out of the box: the tokenizer, the complexity of context windows when applied to code, and the nature of the training itself.

Identifying the key areas that need improvement is crucial to transforming LLMs into more effective coding assistants!

The LLM tokenizer is responsible for converting the user's input text, written in natural language, into a numerical format that the LLM can understand.

The tokenizer processes raw text by breaking it down into tokens. Tokens can be whole words, parts of words (subwords), or individual characters, depending on the tokenizer's design and the requirements of the task.

Since LLMs operate on numerical data, each token is assigned an ID that depends on the LLM's vocabulary. Each ID is then associated with a vector in the LLM's high-dimensional latent space. To perform this last mapping, LLMs use learned embeddings, which are fine-tuned during training and capture complex relationships and nuances in the data.
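To make this pipeline concrete, here is a small sketch using the open-source tiktoken library (shown purely as an illustration; other LLMs ship their own tokenizers) to split a short Python snippet into tokens and map them to IDs:

import tiktoken

# Illustrative only: inspect how a piece of Python code is tokenized.
enc = tiktoken.get_encoding("cl100k_base")

code = "def add(a, b):\n    return a + b"
token_ids = enc.encode(code)                        # text -> integer token IDs
tokens = [enc.decode([tid]) for tid in token_ids]   # IDs -> individual token strings

print(token_ids)   # integers drawn from the tokenizer's vocabulary
print(tokens)      # a mix of word pieces, operators, and whitespace fragments
# Inside the model, each ID is then looked up in a learned embedding table,
# which maps it to a vector in the model's high-dimensional latent space.

Inspecting the printed tokens on real code tends to show indentation and operators broken into many small fragments, which hints at why tokenizers designed primarily for natural language can be an awkward fit for source code.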

If you are interested in playing around with different LLM tokenizers and see how they

Follow this link:

Why LLMs are not Good for Coding. Challenges of Using LLMs for Coding | by Andrea Valenzuela | Feb, 2024 - Towards Data Science