Category Archives: Machine Learning

Artificial Intelligence in Nutrition: Definition, Benefits, and Algorithms – ThomasNet News

Artificial intelligence (AI) is transforming the way we perceive and manage nutrition. There are applications for diet tracking, which offer personalized guidance and meal plans, solutions that pinpoint ingredients with specific health benefits, and tools for analyzing medical data to inform customized nutrition interventions.

These technologies serve to optimize medical outcomes, improve public health nutrition advice, encourage healthy eating, support chronic disease management, prevent health decline, aid disease prevention, and improve overall well-being.

The use of AI and machine learning (ML) in nutrition has benefits in several areas, including:

A one-size-fits-all approach to public health nutrition guidance fails to account for different dietary preferences, health goals, lifestyles, nutritional requirements, intolerances, allergies, and other health conditions.

A young and active vegan with a nut allergy, for example, has hugely different dietary needs to an elderly carnivore living with diabetes.

AI-powered technology can quickly analyze vast amounts of nutrition data and cross-reference it with an individual's measurements and requirements to produce personalized and optimal nutrition plans for all.

Clinical nutrition can be defined as a discipline that deals with the prevention, diagnosis, and management of nutritional and metabolic changes related to acute and chronic disease and conditions caused by a lack or excess of energy and nutrients.

AI has several applications in this field, from analyzing complex medical data and medical images to informing the decisions of medical practitioners and producing personalized nutrition plans for patients. Because AI solutions can identify previously overlooked associations between diet and medical outcomes, they can improve chronic disease management, optimize patient recovery, and improve patient wellbeing.

A tailored nutrition plan for a diabetic person, for example, will evaluate their gut microbiome and blood glucose levels, while a person with cardiovascular problems may require a diet that takes into consideration their cholesterol levels and blood pressure.

There has been a rise in AI-powered apps that assist users in tracking their nutritional intake while offering personalized guidance on making healthier choices.

The challenge with self-reported food diaries is that they depend on the memory and honesty of individuals, which often leads to under- and over-reporting and other inaccuracies. When certain snacks and meals are forgotten, portion sizes are miscalculated, or food choices that are perceived to be less healthy are deliberately omitted, it is more difficult for nutrition-focused apps and healthcare professionals to provide informed and effective nutritional advice.

With AI-powered computer vision technology, food tracking apps can identify food items, estimate portion sizes, and calculate nutritional values with increasing accuracy. Coupled with wearable devices, which track a user's activity, this technology is empowering people to make optimal nutritional choices. Some nutrition apps offer additional personalization.

For example, they might partner with health organizations to obtain their users' electronic health records or feature a nutrition chatbot to quickly respond to queries or perform a dietary assessment.
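To make the portion-to-nutrient step concrete, here is a minimal sketch of the kind of lookup a food-tracking app might perform once a vision model has recognized the items on a plate and estimated their weights. The food names, portion estimates, and nutrient figures are illustrative placeholders, not data from any particular app.

```python
# Minimal sketch: turn recognized food items and estimated portions into nutrient totals.
# The reference values below are illustrative placeholders, not real nutrition data.

NUTRIENTS_PER_100G = {
    "grilled chicken": {"kcal": 165, "protein_g": 31.0, "fat_g": 3.6, "carbs_g": 0.0},
    "white rice": {"kcal": 130, "protein_g": 2.7, "fat_g": 0.3, "carbs_g": 28.0},
    "broccoli": {"kcal": 34, "protein_g": 2.8, "fat_g": 0.4, "carbs_g": 7.0},
}

def summarize_meal(detections):
    """detections: list of (item_name, estimated_grams) pairs from a vision model."""
    totals = {"kcal": 0.0, "protein_g": 0.0, "fat_g": 0.0, "carbs_g": 0.0}
    for item, grams in detections:
        per_100g = NUTRIENTS_PER_100G.get(item)
        if per_100g is None:
            continue  # unknown items would be flagged for manual entry in a real app
        for key, value in per_100g.items():
            totals[key] += value * grams / 100.0
    return totals

print(summarize_meal([("grilled chicken", 150), ("white rice", 200), ("broccoli", 80)]))
```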

Nutraceuticals are products derived from food sources that promise health benefits beyond their basic nutritional value. Some examples include glucosamine, which is used in the treatment of arthritis; omega-3 fatty acids, which are used to treat inflammatory conditions; and many nutrient-rich foods, including soybeans, ginger, garlic, and citrus fruits.

Various nutraceutical companies have come under fire for marketing products as health solutions without meaningful scientific evidence to back their claims. But AI looks set to transform the industry's image by finding genuine health solutions fast.

The speed and accuracy with which an AI solution can identify bioactive compounds in foods and then predict the actions they will have in the body is of particular interest to nutraceutical companies. At present, it often takes several years to identify, develop, test, and launch a new ingredient.

In the future, ML solutions are likely to support the development of targeted nutraceutical solutions.

Across 48 countries, 238 million people are facing high levels of acute food insecurity. Meanwhile, one-third of the food produced for human consumption is lost or wasted, which equates to 1.3 billion tons every year.

AI is aiding the global effort to address food insecurity and reduce waste generation.

It can predict demand for certain crops to enable farmers to optimize their planting plans, detect crop and livestock disease at an early stage to contain damage and limit loss, and identify trends in consumer behavior to help retailers forecast demand and better manage their inventories. In addition, AI systems can track food from farm to plate, helping to ensure it is harvested, shipped, and consumed on time.

In the aftermath of a natural disaster or conflict, AI can quickly analyze data to inform humanitarian responses.

The challenges associated with AI in nutrition include:

To improve accuracy and efficiency, ML solutions are fed vast amounts of training data. In nutrition, such data is especially sensitive, including personal information and medical records.

Once a product, such as a food tracking app, is live, additional data is collected, as users are required to disclose personal information, including measurements, medications, food intake, and existing health conditions.

Rigorous safeguarding must be implemented to ensure that all personal data is safely collected and stored and that users understand how it is being used.

AI solutions are known to perpetuate societal stereotypes and biases. Amazon deployed a recruitment system that discriminated against women; the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, used to predict the likelihood of criminals reoffending, misclassified almost twice as many Black defendants as white defendants; and a Twitter algorithm was shown to favor white faces over Black faces when cropping images.

If nutrition-centered AI solutions are not carefully developed, these tools could reinforce outdated and oversimplified concepts of nutritional health and wellness or reflect biases in the healthcare system.

The use of diverse training data can prevent unfair or inaccurate outcomes, and these tools must be continuously monitored and updated to echo the latest healthcare guidance.

Meal scanning technology enables food-tracking app users to log their intake by simply snapping a photograph of their meals via their cell phone cameras. These tools are exceptionally fast and can be highly accurate, but there are some major limitations to consider.

For example, the technology will struggle to detect a basic ingredient swap in familiar recipes. When scanning a slice of cake, it would record items such as butter and eggs, even if those ingredients had been replaced with avocado and yogurt. Similarly, the app won't register when a creamy pasta sauce is replaced with a milk-based alternative.

Fortunately, these shortcomings can be addressed with some manual effort on the users part.

Nutrition is a complex and nuanced field, which will continue to benefit from the inputs and expertise of qualified healthcare professionals.

Take the management of chronic illnesses as an example. While an AI-powered app can produce highly customized dietary plans for individuals living with diabetes, celiac disease, or Crohn's disease, additional medical support and monitoring is likely to be required.

Complexities also arise when poor or unusual eating habits are linked to mental health conditions, such as eating disorders. In these scenarios, food-tracking apps are likely to cause more harm than good.

The market for personalized nutrition is fast-expanding, driven largely by rapid developments in AI.

Some exciting industry players include:

Nutrition labels are designed to prevent false advertising and promote food safety. But perhaps one of the most arduous tasks involved in launching new food products, medicines, and supplements is ensuring adherence to labeling regulations and standards, which are not only complex but can also vary enormously from country to country. Manual reviews in the food industry are repetitive, slow, and prone to human error, which, at best, results in delayed product launches and, at worst, poses a threat to human health.

Verifying the accuracy and compliance of labels is made easy with AI algorithms. Manufacturers simply upload their recipes and packaging design to an AI-powered tool, which analyzes the ingredients and identifies any issues. This drives operational efficiencies, reduces product waste, ensures customer safety, and enables more cost-effective international trade.
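As a rough illustration of the kind of check such a tool automates, the sketch below flags recipe ingredients that commonly require an allergen declaration but are missing from the label. The allergen list and recipe are simplified assumptions; real labeling rules vary by jurisdiction and are far more detailed.

```python
# Illustrative sketch of an automated label check: flag ingredients that commonly
# require an allergen declaration. The allergen list and recipe are placeholders.

DECLARABLE_ALLERGENS = {"milk", "egg", "peanut", "soy", "wheat", "fish", "shellfish", "tree nut"}

def check_label(ingredients, declared_allergens):
    """Return allergens present in the recipe but missing from the label's declaration."""
    present = {i.lower() for i in ingredients} & DECLARABLE_ALLERGENS
    missing = present - {a.lower() for a in declared_allergens}
    return sorted(missing)

issues = check_label(
    ingredients=["wheat", "sugar", "egg", "cocoa"],
    declared_allergens=["wheat"],
)
print("Missing allergen declarations:", issues)  # ['egg']
```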

An increasing number of companies are using smart labels to provide consumers with additional nutritional information. This enables people to make more informed decisions about the food and supplements they consume while ensuring food safety.

The growing demand for personalized nutrition has led to the rapid adoption of fitness-tracking apps like MyFitnessPal and MyPlate. Indeed, almost two-thirds of American adults are mobile health app users, according to a 2023 survey.

Amid widespread criticism that these apps are promoting unhealthy diets, extreme exercise regimes, and rapid weight loss, users must understand the technology's limitations.

Here are some important things to consider:

AI-powered apps are more likely to be beneficial when healthcare professionals, including doctors and dietitians, work closely with their patients and clients to recommend appropriate products and monitor usage.

The applications of AI in nutrition are far-reaching, enabling personalized diet plans, enhanced clinical nutrition, the development of targeted nutraceuticals, and more effective methods for addressing food insecurity.

As adoption increases, these solutions will require increasingly robust regulation, particularly in relation to data handling and security, algorithm bias, and consumer education.

With the revenue from health apps forecast to grow to $35.7 billion by 2030, healthcare professionals must be aware of the information that is being communicated to consumers so they can guide their patients toward truly health-promoting options.

As for the developers of AI-powered nutrition technology, inputs from experts in diverse fields, including healthcare, nutrition, technology, and ethics, will ensure solutions are safe and effective.


How AI Bias Is Impacting Healthcare – InformationWeek

Artificial intelligence has been used to spot bias in healthcare, such as a lack of darker skin tones in dermatologic educational materials, but AI has been the cause of bias itself in some cases.

When AI bias occurs in healthcare, the causes are a mix of technical errors as well as real human decisions, according to Dr. Marshall Chin, professor of healthcare ethics in the Department of Medicine at the University of Chicago. Chin co-chaired a recent government panel on AI bias.

"This is something that we have control over," Chin tells InformationWeek. "It's not just a technical thing that is inevitable."

In 2023, a class action lawsuit accused UnitedHealth of illegally using an AI algorithm to turn away seriously ill elderly patients from care under Medicare Advantage. The lawsuit blamed naviHealth's nH Predict AI model for inaccuracy. UnitedHealth told StatNews last year that the naviHealth care-support tool is not used to make determinations. "The lawsuit has no merit, and we will defend ourselves vigorously," the company stated.

Other cases of potential AI bias involved algorithms studying cases of heart failure, cardiac surgery, and vaginal birth after cesarean delivery (VBAC), in which an AI algorithm led Black patients to get more cesarean procedures than were necessary, according to Chin. The algorithm erroneously predicted that minorities were less likely to have success with a vaginal birth after a C-section compared with non-Hispanic white women, according to the US Department of Health and Human Services Office of Minority Health. "It inappropriately had more of the racial minority patients having severe cesarean sections as opposed to having the vaginal birth," Chin explains. "It basically led to an erroneous clinical decision that wasn't supported by the actual evidence base."


After years of research, the VBAC algorithm was changed to no longer consider race or ethnicity when predicting which patients could suffer complications from a VBAC procedure, HHS reported.

"When a dataset used to train an AI system lacks diversity, that can result in misdiagnoses, disparities in healthcare, and unequal insurance decisions on premiums or coverage," explains Tom Hittinger, healthcare applied AI leader at Deloitte Consulting.

"If a dataset used to train an AI system lacks diversity, the AI may develop biased algorithms that perform well for certain demographic groups while failing others," Hittinger says in an email interview. "This can exacerbate existing health inequities, leading to poor health outcomes for underrepresented groups."


Although AI tools can cause bias, they also bring more diversity to drug development. Companies such as BioPhy study patterns in patient populations to see how people respond to different types of drugs.

The challenge is to choose a patient population that is broad enough to offer a level of diversity while still demonstrating drug efficacy. However, designing an AI algorithm to predict patient populations may result in only a subset of the population, explains Dave Latshaw II, PhD, cofounder of BioPhy.

"If you feed an algorithm that's designed to predict optimal patient populations with only a subset of the population, then it's going to give you an output that only recommends a subset of the population," Latshaw tells InformationWeek. "You end up with bias in those predictions if you act on them when it comes to structuring your clinical trials and finding the right patients to participate."

Therefore, health IT leaders must diversify their training sets when teaching an AI platform to avoid blindness in the results, he adds.

"The dream scenario for somebody who's developing a drug is that they're able to test their drug in nearly any person of any background from any location with any genetic makeup that has a particular disease, and it will work just the same in everyone," Latshaw says. "That's the ideal state of the world."


IT leaders should involve a diverse group of stakeholders when implementing algorithms. That involves tech leaders, clinicians, patients, and the public, Chin says.

When validating AI models, IT leaders should include ethicists and data scientists along with clinicians, patients, and associates, which are nonclinical employees, staff members, and contractual workers at a healthcare organization, Hittinger says. When multiple teams roll out new models, that can increase the time required for experimentation and lead to a gradual rollout along with continuous monitoring, according to Hittinger.

"That process can take many months," he says.

Many organizations are using proprietary algorithms and lack an incentive to be transparent, according to Chin. He suggests that AI algorithms should carry labels, like those on a cereal box, explaining how the algorithm was developed, how patient demographic characteristics were distributed, and which analytical techniques were used.

"That would give people some sense of what this algorithm is, so this is not a total black box," Chin says.

In addition, organizations should audit and monitor AI systems for bias and performance disparities, Hittinger advises.

"Organizations must proactively search for biases within their algorithms and datasets, undertake the necessary corrections, and set up mechanisms to prevent new biases from arising unexpectedly," Hittinger says. "Upon detecting bias, it must be analyzed and then rectified through well-defined procedures aimed at addressing the issue and restoring public confidence."

Organizations such as Deloitte offer frameworks to provide guidance on how to maintain ethical use of AI.

"One core tenet is creating fair, unbiased models, and this means that AI needs to be developed and trained to adhere to equitable, uniform procedures and render impartial decisions," Hittinger says.

In addition, healthcare organizations can adopt automated monitoring tools to spot and fix model drift, according to Hittinger. He also suggests that healthcare organizations form partnerships with academic institutions and AI ethics firms.

Dr. Yair Lewis, chief medical officer at AI-powered primary-care platform Navina, recommends that organizations establish a fairness score metric for algorithms to ensure that patients are treated equally.

"The concept is to analyze the algorithm's performance across different demographics to identify any disparities," Lewis says in an email interview. "By quantifying bias in this manner, organizations can set benchmarks for fairness and monitor improvements over time."
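One simple way to put that idea into practice is to compute a model's accuracy separately for each demographic group and track the gap between the best- and worst-served groups. The sketch below is illustrative only and is not Navina's actual fairness score.

```python
# Illustrative fairness check: compare a model's accuracy across demographic groups
# and report the spread. Not Navina's actual metric; data and column names are made up.
import pandas as pd

def fairness_gap(df, group_col, label_col, pred_col):
    """Return per-group accuracy and the gap between best- and worst-served groups."""
    per_group = (
        df.assign(correct=df[label_col] == df[pred_col])
          .groupby(group_col)["correct"]
          .mean()
    )
    return per_group, per_group.max() - per_group.min()

df = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B"],
    "label": [1, 0, 1, 1, 0],
    "pred":  [1, 0, 0, 1, 1],
})
per_group, gap = fairness_gap(df, "group", "label", "pred")
print(per_group)    # accuracy by group
print("gap:", gap)  # a benchmark to monitor over time
```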


Study Shows UK Leading the AI and Machine Learning Charge for Enhanced Productivity – Automation.com

Global research report from Rockwell Automation highlights how artificial intelligence is being deployed to give the workforce superpowers and build resiliency in operations.

Rockwell Automation, Inc., the world's largest company dedicated to industrial automation and digital transformation, today announced the results of the 9th annual State of Smart Manufacturing Report, offering valuable insights into trends, challenges, and plans for global manufacturers. The study surveyed more than 1,500 respondents from 17 countries, including the United Kingdom (UK), France, Germany, Italy, and Spain.

Among many findings, three major trends emerge from this report: the AI revolution is here for manufacturing, technology is being deployed to give the workforce superpowers, and building resiliency in operations is growing in importance.

"The world has changed, and manufacturing has changed with it, but there is more to do," said Asa Arvidsson, regional vice president sales, north region, Rockwell Automation. "Talent continues to be elusive globally. As manufacturers continue to seek opportunities for profitable growth, they are finding that uncertainty in workforce availability is impacting quality and their ability to meet their customers needs and transform at pace.

"The clear message from this report is that manufacturers view technology as an advantage for improving quality, agility, and innovation, and for attracting the next generation of talent. Manufacturers expect to mitigate risk through technology tied to process and people, build resiliency, and drive future success."

Key UK findings include:

According to this year's survey, automation and optimization through artificial intelligence (AI) and machine learning (ML) are the main reasons for smart manufacturing investments within the UK. In the 2024 survey, 88% of responding companies in the UK said they have invested, or plan to invest in the next 12 months, in AI and ML. This is above the European figure of 84% and highest among all the European countries surveyed, matching the United States. When companies deploy the technology, there are four standout applications: quality control (38%), cybersecurity (37%), logistics (34%), and customer service (32%).

In any digital transformation, employees risk being the forgotten factor. According to the survey, manufacturers are investing in people and technology to advance. Rather than having AI replace roles, organizations are looking to use AI to tackle roles they are struggling to fill today. To address the manufacturing industry's labor shortage and skills gap, 38% of companies in the UK are spreading the net in their search for talent by leveraging remote work to access a wider talent pool for remote-capable jobs.

Quality remains a top priority for companies in the UK, which ranked it first at 42%. While Europe also listed cost and efficiency, along with quality, in their top three, the UK went in a different direction. Its top priorities behind quality were improving the company's financial position and improving decision-making with data, both at 35%.

"The findings of this year's State of Smart Manufacturing Report underscore a pivotal moment for the industry, as UK manufacturers lead the charge in integrating AI and machine learning technologies, Arvidsson concluded. This strategic embrace is not merely about technological adoption but signifies a broader transformation towards smarter, more resilient manufacturing ecosystems. By leveraging AI to enhance data-driven decision-making and operational efficiency, UK manufacturers are setting new benchmarks for innovation and competitiveness on the global stage.

The full findings of the report can be found here.

This report analyzed feedback from 1,567 respondents from 17 of the top manufacturing countries with roles from management up to the C-suite and was conducted in association with Sapio Research and Plex Systems. The survey covered discrete, process, and hybrid industries across a balanced distribution of company sizes, with revenues spanning $10 million to over $10 billion, providing a wide breadth of manufacturing business perspectives.

Rockwell Automation, Inc. is a global leader in industrial automation and digital transformation. We connect the imaginations of people with the potential of technology to expand what is humanly possible, making the world more productive and more sustainable. Headquartered in Milwaukee, Wisconsin, Rockwell Automation employs approximately 29,000 problem solvers dedicated to our customers in more than 100 countries.



Why the CEO of Quantcast is betting on "personalized AI" – Big Think

There's a new startup in San Francisco, and they're into marketing. Alone, that wouldn't be so interesting, but this company has made its business about algorithmic marketing. This is 2006, in the days when Facebook was still new and generative artificial intelligence was a conversation among science fiction fans. But this startup is sure that machine learning is the future.

The startup secured a major client, a local tourist board, and was eager to turn this into an early win. So the team put their heads together and burned the midnight oil. Why would you come to San Francisco? Who should they target, and how? The team came up with the old, familiar answers: Alcatraz, the Golden Gate Bridge, and wine country. But then they turned to their algorithm to measure the decision-making behaviors of potential San Francisco visitors.

Their systems buzzed and whirred, and they came back with dentistry. At first, the CEO thought it was a mistake. Dentistry? San Franciscans have nice teeth, no doubt, but they aren't famous for them. Then the team realized something: the annual Dentistry Association conference was about to be held in San Francisco. No one had known. No one had even thought of that. But the machine did. The team whipped up a presentation for their client, and everyone thought it was great.


That's the true story of one of Quantcast's earliest successes. Today, Quantcast is one of the biggest AI-driven advertising technology companies in the world. They have offices in ten countries around the world with hundreds of employees. They were one of the few to recognize the power of AI before the rest of the world did. Last month, Big Think talked with Quantcast founder and CEO Konrad Feldman to make sense of how AI has changed marketing and will continue to revolutionize business.

Almost everyone in the world uses some kind of bank. There are billions of transactions occurring every second, and each of those transactions represents a behavior. A boy is buying a chocolate bar from a vending machine. A pensioner is moving money between pension accounts. A young couple is putting down a deposit on a house. A businessman is giving a generous tip. Money moves, and it tells us a story. We can often tell more about people's behavior from their spending habits than from any self-recorded account.

The problem, then, is how to make sense of those stories. How can anyone hope to read the habits of billions of microtransactions? "I started my career in research in neural networks at UCL," Feldman tells Big Think. "I started a business using machine learning techniques to find patterns predominantly in the financial service industry. I ended up working in lots of different business domains where it was really hard for people to interpret all of the data and find the patterns of activity that they needed to address particular business problems. We ended up building a lot of systems in the banking sector to help the big banks detect things like money laundering and terrorist financing."

Having fine-tuned various algorithms in the financial sector, Feldman looked to move into the online space; Quantcast was focused on dot-com entrepreneurs just starting out. But now there was a problem. In the days before cloud computing and the internet of everything, the internet was a web of tiny pockets and gated communities. "Quantcast Measure is the product we launched in 2006 to help websites of any size," explains Feldman. "It was a classic sort of disruptive innovation initially. These algorithms have the ability to provide much broader visibility to better understand audiences. Over time, larger and larger sites realized the richness of audience information they could derive, and participate in machine learning. That was the first use of Quantcast Measure."

Marketers need data. But everyone has their own data sets, often very small. Quantcast offered the opportunity to combine them and analyze them. "We had to solve some really thorny technical issues," says Feldman, "both in terms of the sort of data processing architectures we needed and because the data volumes involved are just absolutely immense."

It's often assumed that machine learning and AI are some kind of data-processing panacea. A computer can do billions of calculations every second, so surely it can make light work of all that data? But an algorithm is only as good as the humans who engineer it. As Feldman puts it, "A lot of people, when they think about AI or machine learning, think of it as some sort of holistic algorithm that does everything. In fact, even though we have algorithms now in large language models (LLMs) that can actually do a number of different things, fundamentally, most specific problems require specific algorithms and specific tuning and application of those algorithms."

The explosion of GPTs is a good example of this. OpenAI has produced one of the most powerful AIs in the history of civilization, but after a few months, its usage slipped, and people started to see it as a waste of time. Then, at the end of 2023, OpenAI introduced custom-made GPTs. Essentially, anyone could fine-tune and create an algorithm of their own using OpenAI's LLMs. Users started to craft prompts in an algorithmic style: "Imagine you're a movie critic," "In the style of a romantic poet," "You are Albert Einstein," "Explain like I'm five years old," and so on. This is an example of the fine-tuning Feldman describes: turning general-purpose machine learning into something practical, because machine learning varies from task to task.
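For readers who want to see what that persona-style prompting looks like in code, here is a minimal sketch using the OpenAI Python SDK. The model name and persona are placeholders, and the exact SDK surface depends on the installed version.

```python
# Sketch of persona-style prompting with the OpenAI Python SDK.
# Model name and persona are placeholders; SDK details vary by version.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model id
    messages=[
        {"role": "system", "content": "You are a film critic. Review everything in three crisp sentences."},
        {"role": "user", "content": "Review a documentary about zebra finch songs."},
    ],
)
print(response.choices[0].message.content)
```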


"The machine learning we use to measure audiences is different from the machine learning we're using to build predictive audience models for advertising delivery," explains Feldman. "It's different from the machine learning that's been used to ensure brand safety or to price individual impressions. We've developed different types of machine learning approaches to solve some of these fundamental problems. Quantcast's marketing algorithms will capture information in real time [and] then make adjustments as they go along. They are interactive tools that will optimize for some specified goal as they're working."

The clear implication here is that the next stage of AI will be more flexible, fine-grained, and fine-tuned. For AI to become really useful, and for the conversation to become really exciting, AI has to become personalized AI.

"We believe it will be possible to build machine learning models that could actually infer audience characteristics," says Feldman. "Humans are really good [at] pattern matching; that's how we've evolved. But machines also have flexibility because they can deal with so many dimensions of data. If you're going to adjust and tailor the content, if you're going to modify content for each individual recipient, well then, people can't do that. You have to use a computer to do it."


SADA Achieves Over 300% Increase in Generative AI and Machine Learning Projects in 2023 – GlobeNewswire

LOS ANGELES, March 26, 2024 (GLOBE NEWSWIRE) -- SADA, An Insight company, a leading business and technology consultancy and award-winning Google Cloud Premier Partner across several product and engagement models, announces continued momentum powered by rapid scale in its Generative AI (GenAI) and machine learning operations as customers transform with Google Cloud's Gemini and Vertex AI platform, propelling a significant increase in customer adoption of these GenAI technologies.

"Our continued growth is a testament to SADA's commitment to delivering business outcomes and driving value to customers by providing industry-leading solutions and services," said Tony Safoian, CEO of SADA. "We strive to be true business partners to our customers as they modernize, innovate, and grow their business with us and Google Cloud."

SADA Drives Customer Value

Generative AI & Machine Learning Scale

Solution & Services Innovation

Google Cloud Next 2024

SADA's team of experts will participate in a series of speaking sessions at Google Cloud Next in Las Vegas on April 9-11 alongside key customers and partners, sharing their cloud innovations and success.

SADA's Sessions include:

Additionally, SADA's ongoing cross-country Cloud Transformation Tour in North America brings Google Cloud experts to discuss machine learning and GenAI for business insights.

Expanding Partner Ecosystem

SADA's partnerships continued with a commitment to the ISV partner ecosystem, which allows SADA to deliver deep expertise and offer complementary solutions for Google Cloud and add-on solutions. Working together, SADA's partners help customers achieve maximum impact on their business goals.

About SADA

SADA, An Insight company, is a market leader in professional services and an award-winning solutions provider of Google Cloud. Since 2000, SADA has been committed to helping customers in healthcare, media, entertainment, retail, manufacturing, and the public sector solve their most complex challenges so they can focus on achieving their boldest ambitions. With offices in North America, India, and Armenia providing sales and customer support teams, SADA is positioned to meet customers where they are in their digital transformation journey. SADA is a 6x Google Cloud Partner of the Year award winner with 10 Google Cloud Specializations and has been recognized as a Niche Player in the 2023 Gartner Magic Quadrant for Public Cloud IT Transformation Services. SADA is a 15x honoree of the Inc. 5000 list of America's Fastest-Growing Private Companies and has been named to Inc. Magazine's Best Workplaces four years in a row. Learn more at http://www.sada.com.


Netflix Uses Metaflow to Manage Hundreds of AI/ML Applications at Scale – InfoQ.com

Netflix recently published how its Machine Learning Platform (MLP) team provides an ecosystem around Metaflow, an open-source machine learning infrastructure framework. By creating various integrations for Metaflow, Netflix is able to support hundreds of Metaflow projects maintained by multiple engineering teams.

Metaflow's integrations with Netflix's production systems enable projects to move from prototype to production without incurring unsustainable operational overhead. The engineering team explains their key to success:

Given the very diverse set of ML and AI use cases we support [...], we don't expect all projects to follow the same path from prototype to production. Instead, we provide a robust foundational layer with integrations to our company-wide data, compute, and orchestration platform, as well as various paths to deploy applications to production smoothly. On top of this, teams have built their own domain-specific libraries to support their specific use cases and needs.

One integration example provided is the "Fast Data" library for Metaflow. Netflix hosts its main data lake on S3 as Apache Iceberg tables and uses Apache Spark for ETL. The Fast Data library enables fast, scalable, and robust access to the Netflix data warehouse by leveraging high-performance components from the Python data ecosystem. This library allows Netflix to process terabytes of data collectively and encode complex relationships between titles, actors, and other film attributes, supporting the company's broad business applications.

The Fast Data Library for Metaflow (Source)

Netflix's production workflow orchestrator, Maestro, plays a critical role in managing Metaflow projects in production. It supports scalability and high availability and enables seamless integration of Metaflow flows with other systems through event-triggering. Using this integration, Netflix engineers can support content decision-making and answer "what content Netflix should bring to the service".

Finally, for deployments that require an API and real-time evaluation, Netflix provides an integrated model hosting service, Metaflow Hosting. Metaflow Hosting "provides an easy to use interface on top of Netflix's existing microservice infrastructure, allowing data scientists to quickly move their work from experimentation to a production grade web service that can be consumed over a HTTP REST API with minimal overhead."
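InfoQ does not describe Metaflow Hosting's request format, but conceptually a consuming service would call the hosted model over HTTP. The sketch below is purely hypothetical: the endpoint URL, payload fields, and response shape are invented for illustration and are not Metaflow Hosting's actual interface.

```python
# Hypothetical sketch of consuming a hosted model over an HTTP REST API.
# Endpoint, payload, and response fields are illustrative assumptions only.
import requests

payload = {"title_id": "tt0000001", "asset_uri": "s3://bucket/frame.jpg"}  # made-up fields
resp = requests.post(
    "https://metaflow-hosting.example.internal/v1/predict",  # placeholder endpoint
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # e.g. computed media asset features
```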

Using this integration, Netflix hosts and scales a model to compute various media asset features. Once the features are computed, a consuming service can store them for future use. An earlier talk provides an overview and further details about this service.

Hosting and Consuming the Media Feature Computation Model (Source)

Netflix implemented the integrations using Metaflow's extension mechanism, "which is publicly available but subject to change and hence not part of Metaflow's stable API yet." They invite engineers to contact them on the Metaflow community Slack to discuss building additional extensions.


10 Areas Impacted By AI – SME.org

Few technologies have transformed the manufacturing industry like ERP software. The next transformative technology, artificial intelligence (AI) and machine learning (ML), is taking manufacturing to a new level with unprecedented predictive data tracking and analysis capabilities.

AI can be programmed to learn from large amounts of data to make deeper and more accurate predictions regarding customers, buying habits, inventory levels, markets, material purchasing and more. In turn, AI programs machines to learn from experience so they can perform tasks that have always been conducted by humans.

AI software uses progressive learning algorithms to automate repetitive learning and achieve incredible accuracy so the data can do the programming. It can even assist manufacturers with decision-making when the relevant data, parameters, and variables exceed human understanding.

Integrating AI and ERP software can do what ERP has been doing all along: simplifying manufacturing, improving operational efficiency, and increasing profitability while growing the company. Here are 10 ways AI can make your manufacturing better.

AI-integrated ERP software helps optimize inventory management by predicting demand, identifying slow-moving products, and automating order fulfillment. AI-based inventory planning provides increased visibility of inventory KPIs, improved product, channel, and location forecasting, and automatic SKU classification to meet material demands. A recent McKinsey study reported that companies using AI to optimize inventory can reduce inventory levels by 20% to 50%.
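As a concrete, if simplified, example of the slow-moving-product signal mentioned above, the sketch below flags SKUs whose stock on hand far exceeds their recent sales velocity. The data, threshold, and column names are placeholders rather than output from any ERP product.

```python
# Illustrative inventory signal: flag SKUs with far more stock than recent sales justify.
# Data and the 180-day cutoff are placeholders, not values from any ERP system.
import pandas as pd

inventory = pd.DataFrame({
    "sku": ["A100", "B200", "C300"],
    "on_hand": [500, 80, 1200],
    "units_sold_90d": [450, 75, 60],
})

inventory["days_of_supply"] = inventory["on_hand"] / (inventory["units_sold_90d"] / 90)
slow_movers = inventory[inventory["days_of_supply"] > 180]  # arbitrary cutoff
print(slow_movers[["sku", "days_of_supply"]])
```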

AI-based inspection systems can identify production defects and anomalies in real time, reducing the risk of product recalls and improving overall quality. This includes using machine vision to identify defects on the assembly line that may not be visible to the human eye. AI-powered quality control software can create rules to determine the features that define acceptable quality levels.

AI can lead to better-informed pricing decisions by analyzing market trends, competitor pricing and customer behavior. By factoring these into pricing strategies you can predict how different prices will impact sales, and even combine experience and data to increase prices without damaging sales.

AI helps optimize production schedules, reduce lead times and avoid stockouts by predicting product demand based on historical data and trends. In addition to predicting consumer demand for your SKUs, AI can use real-time data to create forecasts based on current supply chain conditions.

AI can predict the types and quantities of products that will be in demand with remarkable accuracy, enabling you to reduce production lead times, lower costs, and increase customer satisfaction. AI algorithms can predict the expected quantities for products in demand, thereby reducing strains on specific links of your supply chain.

AI helps reduce repair costs and extends the life of production machines by predicting equipment failure and scheduling preventative maintenance before breakdowns occur. In addition, AI can improve worker safety by minimizing human errors and accidents while increasing efficiency and productivity.

AI-powered ERP software can assist in reducing labor costs by predicting employee productivity, identifying training needs, and optimizing scheduling. It can also lower insurance rates and medical costs by reducing workplace injuries via streamlining or automating risky processes.

AI-powered ERP software provides key performance indicators on production rates, inventory levels, and quality metrics to facilitate data-driven decisions and identify areas for improvement. AI speeds up the analytics process by preparing, analyzing and assessing data as soon as it is available.

AI can help minimize labor shortages through robotic automation, additive manufacturing, and machine vision. AI applications enable robot arms to safely handle objects on the production line and can even train robots to perform various types of assembly line work done by humans.

AIs ability to automate production processes improves efficiency by reducing the need for human intervention. Machine automation can perform a wide range of production processes from repetitive tasks such as data entry and order processing to complex tasks like spotting anomalies on the production line.

The manufacturing industry has begun to gravitate toward big data in large part due to its ability to produce predictive forecasts regarding sales, pricing, material availability, and other key metrics. Using advanced technology, including AI, big data pieces together very large and diverse datasets that are used in machine learning, predictive modeling, and other advanced analytics to solve business problems and make informed decisions.

The complexity of AI algorithms can be daunting. Yet, their ability to look to the past, present and future will help modernize manufacturing.


AI reveals the complexity of a simple birdsong – The Washington Post

To a human ear the songs of all male zebra finches sound more or less the same. But faced with a chorus of this simple song, female finches can pick the performer who sings most beautifully.

Zebra finches are found in Australia, and they usually mate monogamously for life, making this a high-stakes decision for the female finches. The zebra finch is among the roughly one-third of songbird species that learn a single song from their fathers early in life and sing it over and over, raising the question of how female songbirds distinguish between males to choose a mate.

Listen to the song of a male zebra finch:

Scientists believe that most male songbirds evolved to sing a variety of songs to demonstrate their fitness. Under that theory, the fittest songbirds will have more time and energy to work on their vocal stylings and attract females with their varied vocal repertoire.

New research using machine learning shows finches may be sticking to one tune, but how they sing it makes a big difference. Published Wednesday in the journal Nature, the study reveals the complexity of a single zebra finch song and what female songbirds might be hearing in their prospective mates' seemingly simple songs.

When researchers analyze birdsongs, they're often not listening to them but rather looking at spectrograms, which are visualizations of audio files.

"So I put together that, 'Hey, what humans are doing is looking at images of these audio files. Can we use machine learning and deep learning to do this?'" said Danyal Alam, the lead author on the new study and a postdoctoral researcher at the University of California at San Francisco.

Alam, along with Todd Roberts, an associate professor at UT Southwestern Medical Center and another colleague, used machine learning to analyze hundreds of thousands of zebra finch songs to figure out how they were different from each other and which variations were more attractive to female songbirds.

The researchers found one metric that seemed to get females' attention: the spread of syllables in the song. The females seemed to prefer longer paths between syllables. This isn't something humans can easily pick up by listening to the songs or looking at the spectrograms, but based on how these algorithms mapped the syllables, the researchers were able to see them in a new way.
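The "path between syllables" idea can be illustrated with a few lines of code: given some low-dimensional representation of each syllable in a song, sum the distances between consecutive syllables. This is a sketch of the concept only, not the study's actual analysis pipeline, and the embeddings below are made up.

```python
# Conceptual sketch of the "path between syllables" metric: total distance traveled
# through a feature space as a song moves from one syllable to the next.
# The embeddings are invented for illustration; not the study's actual analysis.
import numpy as np

def song_path_length(syllable_embeddings):
    """syllable_embeddings: array of shape (n_syllables, n_dims), in singing order."""
    steps = np.diff(syllable_embeddings, axis=0)
    return float(np.linalg.norm(steps, axis=1).sum())

compact_song = np.array([[0.0, 0.0], [0.2, 0.1], [0.3, 0.0], [0.1, 0.2]])
spread_song = np.array([[0.0, 0.0], [2.0, 1.0], [-1.5, 2.5], [3.0, -2.0]])

print(song_path_length(compact_song))  # short path: syllables acoustically close together
print(song_path_length(spread_song))   # long path: the pattern females appeared to prefer
```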

To check their hypothesis, the researchers brought the findings back to the birds.

They generated synthetic bird songs to see if females preferred those with a longer path, and they did, suggesting the birds' intended audience picked up on the same pattern as the researchers' computers.

Listen to see if you can tell the difference between a synthetic finch song that doesn't spread out its syllables and one that does:

Alam and his colleagues also found that baby birds had a harder time learning the long-distance song patterns than the shorter ones, which suggests fitter birds would be more able to learn them, the researchers said.

The study's finding is consistent with what's been shown in other species: The more complexity or difficulty in a song, the more appealing it is to female birds.

"A lot of signals in animal communication are meant to be an honest signal of some underlying quality," said Kate Snyder, a researcher at Vanderbilt who wasn't involved in the new paper.

For example, she said, if you look at a peacock, you see the male birds with the longer and more beautiful tails are better at attracting mates. Maintaining a tail like that is expensive for the bird, so it must be good at finding food and surviving in its environment to have the time to devote to keeping its tail nice.

"Learning takes a lot of time, energy, brain space," Snyder said. Only the fittest male birds will have the time and energy to devote to learning to sing.

Among finches, that work has just been harder to spot until now.

"We used to think of this single song repertoire as perhaps a simple behavior," said Roberts. "But what we see is that it's perhaps much more complicated than we previously appreciated."


Unlock the potential of generative AI in industrial operations | Amazon Web Services – AWS Blog

In the evolving landscape of manufacturing, the transformative power of AI and machine learning (ML) is evident, driving a digital revolution that streamlines operations and boosts productivity. However, this progress introduces unique challenges for enterprises navigating data-driven solutions. Industrial facilities grapple with vast volumes of unstructured data, sourced from sensors, telemetry systems, and equipment dispersed across production lines. Real-time data is critical for applications like predictive maintenance and anomaly detection, yet developing custom ML models for each industrial use case with such time series data demands considerable time and resources from data scientists, hindering widespread adoption.

Generative AI using large pre-trained foundation models (FMs) such as Claude can rapidly generate a variety of content from conversational text to computer code based on simple text prompts, known as zero-shot prompting. This eliminates the need for data scientists to manually develop specific ML models for each use case, and therefore democratizes AI access, benefitting even small manufacturers. Workers gain productivity through AI-generated insights, engineers can proactively detect anomalies, supply chain managers optimize inventories, and plant leadership makes informed, data-driven decisions.

Nevertheless, standalone FMs face limitations in handling complex industrial data due to context size constraints (typically less than 200,000 tokens). To address this, you can use the FM's ability to generate code in response to natural language queries (NLQs). Agents like PandasAI come into play, running this code on high-resolution time series data and handling errors using FMs. PandasAI is a Python library that adds generative AI capabilities to pandas, the popular data analysis and manipulation tool.
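For readers unfamiliar with PandasAI, the pattern looks roughly like the sketch below: a dataframe is wrapped so that natural-language questions are translated into pandas code and executed. Import paths and the LLM wrapper differ between PandasAI releases, so treat the details as illustrative rather than the post's exact setup.

```python
# Sketch of the PandasAI pattern: natural-language queries are turned into pandas
# code and run against the dataframe. Exact imports vary between PandasAI releases.
import pandas as pd
from pandasai import SmartDataframe
from pandasai.llm import OpenAI  # any supported FM backend could be swapped in here

df = pd.DataFrame({
    "sensor": ["s1", "s1", "s2"],
    "temperature": [71.2, 95.4, 68.9],
    "status": ["Healthy", "Alarm", "Healthy"],
})

sdf = SmartDataframe(df, config={"llm": OpenAI(api_token="YOUR_KEY")})
print(sdf.chat("How many sensors are currently in Alarm status?"))
```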

However, complex NLQs, such as time series data processing, multi-level aggregation, and pivot or joint table operations, may yield inconsistent Python script accuracy with a zero-shot prompt.

To enhance code generation accuracy, we propose dynamically constructing multi-shot prompts for NLQs. Multi-shot prompting provides additional context to the FM by showing it several examples of desired outputs for similar prompts, boosting accuracy and consistency. In this post, multi-shot prompts are retrieved from an embedding store containing successful Python code run on a similar data type (for example, high-resolution time series data from Internet of Things devices). The dynamically constructed multi-shot prompt provides the most relevant context to the FM and boosts the FM's capability in advanced math calculation, time series data processing, and data acronym understanding. This improved response facilitates enterprise workers and operational teams in engaging with data, deriving insights without requiring extensive data science skills.

Beyond time series data analysis, FMs prove valuable in various industrial applications. Maintenance teams assess asset health, capture images for Amazon Rekognition-based functionality summaries, and perform anomaly root cause analysis using intelligent searches with Retrieval Augmented Generation (RAG). To simplify these workflows, AWS has introduced Amazon Bedrock, enabling you to build and scale generative AI applications with state-of-the-art pre-trained FMs like Claude v2. With Knowledge Bases for Amazon Bedrock, you can simplify the RAG development process to provide more accurate anomaly root cause analysis for plant workers. Our post showcases an intelligent assistant for industrial use cases powered by Amazon Bedrock, addressing NLQ challenges, generating part summaries from images, and enhancing FM responses for equipment diagnosis through the RAG approach.

The following diagram illustrates the solution architecture.

The workflow includes three distinct use cases:

The workflow for NLQ with time series data consists of the following steps:

Our summary generation use case consists of the following steps:

Our root cause diagnosis use case consists of the following steps:

To follow along with this post, you should meet the following prerequisites:

To set up your solution resources, complete the following steps:

Next, you create the knowledge base for the documents in Amazon S3.

The next step is to deploy the app with the required library packages on either your PC or an EC2 instance (Ubuntu Server 22.04 LTS).

Provide the OpenSearch Service collection ARN you created in Amazon Bedrock from the previous step.

After you complete the end-to-end deployment, you can access the app via localhost on port 8501, which opens a browser window with the web interface. If you deployed the app on an EC2 instance, allow port 8501 access via the security group inbound rule. You can navigate to different tabs for various use cases.

To explore the first use case, choose Data Insight and Chart. Begin by uploading your time series data. If you don't have an existing time series data file to use, you can upload the following sample CSV file with anonymous Amazon Monitron project data. If you already have an Amazon Monitron project, refer to Generate actionable insights for predictive maintenance management with Amazon Monitron and Amazon Kinesis to stream your Amazon Monitron data to Amazon S3 and use your data with this application.

When the upload is complete, enter a query to initiate a conversation with your data. The left sidebar offers a range of example questions for your convenience. The following screenshots illustrate the response and Python code generated by the FM when inputting a question such as "Tell me the unique number of sensors for each site shown as Warning or Alarm respectively?" (a hard-level question) or "For sensors shown temperature signal as NOT Healthy, can you calculate the time duration in days for each sensor shown abnormal vibration signal?" (a challenge-level question). The app will answer your question and will also show the Python script of data analysis it performed to generate such results.

If you're satisfied with the answer, you can mark it as Helpful, saving the NLQ and Claude-generated Python code to an OpenSearch Service index.

To explore the second use case, choose the Captured Image Summary tab in the Streamlit app. You can upload an image of your industrial asset, and the application will generate a 200-word summary of its technical specification and operation condition based on the image information. The following screenshot shows the summary generated from an image of a belt motor drive. To test this feature, if you lack a suitable image, you can use the following example image.

Hydraulic elevator motor label by Clarence Risher is licensed under CC BY-SA 2.0.

To explore the third use case, choose the Root cause diagnosis tab. Input a query related to your broken industrial asset, such as, "My actuator travels slow, what might be the issue?" As depicted in the following screenshot, the application delivers a response with the source document excerpt used to generate the answer.

In this section, we discuss the design details of the application workflow for the first use case.

The user's natural language query comes with different difficulty levels: easy, hard, and challenge.

Straightforward questions may include the following requests:

For these questions, PandasAI can directly interact with the FM to generate Python scripts for processing.

Hard questions require basic aggregation operation or time series analysis, such as the following:

For hard questions, a prompt template with detailed step-by-step instructions assists FMs in providing accurate responses.

Challenge-level questions need advanced math calculation and time series processing, such as the following:

For these questions, you can use multi-shots in a custom prompt to enhance response accuracy. Such multi-shots show examples of advanced time series processing and math calculation, and will provide context for the FM to perform relevant inference on similar analysis. Dynamically inserting the most relevant examples from an NLQ question bank into the prompt can be a challenge. One solution is to construct embeddings from existing NLQ question samples and save these embeddings in a vector store like OpenSearch Service. When a question is sent to the Streamlit app, the question will be vectorized by BedrockEmbeddings. The top N most-relevant embeddings to that question are retrieved using opensearch_vector_search.similarity_search and inserted into the prompt template as a multi-shot prompt.
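A sketch of that retrieval step, using LangChain's Bedrock embeddings and OpenSearch vector store wrappers, might look like the following. Import paths and constructor arguments vary across LangChain versions, and the index name and endpoint are placeholders rather than values from the post.

```python
# Sketch of the multi-shot retrieval step: embed the incoming question, pull the most
# similar audited NLQ/code examples from OpenSearch, and prepend them to the prompt.
# Import paths vary by LangChain version; index name and endpoint are placeholders.
from langchain_community.embeddings import BedrockEmbeddings
from langchain_community.vectorstores import OpenSearchVectorSearch

embeddings = BedrockEmbeddings(region_name="us-east-1")
vector_store = OpenSearchVectorSearch(
    index_name="nlq-examples",                            # placeholder index of audited examples
    embedding_function=embeddings,
    opensearch_url="https://my-collection.example.com",   # placeholder endpoint
)

question = "Calculate the abnormal-vibration duration in days for each sensor"
examples = vector_store.similarity_search(question, k=3)  # top N most-relevant examples

multi_shot = "\n\n".join(doc.page_content for doc in examples)
prompt = f"{multi_shot}\n\nNow answer the new question:\n{question}"
```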

The following diagram illustrates this workflow.

The embedding layer is constructed using three key tools:

At the outset of app development, we began with only 23 saved examples in the OpenSearch Service index as embeddings. As the app goes live in the field, users start inputting their NLQs via the app. However, due to the limited examples available in the template, some NLQs may not find similar prompts. To continuously enrich these embeddings and offer more relevant user prompts, you can use the Streamlit app for gathering human-audited examples.

Within the app, the following function serves this purpose. When end-users find the output helpful and select Helpful, the application follows these steps:

In the event that a user selects Not Helpful, no action is taken. This iterative process makes sure that the system continually improves by incorporating user-contributed examples.

By incorporating human auditing, the quantity of examples in OpenSearch Service available for prompt embedding grows as the app gains usage. This expanded embedding dataset results in enhanced search accuracy over time. Specifically, for challenging NLQs, the FM's response accuracy reaches approximately 90% when dynamically inserting similar examples to construct custom prompts for each NLQ question. This represents a notable 28% increase compared to scenarios without multi-shot prompts.

On the Streamlit app's Captured Image Summary tab, you can directly upload an image file. This initiates the Amazon Rekognition API (detect_text API), extracting text from the image label detailing machine specifications. Subsequently, the extracted text data is sent to the Amazon Bedrock Claude model as the context of a prompt, resulting in a 200-word summary.
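A condensed sketch of that flow with boto3 might look like the following; the model ID, prompt wording, and file name are assumptions rather than the post's exact code.

```python
# Sketch: extract label text with Amazon Rekognition, then ask a Claude model on
# Amazon Bedrock for a summary. Model ID, prompt, and file name are assumptions.
import json
import boto3

rekognition = boto3.client("rekognition")
bedrock = boto3.client("bedrock-runtime")

with open("asset_label.jpg", "rb") as f:
    image_bytes = f.read()

detections = rekognition.detect_text(Image={"Bytes": image_bytes})
label_text = " ".join(
    d["DetectedText"] for d in detections["TextDetections"] if d["Type"] == "LINE"
)

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 400,
    "messages": [{
        "role": "user",
        "content": f"Summarize this equipment label in about 200 words:\n{label_text}",
    }],
})
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model ID
    body=body,
)
print(json.loads(response["body"].read())["content"][0]["text"])
```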

From a user experience perspective, enabling streaming functionality for a text summarization task is paramount, allowing users to read the FM-generated summary in smaller chunks rather than waiting for the entire output. Amazon Bedrock facilitates streaming via its API (bedrock_runtime.invoke_model_with_response_stream).
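A minimal streaming sketch using that API is shown below; the payload follows the Anthropic Messages format on Bedrock, and the event parsing would need adjusting for other model families.

```python
# Minimal streaming sketch with invoke_model_with_response_stream, so the summary
# renders in chunks as it is generated. Model ID is an assumed placeholder.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 400,
    "messages": [{"role": "user", "content": "Summarize the extracted label text..."}],
})

response = bedrock.invoke_model_with_response_stream(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0", body=body
)
for event in response["body"]:
    chunk = json.loads(event["chunk"]["bytes"])
    if chunk.get("type") == "content_block_delta":
        print(chunk["delta"].get("text", ""), end="", flush=True)
```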

In this scenario, we've developed a chatbot application focused on root cause analysis, employing the RAG approach. This chatbot draws from multiple documents related to bearing equipment to facilitate root cause analysis. This RAG-based root cause analysis chatbot uses knowledge bases for generating vector text representations, or embeddings. Knowledge Bases for Amazon Bedrock is a fully managed capability that helps you implement the entire RAG workflow, from ingestion to retrieval and prompt augmentation, without having to build custom integrations to data sources or manage data flows and RAG implementation details.

When you're satisfied with the knowledge base response from Amazon Bedrock, you can integrate the root cause response from the knowledge base to the Streamlit app.

To save costs, delete the resources you created in this post:

Generative AI applications have already transformed various business processes, enhancing worker productivity and skill sets. However, the limitations of FMs in handling time series data analysis have hindered their full utilization by industrial clients. This constraint has impeded the application of generative AI to the predominant data type processed daily.

In this post, we introduced a generative AI application designed to alleviate this challenge for industrial users. This application uses an open source agent, PandasAI, to strengthen an FM's time series analysis capability. Rather than sending time series data directly to FMs, the app employs PandasAI to generate Python code for the analysis of unstructured time series data. To enhance the accuracy of Python code generation, a custom prompt generation workflow with human auditing has been implemented.

Empowered with insights into their asset health, industrial workers can fully harness the potential of generative AI across various use cases, including root cause diagnosis and part replacement planning. With Knowledge Bases for Amazon Bedrock, the RAG solution is straightforward for developers to build and manage.

The trajectory of enterprise data management and operations is unmistakably moving towards deeper integration with generative AI for comprehensive insights into operational health. This shift, spearheaded by Amazon Bedrock, is significantly amplified by the growing robustness and potential of LLMs like Amazon Bedrock Claude 3 to further elevate solutions. To learn more, consult the Amazon Bedrock documentation and get hands-on with the Amazon Bedrock workshop.

Julia Hu is a Sr. AI/ML Solutions Architect at Amazon Web Services. She specializes in generative AI, applied data science, and IoT architecture. She is currently part of the Amazon Q team and an active member and mentor in the Machine Learning Technical Field Community. She works with customers, ranging from start-ups to enterprises, to develop AWSome generative AI solutions. She is particularly passionate about leveraging large language models for advanced data analytics and exploring practical applications that address real-world challenges.

Sudeesh Sasidharan is a Senior Solutions Architect at AWS, within the Energy team. Sudeesh loves experimenting with new technologies and building innovative solutions that solve complex business challenges. When he is not designing solutions or tinkering with the latest technologies, you can find him on the tennis court working on his backhand.

Neil Desai is a technology executive with over 20 years of experience in artificial intelligence (AI), data science, software engineering, and enterprise architecture. At AWS, he leads a team of worldwide AI services specialist solutions architects who help customers build innovative generative AI-powered solutions, share best practices, and drive the product roadmap. In his previous roles at Vestas, Honeywell, and Quest Diagnostics, Neil held leadership positions developing and launching innovative products and services that helped companies improve their operations, reduce costs, and increase revenue. He is passionate about using technology to solve real-world problems and is a strategic thinker with a proven track record of success.

Read the rest here:
Unlock the potential of generative AI in industrial operations | Amazon Web Services - AWS Blog

Using AI to expand global access to reliable flood forecasts – Google Research

Posted by Yossi Matias, VP Engineering & Research, and Grey Nearing, Research Scientist, Google Research

Floods are the most common natural disaster, and are responsible for roughly $50 billion in annual financial damages worldwide. The rate of flood-related disasters has more than doubled since the year 2000, partly due to climate change. Nearly 1.5 billion people, making up 19% of the world's population, are exposed to substantial risks from severe flood events. Upgrading early warning systems to make accurate and timely information accessible to these populations can save thousands of lives per year.

Driven by the potential impact of reliable flood forecasting on people's lives globally, we started our flood forecasting effort in 2017. Through this multi-year journey, we advanced research hand-in-hand with building a real-time operational flood forecasting system that provides alerts on Google Search, in Maps, through Android notifications, and through the Flood Hub. However, in order to scale globally, especially in places where accurate local data is not available, more research advances were required.

In Global prediction of extreme floods in ungauged watersheds, published in Nature, we demonstrate how machine learning (ML) technologies can significantly improve global-scale flood forecasting relative to the current state of the art for countries where flood-related data is scarce. With these AI-based technologies, we extended the reliability of currently available global nowcasts, on average, from zero to five days, and improved forecasts across regions in Africa and Asia to be similar to what is currently available in Europe. The evaluation of the models was conducted in collaboration with the European Centre for Medium-Range Weather Forecasts (ECMWF).

These technologies also enable Flood Hub to provide real-time river forecasts up to seven days in advance, covering river reaches across over 80 countries. This information can be used by people, communities, governments and international organizations to take anticipatory action to help protect vulnerable populations.

The ML models that power the Flood Hub tool are the product of many years of research, conducted in collaboration with several partners, including academics, governments, international organizations, and NGOs.

In 2018, we launched a pilot early warning system in the Ganges-Brahmaputra river basin in India, with the hypothesis that ML could help address the challenging problem of reliable flood forecasting at scale. The pilot was further expanded the following year via the combination of an inundation model, real-time water level measurements, the creation of an elevation map and hydrologic modeling.

In collaboration with academics, and, in particular, with the JKU Institute for Machine Learning we explored ML-based hydrologic models, showing that LSTM-based models could produce more accurate simulations than traditional conceptual and physics-based hydrology models. This research led to flood forecasting improvements that enabled the expansion of our forecasting coverage to include all of India and Bangladesh. We also worked with researchers at Yale University to test technological interventions that increase the reach and impact of flood warnings.

Our hydrological models predict river floods by processing publicly available weather data, like precipitation, and physical watershed information. Such models must be calibrated to long data records from streamflow gauging stations in individual rivers. A low percentage of global river watersheds (basins) have streamflow gauges, which are expensive but necessary to supply relevant data, and it's challenging for hydrological simulation and forecasting to provide predictions in basins that lack this infrastructure. Lower gross domestic product (GDP) is correlated with increased vulnerability to flood risks, and there is an inverse correlation between national GDP and the amount of publicly available data in a country. ML helps to address this problem by allowing a single model to be trained on all available river data and to be applied to ungauged basins where no data are available. In this way, models can be trained globally, and can make predictions for any river location.

Our academic collaborations led to ML research that developed methods to estimate uncertainty in river forecasts and showed how ML river forecast models synthesize information from multiple data sources. They demonstrated that these models can simulate extreme events reliably, even when those events are not part of the training data. In an effort to contribute to open science, in 2023 we open-sourced a community-driven dataset for large-sample hydrology in Nature Scientific Data.

Most hydrology models used by national and international agencies for flood forecasting and river modeling are state-space models, which depend only on daily inputs (e.g., precipitation, temperature, etc.) and the current state of the system (e.g., soil moisture, snowpack, etc.). LSTMs are a variant of state-space models and work by defining a neural network that represents a single time step, where input data (such as current weather conditions) are processed to produce updated state information and output values (streamflow) for that time step. LSTMs are applied sequentially to make time-series predictions, and in this sense, behave similarly to how scientists typically conceptualize hydrologic systems. Empirically, we have found that LSTMs perform well on the task of river forecasting.

Our river forecast model uses two LSTMs applied sequentially: (1) a hindcast LSTM ingests historical weather data (dynamic hindcast features) up to the present time (or rather, the issue time of a forecast), and (2) a forecast LSTM ingests states from the hindcast LSTM along with forecasted weather data (dynamic forecast features) to make future predictions. One year of historical weather data are input into the hindcast LSTM, and seven days of forecasted weather data are input into the forecast LSTM. Static features include geographical and geophysical characteristics of watersheds that are input into both the hindcast and forecast LSTMs and allow the model to learn different hydrological behaviors and responses in various types of watersheds.

Output from the forecast LSTM is fed into a head layer that uses mixture density networks to produce a probabilistic forecast (i.e., predicted parameters of a probability distribution over streamflow). Specifically, the model predicts the parameters of a mixture of heavy-tailed probability density functions, called asymmetric Laplacian distributions, at each forecast time step. The result is a mixture density function, called a Countable Mixture of Asymmetric Laplacians (CMAL) distribution, which represents a probabilistic prediction of the volumetric flow rate in a particular river at a particular time.
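As a rough illustration of this hindcast/forecast arrangement, the sketch below shows two stacked LSTMs in PyTorch, with static watershed features concatenated to each time step and a linear head emitting per-step mixture parameters; the layer sizes, the four-parameter-per-component head, and the feature handling are assumptions for illustration, not the published model:

    import torch
    import torch.nn as nn

    class HindcastForecastLSTM(nn.Module):
        """Illustrative two-LSTM layout: hindcast states initialize the forecast LSTM."""
        def __init__(self, hindcast_dim, forecast_dim, static_dim, hidden=64, n_mixtures=3):
            super().__init__()
            self.hindcast_lstm = nn.LSTM(hindcast_dim + static_dim, hidden, batch_first=True)
            self.forecast_lstm = nn.LSTM(forecast_dim + static_dim, hidden, batch_first=True)
            # Mixture-density head: e.g., weight, location, scale, asymmetry per component
            self.head = nn.Linear(hidden, n_mixtures * 4)

        def forward(self, hindcast_x, forecast_x, static_x):
            # Append static watershed attributes to every time step of both input sequences
            t_h, t_f = hindcast_x.size(1), forecast_x.size(1)
            h_in = torch.cat([hindcast_x, static_x.unsqueeze(1).expand(-1, t_h, -1)], dim=-1)
            f_in = torch.cat([forecast_x, static_x.unsqueeze(1).expand(-1, t_f, -1)], dim=-1)
            _, state = self.hindcast_lstm(h_in)       # encode ~1 year of observed weather
            out, _ = self.forecast_lstm(f_in, state)  # roll forward over the 7-day forecast
            return self.head(out)                     # distribution parameters per lead time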

The model uses three types of publicly available data inputs, mostly from governmental sources:

Training data are daily streamflow values from the Global Runoff Data Center over the time period 1980 - 2023. A single streamflow forecast model is trained using data from 5,680 diverse watershed streamflow gauges (shown below) to improve accuracy.

We compared our river forecast model with GloFAS version 4, the current state-of-the-art global flood forecasting system. These experiments showed that ML can provide accurate warnings earlier and over larger and more impactful events.

The figure below shows the distribution of F1 scores when predicting different severity events at river locations around the world, within plus or minus one day. F1 is the harmonic mean of precision and recall, and event severity is measured by return period. For example, a 2-year return period event is a volume of streamflow that is expected to be exceeded on average once every two years. Our model achieves reliability scores at up to 4-day or 5-day lead times that are similar to or better, on average, than the reliability of GloFAS nowcasts (0-day lead time).
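For reference, a quick illustration of how the F1 metric combines precision and recall (the values here are arbitrary, not results from the paper):

    def f1_score(precision: float, recall: float) -> float:
        """F1 is the harmonic mean of precision and recall."""
        return 2 * precision * recall / (precision + recall)

    # For example, precision 0.8 and recall 0.6 give F1 ≈ 0.686,
    # lower than their arithmetic mean of 0.7.
    print(round(f1_score(0.8, 0.6), 3))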

Additionally (not shown), our model achieves accuracies over larger and rarer extreme events, with precision and recall scores over 5-year return period events that are similar to or better than GloFAS accuracies over 1-year return period events. See the paper for more information.

The flood forecasting initiative is part of our Adaptation and Resilience efforts and reflects Google's commitment to address climate change while helping global communities become more resilient. We believe that AI and ML will continue to play a critical role in helping advance science and research towards climate action.

We actively collaborate with several international aid organizations (e.g., the Centre for Humanitarian Data and the Red Cross) to provide actionable flood forecasts. Additionally, in an ongoing collaboration with the World Meteorological Organization (WMO) to support early warning systems for climate hazards, we are conducting a study to help understand how AI can help address real-world challenges faced by national flood forecasting agencies.

While the work presented here demonstrates a significant step forward in flood forecasting, future work is needed to further expand flood forecasting coverage to more locations globally and other types of flood-related events and disasters, including flash floods and urban floods. We are looking forward to continuing collaborations with our partners in the academic and expert communities, local governments and the industry to reach these goals.

Excerpt from:
Using AI to expand global access to reliable flood forecasts - Google Research