Category Archives: Deep Mind

Google DeepMind AI discovers 380,000 new materials unknown to humanity and a robot chemist is already mak… – The US Sun

HUNDREDS of thousands of new materials have been discovered by scientists, thanks to the help of AI-driven robots.

Researchers with Google's artificial intelligence research lab DeepMind have recently uncovered over 2.2 million new types of crystals, including 380,000 materials that are stable.

Two papers published in Nature detail the findings from the researchers and scientists from DeepMind and the University of California, Berkeley.

Not only did the researchers celebrate that the finding is equivalent to nearly 800 years' worth of knowledge, they indicated that their discovery could have a strong impact on future technological advances, particularly in uncovering new materials.

Specifically, they claim that the 380,000 most stable materials discovered have the potential to be used in superconductors, supercomputers, and even batteries in electric vehicles.

This discovery was largely made possible by DeepMind's Graph Networks for Materials Exploration (GNoME), a deep-learning AI tool.

In order to uncover millions of materials, GNoME was reportedly trained on information about various crystal structures and their stability, using data supplied by the Materials Project.

From there, the scientists had robots driven by the GNoME AI generate novel candidate crystals and predict their stability, essentially cooking up new materials.

These robotic chefs quickly went to work, successfully synthesizing 41 materials out of 58 within 17 days in their "kitchen" at A-Lab, a facility at Berkeley Lab.

"We now have the capability to rapidly make these new materials we come up with computationally,Berkeley materials scientists and A-Lab leader Gerbrand Ceder told Nature.com.

Ceder insisted that this technology and AI-driven recipes for new materials will "change the world."

"Not A-Lab itself, but the knowledge and information that it generates," he said.

Within this research, the scientists also found that more than 500 of the stable material candidates were "promising" lithium-ion conductors, a critical component in batteries.

Not only did the researchers say they would make their database of the uncovered materials available to the scientific community, they also plan on providing the recipes developed by GNoME for further testing.

"This is really exciting, Alexander Ganose, a materials chemist at Imperial College London, told Science.org.

"It is enabling materials discovery across a much wider composition range. We might be able to find the materials of the future in this data set."

Google DeepMind has celebrated its findings as a major advance in the use of AI technology in scientific research.

"Our research and that of collaborators at the Berkeley Lab, Google Research, and teams around the world shows the potential to use AI to guide materials discovery, experimentation, and synthesis," researchers Amil Merchant and Ekin Dogus Cubuk said.

"We hope that GNoME together with other AI tools can help revolutionize materials discovery today and shape the future of the field."

View post:
Google DeepMind AI discovers 380,000 new materials unknown to humanity and a robot chemist is already mak... - The US Sun

DeepMind boss found a flaw in Musk’s Mars plan. He’s not the first. – Business Insider


Elon Musk hasn't been shy about his plans to colonize Mars.

For years, the billionaire has argued that humans must become a multiplanet species as quickly as possible to escape threats on Earth such as overpopulation.

Musk has said that by 2050, he plans to put 1 million people on the neighboring planet, with help from his space-exploration company, SpaceX.

Some of his contemporaries aren't so sure about the plan.

DeepMind CEO Demis Hassabis was among the latest to point out an issue in the plans, The New York Times reported.

While Hassabis agreed the plan could work in theory, Musk was left speechless by Hassabis' suggestion that a superintelligent artificial intelligence could follow him to Mars and destroy humanity, the Times reported.

The billionaire hadn't considered that risk and was so concerned he later invested in DeepMind to stay close to the technology, the report said.

Musk, a vocal AI doomsayer, has since launched his own AI company and an AI-powered chatbot.

Hassabis is far from the first to poke holes in the Tesla CEO and X owner's ambitions.

The Microsoft cofounder Bill Gates previously said Musk's ambition to colonize Mars wasn't a good use of money. Gates said funding important healthcare such as vaccine development was a better use of funds.

Four scientists previously told Business Insider that the plan suffered from technical, scientific, and ethical flaws. They said forming a colony on another planetary body, such as the moon, was more realistic than settling on the red planet.

The filmmaker Werner Herzog also took aim at the plans, once saying he thought the proposal was a "mistake" and an "obscenity." Herzog said he thought humans should focus on keeping Earth habitable, rather than looking for a new home.

Hassabis and Musk did not immediately respond to requests for comment made outside normal working hours.


Read more here:
DeepMind boss found a flaw in Musk's Mars plan. He's not the first. - Business Insider

Google Training Gemini On Its Own Chips Reveals Another Of Its Advantages – Forbes


Google on Wednesday unveiled its highly anticipated new artificial intelligence model Gemini, an impressive piece of software that can solve math problems, understand images and audio and mimic human reasoning. But Gemini also reveals Google's unique advantage compared to other AI players: Google trained it on its own chips designed in-house, not the highly coveted GPUs the rest of the industry is scrambling to stockpile.

As the AI arms race has heated up, GPUs, or graphics processing units, have become a powerful currency in Silicon Valley. The scrum has turned Nvidia, a company founded 30 years ago that was primarily known for gaming, into a trillion-dollar behemoth. The White House has clamped down on chip exports to China in an attempt to keep the AI prowess of a foreign adversary at bay.

But analysts say the fact that Google DeepMind, the tech giant's AI lab, trained its marquee AI model on custom silicon highlights a major advantage large companies have over upstarts, in an age where giants like Google and Microsoft are already under intense scrutiny for their market dominance.

Google's compute hardware is so effective it was able to produce the industry's most cutting-edge model, apparently one-upping OpenAI's ChatGPT, which was largely built using Nvidia GPUs. Google claims that Gemini outperforms OpenAI's latest model GPT-4 in several key areas, including language understanding and the ability to generate code. Google said its TPUs allow Gemini to run significantly faster than earlier, less-capable models.

"If Google is delivering a GPT-4-beating model trained and run on custom silicon, we believe this could be a sign that AI tech stacks vertically integrated from silicon to software are indeed the future," Fred Havemeyer, head of U.S. AI research at the financial services firm Macquarie, wrote in a note to clients. Havemeyer added, however, that Google is uniquely positioned to make use of custom chips like few others can, flexing its scale, budget, and expertise.

"Google showed that it's at least possible," Havemeyer told Forbes. "We think that's really interesting because right now the market has been really constrained by access to GPUs."

Big tech companies have been developing their own silicon for years, hoping to wean themselves off dependence on the chip giants. Google has spent nearly a decade developing its own AI chips, called Tensor Processing Units, or TPUs. Aside from helping to train Gemini, the company has used them to help read the names of signs captured by its roving Street View cameras and to develop protein-folding health tech for drug discovery. Amazon has also launched its own AI accelerator chips, called Trainium and Inferentia, and Facebook parent Meta announced its own chip, MTIA, earlier this year. Microsoft is reportedly working on custom silicon as well, code-named Athena. Apple, which has long designed its own silicon, unveiled a new chip earlier this year called R1, which powers the company's Vision Pro headset.

Lisa Su, CEO of the chip giant AMD, which has a smaller share of the GPU market, has shrugged off concerns that big tech customers could someday be competitors. "It's natural," she told Forbes earlier this year. She said it makes sense for companies to want to build their own components as they look for efficiencies in their operations, but she was doubtful big tech companies could match AMD's expertise built up over decades. "I think it's unlikely that any of our customers are going to replicate that entire ecosystem."

Google's new model has the potential to shake up the AI landscape. The company is releasing three versions of Gemini with varying levels of sophistication. The most powerful version, a model that can analyze text and images called Gemini Ultra, will be released early next year. The smallest version, Gemini Nano, will be used to power features on Google's flagship Pixel 8 Pro smartphone. The mid-level version, Gemini Pro, is now being used to power Bard, the company's generative chatbot launched earlier this year. The bot initially garnered a lukewarm reception, generating an incorrect answer during a promo video and wiping out $100 billion in Google parent Alphabet's market value. Gemini could be Google's best shot at overtaking OpenAI, after a bout of instability last month as CEO Sam Altman was ousted and reinstated in a matter of days.

Google also used the Gemini announcement to unveil the newest version of its custom chips, the TPU v5p, which Google will make available to outside developers and companies to train their own AI. "This next-generation TPU will accelerate Gemini's development and help developers and enterprise customers train large-scale generative AI models faster, allowing new products and capabilities to reach customers sooner," Google CEO Sundar Pichai and DeepMind cofounder Demis Hassabis said in a blog post.

Gemini is the outcome of a massive push inside Google to speed up its shipping of AI products. Last November, the company was caught flat-footed when OpenAI released ChatGPT, a surprise hit that captured the public's imagination. The frenzy triggered a code red inside Google and prompted cofounder Sergey Brin, long absent after leaving his day-to-day role at the company in 2019, to begin coding again. In April, the company merged its two research labs, Google Brain and DeepMind, which had previously been notoriously distinct, in an attempt to give product development a push.

"These are the first models of the Gemini era and the first realization of the vision we had when we formed Google DeepMind earlier this year," Pichai said. "This new era of models represents one of the biggest science and engineering efforts we've undertaken as a company."

The rest is here:
Google Training Gemini On Its Own Chips Reveals Another Of Its Advantages - Forbes

This DeepMind AI Rapidly Learns New Skills Just by Watching Humans – Singularity Hub

Teaching algorithms to mimic humans typically requires hundreds or thousands of examples. But a new AI from Google DeepMind can pick up new skills from human demonstrators on the fly.

One of humanity's greatest tricks is our ability to acquire knowledge rapidly and efficiently from each other. This kind of social learning, often referred to as cultural transmission, is what allows us to show a colleague how to use a new tool or teach our children nursery rhymes.

It's no surprise that researchers have tried to replicate the process in machines. Imitation learning, in which AI watches a human complete a task and then tries to mimic their behavior, has long been a popular approach for training robots. But even today's most advanced deep learning algorithms typically need to see many examples before they can successfully copy their trainers.

When humans learn through imitation, they can often pick up new tasks after just a handful of demonstrations. Now, Google DeepMind researchers have taken a step toward rapid social learning in AI with agents that learn to navigate a virtual world from humans in real time.

"Our agents succeed at real-time imitation of a human in novel contexts without using any pre-collected human data," the researchers write in a paper in Nature Communications. "We identify a surprisingly simple set of ingredients sufficient for generating cultural transmission."

The researchers trained their agents in a specially designed simulator called GoalCycle3D. The simulator uses an algorithm to generate an almost endless number of different environments based on rules about how the simulation should operate and what aspects of it should vary.

In each environment, small blob-like AI agents must navigate uneven terrain and various obstacles to pass through a series of colored spheres in a specific order. The bumpiness of the terrain, the density of obstacles, and the configuration of the spheres vary between environments.

The agents are trained to navigate using reinforcement learning. They earn a reward for passing through the spheres in the correct order and use this signal to improve their performance over many trials. But in addition, the environments also feature an expert agent, either hard-coded or controlled by a human, that already knows the correct route through the course.

Over many training runs, the AI agents learn not only the fundamentals of how the environments operate, but also that the quickest way to solve each problem is to imitate the expert. To ensure the agents were learning to imitate rather than just memorizing the courses, the team trained them on one set of environments and then tested them on another. Crucially, after training, the team showed that their agents could imitate an expert and continue to follow the route even without the expert.

This required a few tweaks to standard reinforcement learning approaches.

The researchers made the algorithm focus on the expert by having it predict the location of the other agent. They also gave it a memory module. During training, the expert would drop in and out of environments, forcing the agent to memorize its actions for when it was no longer present. The AI also trained on a broad set of environments, which ensured it saw a wide range of possible tasks.
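As a rough illustration of those two tweaks, here is a minimal sketch in PyTorch of an agent with a recurrent memory module and an auxiliary head that predicts the expert's location. The names, dimensions, and losses are hypothetical stand-ins, not the paper's actual architecture.

```python
# Minimal sketch of the two tweaks described above: an LSTM memory module
# and an auxiliary head that predicts the expert's location.
# All names and dimensions are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class SocialLearner(nn.Module):
    def __init__(self, obs_dim=64, hidden_dim=128, num_actions=8):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)
        # Memory lets the agent keep following the route after the expert drops out.
        self.memory = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.policy_head = nn.Linear(hidden_dim, num_actions)  # action logits
        self.expert_head = nn.Linear(hidden_dim, 3)  # predicted expert (x, y, z)

    def forward(self, obs_seq, state=None):
        h = torch.relu(self.encoder(obs_seq))
        h, state = self.memory(h, state)
        return self.policy_head(h), self.expert_head(h), state

model = SocialLearner()
obs = torch.randn(1, 10, 64)  # one episode of 10 observation frames
logits, expert_pos, _ = model(obs)

# Training would add an auxiliary loss for predicting the expert's location
# to the usual reinforcement-learning objective (dummy target shown here).
aux_loss = nn.functional.mse_loss(expert_pos, torch.zeros(1, 10, 3))
```

The auxiliary loss is what forces the agent's attention onto the expert; the recurrent state is what lets it reproduce the route once the expert disappears.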

It might be difficult to translate the approach to more practical domains though. A key limitation is that when the researchers tested if the AI could learn from human demonstrations, the expert agent was controlled by one person during all training runs. That makes it hard to know whether the agents could learn from a variety of people.

More pressingly, the ability to randomly alter the training environment would be difficult to recreate in the real world. And the underlying task was simple, requiring no fine motor control and occurring in highly controlled virtual environments.

Still, social learning progress in AI is welcome. If we're to live in a world with intelligent machines, finding efficient and intuitive ways to share our experience and expertise with them will be crucial.

Image Credit: Juliana e Mariana Amorim / Unsplash

See more here:
This DeepMind AI Rapidly Learns New Skills Just by Watching Humans - Singularity Hub

Deepmind’s AI discovers millions of new materials – Warp News

Google DeepMind's AI tool, Graph Networks for Materials Exploration (GNoME), has significantly expanded the horizon of materials science.

This innovative AI system has identified approximately 2.2 million new inorganic crystals, of which 380,000 are recognized as stable. This groundbreaking achievement is set to accelerate the pace of technological advancement dramatically.

Traditionally, the discovery of new materials, particularly inorganic crystal materials, has been a slow and meticulous process fraught with trial-and-error experimentation. The stability of these materials is crucial; a crystal that cannot maintain its structure is of little use in practical applications such as battery improvement or electronics enhancement.

GNoME addresses this challenge head-on, offering a pre-filtered list of stable materials for further research and experimentation.

Among the numerous discoveries, GNoME identified 52,000 new compounds similar to graphene, which hold immense promise for revolutionizing electronics through superconductors.

Moreover, the AI found 528 potential lithium-ion conductors, significantly more than previous studies, which could enhance the efficiency of rechargeable batteries.

Google has made these discoveries accessible to the broader scientific community by providing free access to this data. This move is expected to catalyze the synthesis and experimental exploration of these new materials, potentially leading to transformative technological developments.

Further enhancing the potential of these discoveries, DeepMind has collaborated with Berkeley Lab to develop a robotic laboratory capable of autonomously synthesizing new crystals. This autonomous lab has already synthesized 41 new materials, demonstrating the potential for even faster progress in materials science.

We've written previously about A-lab:

AI-driven lab search for new materials 24/7

An automated system operates continuously, day and night, to produce novel inorganic materials with the potential to enhance batteries, fuel cells, and superconductors.

WALL-Y is an AI bot created in ChatGPT. Learn more about WALL-Y and how we develop her. You can find her news here.

Read the original post:
Deepmind's AI discovers millions of new materials - Warp News

Google DeepMind Introduces GNoME: A New Deep Learning Tool that Dramatically Increases the Speed and Efficiency of Discovery by Predicting the…

Inorganic crystals are essential to many contemporary technologies, including computer chips, batteries, and solar panels. Every new, stable crystal results from months of meticulous experimentation, and stable crystals are essential for enabling new technologies since they do not dissolve.

Researchers have engaged in costly, trial-and-error experiments that yielded only limited results. They sought new crystal structures by modifying existing crystals or trying other element combinations. In the past ten years, 28,000 novel materials have been found thanks to computational methods spearheaded by the Materials Project and others. Until now, a major limitation has been the inability of emerging AI-guided techniques to reliably forecast materials that would be experimentally viable.

Researchers from the Lawrence Berkeley National Laboratory and Google DeepMind have published two papers in Nature demonstrating the potential of these AI predictions for autonomous material synthesis. The study reports the discovery of 2.2 million more crystals, equivalent to approximately 800 years' worth of knowledge. Their new deep learning tool, Graph Networks for Materials Exploration (GNoME), predicts the stability of novel materials, greatly improving the speed and efficiency of discovery. GNoME exemplifies the promise of AI in the large-scale discovery and development of novel materials. Separate yet contemporaneous efforts by scientists in different laboratories across the globe have already produced 736 of these novel structures.

GNoME has effectively doubled the number of technically feasible materials. Among its 2.2 million predictions, 380,000 show the greatest promise for experimental synthesis because of their stability. These candidates include materials with the potential to create next-generation batteries that improve the efficiency of electric vehicles and superconductors that power supercomputers.

GNoME is a state-of-the-art graph neural network (GNN) model. Because GNN input data takes the form of a graph analogous to the connections between atoms, GNNs are well suited to discovering novel crystalline materials.

Data on crystal structures and their stability, initially used to train GNoME, are publicly available through the Materials Project. The use of active learning as a training method significantly improved GNoME's efficiency. The researchers generated new crystal candidates and predicted their stability using GNoME. To evaluate its predictive power throughout progressive training cycles, they repeatedly checked the model's performance using Density Functional Theory (DFT), a well-established computational method in physics, chemistry, and materials science for understanding atomic structures, which is crucial for evaluating crystal stability. The high-quality DFT results were then fed back into model training.
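In outline, that active-learning loop alternates between cheap GNN predictions and expensive DFT checks. The sketch below is a hedged illustration of the idea, with random stubs standing in for both the GNN and the DFT calculation; it is not DeepMind's actual pipeline.

```python
# Illustrative active-learning loop: rank candidates with a cheap learned
# model, verify the best with an expensive ground-truth check, retrain.
# Both functions are random stand-ins, not DeepMind's actual code.
import random

def gnn_predict_stability(candidate):
    """Stand-in for GNoME's GNN: returns a predicted stability score."""
    return random.random()

def dft_verify(candidate):
    """Stand-in for a DFT calculation: returns ground-truth stability."""
    return random.random() < 0.5

training_data = [("seed_crystal_from_materials_project", True)]
candidates = [f"candidate_{i}" for i in range(1000)]

for cycle in range(3):  # progressive training cycles
    # 1. Rank all candidates by the model's predicted stability.
    ranked = sorted(candidates, key=gnn_predict_stability, reverse=True)
    # 2. Verify only the most promising predictions with expensive DFT.
    verified = [(c, dft_verify(c)) for c in ranked[:50]]
    # 3. Feed the high-quality DFT labels back into the training set.
    training_data.extend(verified)
    # (In the real pipeline, the GNN would be retrained here before the
    # next cycle, steadily improving its hit rate.)
```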

The findings show that the research increased the precision of materials stability predictions from approximately 50% to 80%, using an external benchmark set by earlier state-of-the-art models as a guide. Enhancements to the model's efficiency boosted the discovery rate from below 10% to over 80%; these gains in efficiency could have a major bearing on the computing power needed for each discovery.

The autonomous lab produced 41 novel materials using ingredients from the Materials Project and stability information from GNoME, paving the way for further advancements in AI-driven materials synthesis.

GNoME's predictions have been released to the scientific community. The researchers are providing the 380,000 stable materials to the Materials Project, which is analyzing the compounds and adding them to its online database. With the help of these resources, they hope that the community will study inorganic crystals further and realize the potential of machine learning technologies as experimental guides.

Check out Paper 1 and Paper 2 and the reference article. All credit for this research goes to the researchers of this project.

See the article here:
Google DeepMind Introduces GNoME: A New Deep Learning Tool that Dramatically Increases the Speed and Efficiency of Discovery by Predicting the...

Google unveils Gemini, its largest AI model, to take on OpenAI – Moneycontrol

Google parent Alphabet on December 6 unveiled Gemini, its largest and most capable AI model to date, as the tech giant looks to take on rivals OpenAI's GPT-4 and Meta's Llama 2 in a race to lead the nascent artificial intelligence (AI) space.

This is the first AI model from Alphabet after the merger of its AI research units, DeepMind and Google Brain, into a single division called Google DeepMind, led by DeepMind CEO Demis Hassabis.

Gemini has been built from the ground up and is "multimodal" in nature, meaning it can understand and work with different types of information, including text, code, audio, image and video, at the same time.

The AI model will be available in three different sizes: Ultra (for highly complex tasks), Pro (for scaling across a wide range of tasks) and Nano (on-device tasks).

"These are the first models of the Gemini era and the first realisation of the vision we had when we formed Google DeepMind earlier this year. This new era of models represents one of the biggest science and engineering efforts weve undertaken as a company," said Alphabet CEO Sundar Pichai.

Gemini Pro will be accessible to developers through the Gemini API in Google AI Studio and Google Cloud Vertex AI starting December 13.
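For developers, access looks roughly like the sketch below, a minimal example using the google-generativeai Python SDK as documented at launch. The API key is a placeholder you would supply from Google AI Studio.

```python
# Minimal sketch of calling Gemini Pro via the google-generativeai SDK.
# Assumes: pip install google-generativeai, and an API key from Google AI Studio.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; use your own key

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Summarise what makes a multimodal model different.")
print(response.text)
```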

On the other hand, Gemini Nano will be accessible to Android developers through AICore, a new system capability introduced in Android 14. This capability will be made available on Pixel 8 Pro devices starting December 6, with plans to extend support to other Android devices in the future.


Gemini Ultra is currently being made available to select customers, developers, partners, and safety and responsibility experts for early experimentation and feedback, with a broader rollout to developers and enterprise customers early next year.


Google will also be using Gemini across all its products. Starting December 6, Bard will use a fine-tuned version of Gemini Pro for more advanced reasoning, planning, and understanding.

Meanwhile, Gemini Nano will be powering new features on Pixel 8 Pro smartphones like 'Summarise' in the Recorder app and will soon be available in Smart Reply in Gboard, starting with WhatsApp - with more messaging apps coming next year.

Gemini is also being used to make Google's generative AI search offering Search Generative Experience (SGE) faster for users. The company said that they witnessed a 40 percent reduction in latency in English in the United States, alongside improvements in quality.

Hassabis said that Gemini will be integrated into more of the company's products and services, including Search, Ads, Chrome, and Duet AI in the coming months.

'Transition to AI far bigger than mobile or web'

Pichai said that every technology shift is an opportunity to advance scientific discovery, accelerate human progress and improve lives.

"I believe the transition we are seeing right now with AI will be the most profound in our lifetimes, far bigger than the shift to mobile or the web before it," he said.

Pichai added: "AI has the potential to create opportunities - from the everyday to the extraordinary - for people everywhere. It will bring new waves of innovation and economic progress and drive knowledge, learning, creativity, and productivity on a scale we haven't seen before... We're only beginning to scratch the surface of what's possible."

Alphabet first previewed Gemini in its annual developer conference Google I/O in May 2023. This launch comes at a time when the tech giant is racing to catch up with Microsoft-backed OpenAI which released its latest AI model GPT-4 Turbo during its OpenAI DevDay last month. GPT-4 Turbo is an improved version of the AI upstart's flagship GPT-4 model that was released in March 2023.


Most flexible model yet

In a blog post, Hassabis said that Gemini Ultra's performance exceeds the current state-of-the-art results on 30 of the 32 widely used academic benchmarks used in large language model (LLM) research and development.

It is also the first model to outperform human experts on MMLU (massive multitask language understanding) benchmark, which uses a combination of 57 subjects such as math, physics, history, law, medicine and ethics for testing both world knowledge and problem-solving abilities.

Meanwhile, Gemini Pro outperformed GPT-3.5 in six of eight benchmarks including in MMLU and GSM8K (Grade School Math 8K), which measures grade school math reasoning, before its public launch, said Sissie Hsiao, Vice-President, Google Assistant and Bard.

"This is a significant milestone in the development of AI, and the start of a new era for us at Google as we continue to rapidly innovate and responsibly advance the capabilities of our models," Hassabis said.

Hassabis said that for a long time, they wanted to build a new generation of AI models, inspired by the way people understand and interact with the world. "AI that feels less like a smart piece of software and more like something useful and intuitive, an expert helper or assistant. Today, we're a step closer to this vision," he said.

He mentioned that Gemini is their most flexible model yet since it can run efficiently on everything from data centres to mobile devices and its capabilities will significantly enhance the way developers and enterprise customers build and scale with AI.

Hassabis said that the multimodal reasoning capabilities of the first version of Gemini can help make sense of complex written and visual information, due to which it can extract insights from hundreds of thousands of documents through reading, filtering and understanding information.

He said it also better understands nuanced information and can answer questions relating to complicated topics, making it adept at explaining reasoning in complex subjects like math and physics.

The AI model can also understand, explain, and generate high-quality code in many popular programming languages, like Python, Java, C++ and Go.

"Were working hard to further extend its capabilities for future versions, including advances in planning and memory, and increasing the context window for processing even more information to give better responses," Hassabis said.

Considering Gemini's capabilities, Alphabet is also adding new protections building upon its safety policies and AI principles to tackle potential risks.

"Weve conducted novel research into potential risk areas like cyber-offence, persuasion, and autonomy, and have applied Google Researchs best-in-class adversarial testing techniques to help identify critical safety issues in advance of Geminis deployment," Hassabis said.

The company is also working with a diverse group of external experts and partners to stress-test their models across a range of issues, he said.

View original post here:
Google unveils Gemini, its largest AI model, to take on OpenAI - Moneycontrol

Google DeepMind's weather AI can forecast extreme weather faster and more accurately – MIT Technology Review

It said Hurricane Lee would make landfall in Nova Scotia three days sooner than traditional methods predicted.

This year the Earth has been hit by a record number of unpredictable extreme weather events made worse by climate change. Predicting them faster and with greater accuracy could enable us to prepare better for natural disasters and help save lives. A new AI model from Google DeepMind could make that easier.

In research published in Science today, Google DeepMind's model, GraphCast, was able to predict weather conditions up to 10 days in advance, more accurately and much faster than the current gold standard. GraphCast outperformed the model from the European Centre for Medium-Range Weather Forecasts (ECMWF) in more than 90% of over 1,300 test areas. And on predictions for Earth's troposphere, the lowest part of the atmosphere, where most weather happens, GraphCast outperformed the ECMWF's model on more than 99% of weather variables, such as rain and air temperature.

Crucially, GraphCast can also offer meteorologists accurate warnings, much earlier than standard models, of conditions such as extreme temperatures and the paths of cyclones. In September, GraphCast accurately predicted that Hurricane Lee would make landfall in Nova Scotia nine days in advance, says Rémi Lam, a staff research scientist at Google DeepMind. Traditional weather forecasting models pinpointed the hurricane to Nova Scotia only six days in advance.


"Weather prediction is one of the most challenging problems that humanity has been working on for a long, long time. And if you look at what has happened in the last few years with climate change, this is an incredibly important problem," says Pushmeet Kohli, the vice president of research at Google DeepMind.

Traditionally, meteorologists use massive computer simulations to make weather predictions. They are very energy intensive and time consuming to run, because the simulations take into account many physics-based equations and different weather variables such as temperature, precipitation, pressure, wind, humidity, and cloudiness, one by one.

GraphCast uses machine learning to do these calculations in under a minute. Instead of using the physics-based equations, it bases its predictions on four decades of historical weather data. GraphCast uses graph neural networks, which map Earth's surface into more than a million grid points. At each grid point, the model predicts the temperature, wind speed and direction, and mean sea-level pressure, as well as other conditions like humidity. The neural network is then able to find patterns and draw conclusions about what will happen next for each of these data points.
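In practice, models like this roll a forecast forward autoregressively: each predicted state is fed back in as the input for the next step. The sketch below illustrates that loop in numpy with a dummy stand-in for the learned network; the grid is shrunk for readability (GraphCast's real grid has over a million points).

```python
# Conceptual sketch of an autoregressive forecast rollout.
# model_step is a dummy stand-in for the trained graph neural network.
import numpy as np

NUM_GRID_POINTS = 10_000  # illustrative; GraphCast uses over a million
NUM_VARIABLES = 5         # e.g. temperature, wind speed, wind direction,
                          # mean sea-level pressure, humidity

def model_step(state):
    """Stand-in for the learned model: current state -> state 6 hours later."""
    return state + 0.01 * np.random.randn(*state.shape)

state = np.zeros((NUM_GRID_POINTS, NUM_VARIABLES))  # current conditions
forecast = []
for step in range(40):         # 40 six-hour steps = a 10-day forecast
    state = model_step(state)  # each output becomes the next input
    forecast.append(state)
```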

For the past year, weather forecasting has been going through a revolution as models such as GraphCast, Huawei's Pangu-Weather, and Nvidia's FourCastNet have made meteorologists rethink the role AI can play in weather forecasting. "GraphCast improves on the performance of other competing models, such as Pangu-Weather, and is able to predict more weather variables," says Lam. The ECMWF is already using it.

When Google DeepMind first debuted GraphCast last December, "it felt like Christmas," says Peter Dueben, head of Earth system modeling at ECMWF, who was not involved in the research.

"It showed that these models are so good that we cannot avoid them anymore," he says.

GraphCast is a reckoning moment for weather prediction because it shows that predictions can be made using historical data, says Aditya Grover, an assistant professor of computer science at UCLA, who developed ClimaX, a foundation model that allows researchers to do different tasks relating to modeling Earth's weather and climate.

"DeepMind's model is great work and extremely exciting," says Oliver Fuhrer, the head of the numerical prediction department at MeteoSwiss, the Swiss Federal Office of Meteorology and Climatology. Fuhrer says that other weather agencies, such as the ECMWF and the Swedish Meteorological and Hydrological Institute, have also used the graph neural network architecture proposed by Google DeepMind to build their own models.

But GraphCast is not perfect. It still lags behind conventional weather forecasting models in some areas, such as precipitation, Dueben says. Meteorologists will still have to use conventional models alongside machine-learning models to offer better predictions.

Google DeepMind is also making GraphCast open source. "This is a good development," says UCLA's Grover.

"With climate change on the rise, it's very important that big organizations, which have had the luxury of so much compute, also think about giving back [to the scientific community]," he says.

Continued here:
Google DeepMind's weather AI can forecast extreme weather faster and more accurately - MIT Technology Review

Google DeepMind’s AI Weather Forecaster Handily Beats a Global Standard – WIRED

In September, researchers at Google's DeepMind AI unit in London were paying unusual attention to the weather across the pond. Hurricane Lee was at least 10 days out from landfall, eons in forecasting terms, and official forecasts were still waffling between the storm landing on major Northeast cities or missing them entirely. DeepMind's own experimental software had made a very specific prognosis of landfall much farther north. "We were riveted to our seats," says research scientist Rémi Lam.

A week and a half later, on September 16, Lee struck land right where DeepMind's software, called GraphCast, had predicted days earlier: Long Island, Nova Scotia, far from major population centers. It added to a breakthrough season for a new generation of AI-powered weather models, including others built by Nvidia and Huawei, whose strong performance has taken the field by surprise. Veteran forecasters told WIRED earlier this hurricane season that meteorologists' serious doubts about AI have been replaced by an expectation of big changes ahead for the field.

Today, Google shared new, peer-reviewed evidence of that promise. In a paper published today in Science, DeepMind researchers report that its model bested forecasts from the European Centre for Medium-Range Weather Forecasting (ECMWF), a global giant of weather prediction, across 90 percent of more than 1,300 atmospheric variables such as humidity and temperature. Better yet, the DeepMind model could be run on a laptop and spit out a forecast in under a minute, while the conventional models require a giant supercomputer.

An AI-based weather model's ten-day forecast for Hurricane Lee in September accurately predicted where it would make landfall.

Fresh Air

Standard weather simulations make their predictions by attempting to replicate the physics of the atmosphere. They've gotten better over the years, thanks to better math and fine-grained weather observations from growing armadas of sensors and satellites. They're also cumbersome. Forecasts at major weather centers like the ECMWF or the US National Oceanic and Atmospheric Administration can take hours to compute on powerful servers.

When Peter Battaglia, a research director at DeepMind, first started looking at weather forecasting a few years ago, it seemed like the perfect problem for his particular flavor of machine learning. DeepMind had already taken on local precipitation forecasts with a system, called NowCasting, trained with radar data. Now his team wanted to try predicting weather on a global scale.

Battaglia was already leading a team focused on applying AI systems called graph neural networks, or GNNs, to model the behavior of fluids, a classic physics challenge that can describe the movement of liquids and gases. Given that weather prediction is at its core about modeling the flow of molecules, tapping GNNs seemed intuitive. While training these systems is heavy-duty, requiring hundreds of specialized graphics processing units, or GPUs, to crunch tremendous amounts of data, the final system is ultimately lightweight, allowing forecasts to be generated quickly with minimal computer power.

GNNs represent data as mathematical graphs: networks of interconnected nodes that can influence one another. In the case of DeepMind's weather forecasts, each node represents a set of atmospheric conditions at a particular location, such as temperature, humidity, and pressure. These points are distributed around the globe and at various altitudes, a literal cloud of data. The goal is to predict how all the data at all those points will interact with their neighbors, capturing how conditions will shift over time.
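A toy version of that idea: each node carries a feature vector, and one round of message passing updates it using its neighbors' features. This is a generic GNN update for illustration, not GraphCast's actual architecture.

```python
# Toy message-passing step for a small graph of weather nodes.
# A generic GNN update for illustration, not GraphCast's architecture.
import numpy as np

num_nodes, feat_dim = 6, 4
# Each row: one node's conditions, e.g. temperature, humidity, pressure, wind.
features = np.random.randn(num_nodes, feat_dim)
# Edges say which nodes are neighbors and can influence each other.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]

W_self = np.random.randn(feat_dim, feat_dim)
W_nbr = np.random.randn(feat_dim, feat_dim)

# Sum messages arriving at each node from its neighbors (both directions).
agg = np.zeros_like(features)
for src, dst in edges:
    agg[dst] += features[src]
    agg[src] += features[dst]

# Combine each node's own features with its aggregated neighborhood.
updated = np.tanh(features @ W_self + agg @ W_nbr)
```

In a trained model the weight matrices are learned, and many such rounds let information propagate across the globe.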

More here:
Google DeepMind's AI Weather Forecaster Handily Beats a Global Standard - WIRED

DeepMind says new AI is world’s most accurate 10-day weather forecaster – TNW

A new AI model from Google DeepMind is the world's most accurate 10-day global weather forecasting system, according to the London-based lab.

Named GraphCast, the model promises medium-range weather forecasts of unprecedented accuracy. In a study published today, GraphCast was shown to be more precise and faster than the industry gold standard for weather simulation, the High-Resolution Forecast (HRES).

The system also predicted extreme weather further into the future than was previously possible.

These insights were analysed by the European Centre for Medium-Range Weather Forecasts (ECMWF), an intergovernmental organisation that produces the HRES.

A live version of GraphCast was deployed on the ECMWF website. In September, the system accurately predicted around nine days in advance that Hurricane Lee would make landfall in Nova Scotia.

In contrast, traditional forecasting methods only spotlighted Nova Scotia about six days beforehand. They also provided less consistent predictions of the time and location of landfall.

Intriguingly, GraphCast can identify dangerous weather events without being trained to find them. After integrating a simple cyclone tracker, the model predicted cyclone movements more accurately than the HRES method.
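A "simple cyclone tracker" of the sort described here can be as basic as following the deepest low in the forecast pressure field from step to step. The sketch below is a hedged illustration of that idea, not the tracker the researchers actually integrated.

```python
# Sketch of a very simple cyclone tracker: follow the minimum of the
# forecast mean-sea-level-pressure field over time. Illustrative only.
import numpy as np

lat = np.linspace(-90, 90, 181)
lon = np.linspace(0, 359, 360)
# 40 forecast steps of fake pressure maps (hPa) standing in for model output.
forecast_mslp = 1013.0 + 5.0 * np.random.randn(40, 181, 360)

track = []
for step in range(forecast_mslp.shape[0]):
    # The storm center at this step is taken as the deepest low.
    i, j = np.unravel_index(np.argmin(forecast_mslp[step]),
                            forecast_mslp[step].shape)
    track.append((lat[i], lon[j]))
```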

Such data could save lives and livelihoods. As the climate becomes more extreme and unpredictable, fast and accurate forecasts will provide increasingly vital insights for disaster planning.

Matthew Chantry, a machine learning coordinator at the ECMWF, believes his industry has reached an inflection point.

"There's probably more work to be done to create reliable operational products, but this is likely the beginning of a revolution," Chantry said at a press briefing.

Meteorological organisations, he added, had previously expected AI to be most useful when merged with physics. But recent breakthroughs show that machine learning can also directly forecast the weather.

Conventional weather forecasts are based on intricate physics equations. These are then adapted into algorithms that run on supercomputers.

The process can be painstaking. It also requires specialist knowledge and vast computing resources.

GraphCast harnesses a different technique. The model combines machine learning with Graph Neural Networks (GNNs), an architecture thats adept at processing spatially structured data.

To learn the causes and effects that determine weather changes, the system was trained on decades of weather information.

Traditional approaches are also incorporated. The ECMWF supplied GraphCast with training data from around 40 years of weather reanalysis, which encompassed monitoring from satellites, radars and weather stations.

When there are gaps in the observations, physics-based prediction methods fill them in. The result is a detailed history of global weather. GraphCast uses these lessons from the past to predict the future.

GraphCast makes predictions at a spatial resolution of 0.25 degrees latitude/longitude.

To put that into perspective, imagine the Earth divided into a million grid points. At each point, the model predicts five Earth-surface variables and six atmospheric variables. Together, they cover the planet's entire atmosphere in 3D over 37 levels.

The variables encompass temperature, wind, humidity, precipitation, and sea-level pressure. They also incorporate geopotential, the gravitational potential energy of a unit mass at a particular location relative to mean sea level.
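To make the "million grid points" figure concrete, a quick back-of-the-envelope check of the numbers quoted above:

```python
# Back-of-the-envelope check of the grid sizes quoted above.
lons = int(360 / 0.25)       # 1440 longitude columns at 0.25-degree spacing
lats = int(180 / 0.25) + 1   # 721 latitude rows, counting both poles
grid_points = lons * lats
print(grid_points)           # 1038240 -- "a million grid points"

surface_vars = 5             # five Earth-surface variables
atmos_vars = 6               # six atmospheric variables...
levels = 37                  # ...each predicted at 37 pressure levels
values_per_point = surface_vars + atmos_vars * levels
print(values_per_point)      # 227 values at every grid point
print(grid_points * values_per_point)  # ~236 million numbers per step
```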

In tests, the results were impressive. GraphCast significantly outperformed the most accurate operational deterministic systems on 90% of 1,380 test targets.

The disparity was even starker in the troposphere, the lowest layer of Earth's atmosphere and the location of most weather phenomena. In this region, GraphCast outperformed HRES on 99.7% of the test variables for future weather.

GraphCast is also highly efficient. A 10-day forecast takes under a minute to complete on a single Google TPU v4 machine.

A conventional approach, by comparison, can take hours of computation in a supercomputer with hundreds of machines.

Despite the promising early results, GraphCast could still benefit from further refinement. In the cyclone predictions, for instance, the model proved accurate at tracking movements, but less effective at measuring intensity.

Chantry is keen to see how much this can improve.

"At the moment, that's an area where GraphCast and machine learning models still lag a little bit behind physical models... I'm hopeful that this can be an area for further improvement, but this shows that it's still a nascent technology," he said.

Those improvements could now come from anywhere, because DeepMind has open-sourced the model code. Global organisations and individuals alike can now experiment with GraphCast and add their own improvements.

The potential applications are, ironically, unpredictable. The forecasts could, for instance, inform renewable energy production and air traffic routing. But they could also be applied to tasks that havent even been imagined.

"There's a lot of downstream use cases for weather forecasts," said Peter Battaglia, Google DeepMind's research director. "And we're not aware of all of those."

See the original post:
DeepMind says new AI is world's most accurate 10-day weather forecaster - TNW