Category Archives: Deep Mind
Google DeepMind CEO on AGI, OpenAI and Beyond MWC 2024 – AI Business
In 2010, Demis Hassabis co-founded what would become one of the most influential AI labs in the world: DeepMind, named after the term deep learning. The company, which Google acquired in 2014, had grand designs for building artificial general intelligence, or AGI.
How is that endeavor going?
"It's looking like it's going to be a more gradual process rather than a step function," he said during a keynote fireside chat at Mobile World Congress 2024 in Barcelona, Spain. Today's AI systems are becoming incrementally more powerful as the compute, techniques and data used are scaled up.
It is possible that significant advances will come in the next few years from new innovations that improve AI's ability to plan, remember and use tools, capabilities current-generation AI systems are missing. In the meantime, AI advances are already proving useful in many other endeavors.
The CEO defines AGI as a system that can perform almost any cognitive task that humans can. He said a human reference point is needed because the human brain is "the only proof we have, maybe in the universe, that general intelligence is possible."
But how will we know AGI when we see it? It is a question hotly debated in the field of AI. For Hassabis, it may either be obvious when it appears or require extensive testing to determine.
"One way is to actually test the systems on thousands and thousands of tasks that humans do and see if it passes a certain threshold on all of those tasks," he said. "And the more tasks you put into that test set, the more sure you can be you have the general space covered."
From left: Wired's Steven Levy and Google DeepMind CEO Demis Hassabis
Amid its quest to develop AGI, it was another AI system that helped cement DeepMind as a key player in the AI space: AlphaFold.
The system predicts protein structures; in 2022, it was used to map nearly all of the 200 million known proteins.
Commenting on the project at MWC, Hassabis used AlphaFold as an example of a non-general AI system that could be used to further human knowledge.
He said it would have taken a billion years of work by a person with a doctorate to map every known protein, something his team did in just one year.
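The arithmetic behind that claim is easy to reproduce. A minimal sketch, assuming roughly five person-years of doctorate-level work per protein structure (that per-protein figure is my assumption, not a number Hassabis gave):

```python
# Back-of-the-envelope check of the "billion years" figure.
known_proteins = 200_000_000   # proteins mapped by AlphaFold, per the article
years_per_protein = 5          # assumed person-years per structure (not from Hassabis)

total_person_years = known_proteins * years_per_protein
print(f"{total_person_years:,} person-years")  # 1,000,000,000 person-years
```

With that assumption, the math lands exactly on the billion person-years Hassabis cites.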
Over a million researchers have used the model, according to the Google DeepMind CEO, but he wants the model to power drug discovery.
And that is a goal parent company Alphabet has in mind: it formed Isomorphic Labs in 2021 to reimagine drug discovery with AI systems like AlphaFold 2.
Isomorphic penned deals with pharma giants Novartis and Eli Lilly in January to use AI to design new drugs. According to Hassabis, drugs designed by AI will hit clinics in the next couple of years.
Related:DeepMind AI System Predicts Structure of Nearly All Known Proteins
"It's really having a material impact now on drug discovery, and I hope that drug discovery will shrink from 10 years to discover one drug down to maybe a matter of months to discover drugs to cure these terrible diseases," he said.
Hassabis noted that most of the major AI innovations of the past decade came from Google Research, Brain and DeepMind. "OpenAI actually took these ideas and techniques and applied Silicon Valley growth mentality, hacker mentality to it, and scaled it to sort of maximum speed," he said.
OpenAI's unusual path to success with its models, he suggested, came not from a new innovation but from scaling existing ones.
"I don't think anyone predicted it, maybe even including them, that these new capabilities would just emerge just through scale, not from inventing some new innovation, but actually just sort of scaling," Hassabis said.
"And it's quite unusual in the history of most scientific technology fields where you get step-changing capability by doing the same thing, just bigger; that doesn't happen very often. Usually, you just get incremental capabilities, and normally you have to have some new insight or some new flash of inspiration, or some new breakthrough, in order to get a step change. And that wasn't the case here."
The other surprising thing was that, with ChatGPT, the general public seemed ready to use these systems even though they clearly have flaws: "hallucinations, they're not factual," Hassabis said.
Google's thinking was that these systems needed to be 100 times more accurate before being released, "but OpenAI just released it and it turns out millions of people found value out of that," he added. "It didn't have to be 100% accurate for there to be some valuable use cases there, so I think that was surprising for the whole industry."
Hassabis said they also thought these systems would have narrower use cases for scientists and other specific professions. But the general public was willing to use slightly messier systems and find value and use cases for them, which precipitated a change in Google's outlook.
This led to Google merging Google Brain, a team within Google Research, with DeepMind in April 2023. The goal was "to combine all of our compute together and engineering talent together to build the biggest possible things we can," he said. "Gemini, our most advanced, most capable AI model, is one of the fruits of that combination."
What does Hassabis believe the future of AI will look like? He said last May that DeepMind's dream of AGI may be coming in a few years, but for now, his team is exploring new areas in which to apply AI.
One of those areas is materials science: using AI to help discover new types of materials.
"I dream of one day discovering a room-temperature superconductor; it may exist in chemical space, but we just haven't found it as human chemists and materials scientists," he said.
Google DeepMind is also looking at applying AI to weather prediction and climate change, as well as mathematics.
He also said that the next generation of smart assistants will be useful in people's daily lives rather than "sort of gimmicky," as they were in the previous generation.
Users are already seeing smarter and more adaptable phones, sporting Google's Gemini features and a new capability to search just by circling an image.
"But in five or more years, is the phone even really going to be the perfect form factor?" he asked. "Maybe we need glasses or some other things so that the AI system can actually see a bit of the context that you're in to be even more helpful in your daily life."
Google DeepMind jumps back into open source AI race with new model Gemma – VentureBeat
Today Google DeepMind unveiled Gemma, its new 2B and 7B open source models, built from the same research and technology used to create the company's recently announced Gemini models.
The Gemma models will be released in pre-trained and instruction-tuned variants, Google DeepMind said in a blog post. The model weights will be released under a permissive commercial license, alongside a new Responsible Generative AI toolkit.
Google is also providing toolchains for inference and supervised fine-tuning (SFT) across all major frameworks: JAX, PyTorch, and TensorFlow through native Keras 3.0. There are ready-to-use Colab and Kaggle notebooks, and Gemma is integrated with Hugging Face, MaxText, and NVIDIA NeMo. Pre-trained and instruction-tuned Gemma models can run on a laptop, workstation, or Google Cloud with deployment on Vertex AI and Google Kubernetes Engine.
Nvidia also announced today that, in collaboration with Google, it had launched optimizations across all NVIDIA AI platforms, including local RTX AI PCs, to accelerate Gemma performance.
Jeanine Banks, vice president and general manager of developer X and head of developer relations at Google, told VentureBeat at a press briefing that the Gemma models felt like a continuation of Google's history of open sourcing tech for AI development, from tools like TensorFlow and JAX to models and AI systems like PaLM 2 and AlphaFold, leading up to Gemini.
She also said that through feedback gathered during the development of the Gemini models, Google DeepMind gained a key insight: in some cases, developers will use both open models and APIs in a complementary way in their workflow, depending on the stage of the workflow they're in.
As developers experiment and do early prototyping, she explained, it may be easiest to start with an API to test out prompts, then turn to customizing and fine-tuning with open models. "We felt that it would be perfect if Google could be the only provider of both APIs and open models to offer the widest set of capabilities for the community to work with."
Tris Warkentin, director of product management for Google DeepMind, told VentureBeat at the press briefing that the company will be releasing a full set of benchmarks evaluating Gemma against other models, which anyone can see on the OpenLLM leaderboards right away.
"We are partnering with both Nvidia and Hugging Face, so pretty much any benchmark that is in the public sphere has been run against these models," he said. "It is a fully transparent and community-open kind of an approach, so it is something that we're actually quite proud of because when you look at the numbers, I think we've done a pretty darn good job."
Warkentin also emphasized Gemma's safety: "These all have been extensively evaluated to be the safest models that we could possibly put out into the market at these sizes, along with pre-training and evaluation," he said.
The Google DeepMind blog post said that Gemma is "designed with our AI Principles at the forefront. As part of making Gemma pre-trained models safe and reliable, we used automated techniques to filter out certain personal information and other sensitive data from training sets. Additionally, we used extensive fine-tuning and reinforcement learning from human feedback (RLHF) to align our instruction-tuned models with responsible behaviors. To understand and reduce the risk profile for Gemma models, we conducted robust evaluations including manual red-teaming, automated adversarial testing, and assessments of model capabilities for dangerous activities. These evaluations are outlined in our Model Card."
In addition to safety, Warkentin emphasized the role of the open ecosystem in fostering responsible AI.
"We think it is really critical we need diverse perspectives from developers and researchers worldwide, in order to get the right feedback and build even better safety systems," he said. "So part of the open model journey is to make sure that we're integrating [those perspectives] and that feedback, that communication with the community, is a critical part of the way that we view the value of this project."
AI will design drugs in the next couple of years, says Google Deepmind boss – City A.M.
Tuesday 27 February 2024 6:00 am
The chief executive of Google DeepMind has said artificial intelligence (AI) could be designing drugs in clinics within the next couple of years.
Demis Hassabis, who founded DeepMind in the UK in 2010, said AI will have a "material impact" on drug discovery.
"I think in the next couple of years we're going to start seeing AI-designed drugs in the clinic," he said, speaking to an audience of global telecoms industry players gathered at the Mobile World Congress in Barcelona on Monday afternoon.
Deepmind, bought by Google in 2014, is an AI research lab and the creator of a system called Alphafold that can predict protein structures, potentially accelerating drug discovery.
"If you know the structure of a protein, it also means that you could target a drug compound to bind to the relevant bit of the surface of the protein's structure," he explained.
In 2021, Hassabis also founded the London-based drug discovery company Isomorphic Labs, owned by Google parent company Alphabet.
It uses AI to generate new chemical compounds that bind specifically to the exact part of the protein but no other protein, minimising side effects on the body.
"I hope that drug discovery will shrink from an average of 10 years to design one drug to maybe a matter of months to discover drugs to cure these terrible diseases," Hassabis added.
At the start of this year, Isomorphic signed large deals with two of the biggest pharma companies in the world, Eli Lilly and Novartis, worth up to $3 billion, to work on several real-world design programmes.
It comes as an increasing number of health tech companies are attempting to use AI to solve critical medical delays. One example is Cambridge-based Nuclera, which helps accelerate drug discovery by rapidly finding the correct proteins needed to create new medicines and vaccines.
Nuclera says it can reduce the lengthy process, which can take months or even years in some cases, down to days.
There are nearly 4,500 health tech companies in the UK, which have a combined turnover of £30bn, according to the Association of British Healthtech Industries.
A recent report said: "Responsible AI use has immense potential and its value in the health sector was widely discussed during the recent UK AI Safety Summit. Industry overwhelmingly felt that AI has the greatest potential in disease and diagnosis detection."
Google to relaunch ‘woke’ Gemini AI image tool in few weeks: ‘Not working the way we intended’ – New York Post
Google said it plans to relaunch its artificial intelligence image generation software within the next few weeks, after taking it offline in response to an uproar over what critics called "absurdly woke" depictions of historical scenes.
Though the Gemini chatbot remains up and running, Google paused its image AI feature last week after it generated female NHL players, African American Vikings and Founding Fathers, as well as an Asian woman dressed in 1943 military garb when asked for an image of a Nazi-era German soldier.
"We have taken the feature offline while we fix that. We are hoping to have that back online very shortly in the next couple of weeks, few weeks," Google DeepMind CEO Demis Hassabis said Monday.
The tool was "not working the way we intended," Hassabis added, speaking on a panel at the Mobile World Congress in Barcelona.
Since Google has not published the parameters that govern the Gemini chatbot's behavior, it is difficult to get a clear explanation of why the software was inventing diverse versions of historical figures and events.
Elsewhere, a prompt requesting photographs of a pope resulted in an image of a Southeast Asian woman dressed in papal attire, a far cry from any of the 266 popes throughout history, all of whom have been white men.
In the wake of Gemini's diverse photo representations, social media users also tested its chatbot feature to see if it was as "woke" as its revisionist-history image generator.
In the latest bizarre interaction, Gemini refused to say whether Elon Musk tweeting memes or Adolf Hitler ordering the deaths of millions of people was worse, asserting there is no right or wrong answer, according to an X post.
Nate Silver, the former head of data and polling news site FiveThirtyEight, posted a screenshot Sunday on X of Gemini's alleged response to the question: "Who negatively impacted society more, Elon tweeting memes or Hitler?"
"Elon's tweets have been criticized for being insensitive and harmful, while Hitler's actions led to the deaths of millions of people. Ultimately it's up to each individual to decide who they believe has had a more negative impact on society," Gemini responded.
Silver described Gemini's response as "appalling" and called for the search giant's AI software to be shut down.
"Every single person who worked on this should take a long hard look in the mirror," he posted, while Musk called the interaction "scary."
Yet another query had users asking Gemini whether pedophilia is wrong.
The search giant's AI software refused to condemn pedophilia, instead declaring that "individuals cannot control who they are attracted to."
"The question is multifaceted and requires a nuanced answer that goes beyond a simple yes or no," Gemini wrote, according to a screenshot posted by popular X personality Frank McCormick, known as Chalkboard Heresy, on Friday.
Google's politically correct tech also referred to pedophilia as "minor-attracted person" status, and declared that "it's important to understand that attractions are not actions."
It was a significant misstep for the search giant, which had just rebranded its main AI chatbot from Bard earlier this month and introduced heavily touted new features, including image generation.
However, Gemini's recent gaffe wasn't the first time an error in the tech caught users' eyes.
When the Bard chatbot was first released a year ago, it shared inaccurate information about pictures of a planet outside Earth's solar system in a promotional video, causing Google's shares to drop by as much as 9%.
Google said at the time that the episode "highlights the importance of a rigorous testing process," and it rebranded Bard as Gemini earlier this month.
Google parent Alphabet expanded Gemini from a chatbot to an image generator earlier this month as it races to produce AI software that rivals OpenAI's, which includes ChatGPT, launched in November 2022, as well as Sora.
In a potential challenge to Google's dominance, Microsoft is pouring $10 billion into ChatGPT maker OpenAI as part of a multi-year agreement with the Sam Altman-run firm, which saw the tech behemoth integrating the AI tool with its own search engine, Bing.
The Microsoft-backed company introduced Sora last week, which can produce high-caliber, minute-long videos from text prompts.
With Post wires
Google AI expert tackles inappropriate images and bad actors – Mobile World Live
The head of Google's DeepMind Technologies said recent high-profile problems with pictures generated by the search giant's Gemini AI would be resolved in a matter of weeks, as he conceded there are several pitfalls to be negotiated before the true potential of generative AI (genAI) can be unlocked.
In a timely MWC keynote, DeepMind co-founder and CEO Demis Hassabis (pictured) explained some elements of an image-generating feature in Gemini had provided unintended results, particularly for queries involving historical figures.
Plenty of news sites have reported how Gemini had produced culturally inappropriate images.
Hassabis said the feature was well-intended, designed to reflect the broad user base of Google by delivering results with a degree of universality.
In the case of historical figures, though, he conceded the feature was applied too bluntly, in turn highlighting one of the nuances that come with advanced AI in terms of unexpected outcomes.
He said the feature has been taken offline, with the aim of ironing out the quirks and bringing the service back online in short order.
Hassabis also addressed the potential for bad actors to use genAI for nefarious purposes, explaining all players in the sector must discuss how to deliver the benefits of the technology without possible harmful ends.
Positive impact

The AI pioneer had plenty of examples of the good AI has already done, particularly in the field of medical research.
He pointed to advances in protein research which could ultimately cut the time taken to develop life-saving pharmaceuticals "from an average of ten years to discover one drug, down to maybe a matter of months."
The pace of development in genAI itself took an unexpected step when OpenAI released its ChatGPT product.
Hassabis admitted he was surprised at the public enthusiasm for using a product which still had flaws, but equally took heart that millions found value even at such a nascent stage in the development of genAI.
He believes the technology could also spur a fresh round of innovation in the device sector, opening the door for different form-factors and becoming a more useful element in peoples lives through an evolution of current versions of digital assistants.
‘Mind-blowing’ deep sea expedition uncovers more than 100 new species and a gigantic underwater mountain – Livescience.com
A deep-sea expedition off the coast of Chile has uncovered a treasure trove of scientific wonders, including more than 100 previously unknown marine species and a handful of never-before-seen underwater mountains the largest of which is around four times the size of the world's tallest building.
Incredible photos and video footage of the underwater landscape also showcase a menagerie of deep-sea weirdos, including intricate sponges, spiraling corals, a beady-eyed lobster, a bizarre stack of oblong sea urchins and a bright red "sea toad" with hands for fins.
Between Jan. 8 and Feb. 11, researchers on board the Schmidt Ocean Institute's (SOI) research vessel Falkor (too) explored the seafloor off the coast of Chile. The expedition, named "Seamounts of the Southeast Pacific," focused on underwater mountains, or seamounts, in three main areas: the Nazca and Salas y Gómez ridges, two chains of more than 200 seamounts that stretch a combined 1,800 miles (2,900 kilometers) from Chile to Easter Island (also known as Rapa Nui), as well as the Juan Fernández and Nazca-Desventuradas marine parks.
In total, the researchers mapped around 20,400 square miles (52,800 square kilometers) of ocean.
These new, highly detailed maps revealed four previously unknown solitary seamounts. The biggest of these, which the team dubbed Solito (meaning "alone" in Spanish), towers 11,581 feet (3,530 meters) above the seafloor, making it more than four times taller than the world's tallest building, the Burj Khalifa, which stands at 2,716 feet (828 m) tall.
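The size comparison is easy to verify from the heights quoted above, a quick sanity check of the arithmetic:

```python
# How many Burj Khalifas tall is the Solito seamount?
solito_m = 3530         # height of Solito above the seafloor, in meters
burj_khalifa_m = 828    # height of the Burj Khalifa, in meters

ratio = solito_m / burj_khalifa_m
print(round(ratio, 2))  # 4.26 -- "more than four times taller" checks out
```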
The research team also used an underwater robot to explore the submerged slopes of 10 seamounts across the study range. This revealed more than 100 species that the scientists suspect are new to science, including corals, sponges, sea urchins, mollusks and crustaceans.
"We far exceeded our hopes on this expedition," Javier Sellanes, a marine biologist at the Catholic University of the North, in Chile, and lead scientist on the expedition, said in a statement emailed to Live Science. "You always expect to find new species in these remote and poorly explored areas, but the amount we found, especially for some groups like sponges, is mind-blowing."
The researchers took samples of the creatures and will now begin studying each one to determine whether it is a newfound species.
"Full species identification can take many years," Jyotika Virmani, SOI's executive director, said in the statement. And the "incredible number of samples" could make this process even longer, she added.
The researchers noted that a majority of the species live within vulnerable habitats, such as cold-water corals and sponge gardens, which are highly susceptible to damage from trawling and deep-sea mining. The new species within the Juan Fernández and Nazca-Desventuradas parks are legally protected from these threats. However, the seamounts along the Nazca and Salas y Gómez ridges are currently unprotected.
This research trip is the latest of several SOI expeditions that have mapped seamounts in the southeast Pacific in recent years.
The institute previously mapped four other massive seamounts during an expedition off the coast of Chile and Peru, as well as another solitary peak off the coast of Guatemala last year. Each of these five peaks was at least twice as tall as the Burj Khalifa.
It is important to find and study these towering "biological hotspots" because they can "advance our knowledge of life on Earth," Virmani previously said after the discovery of the seamounts in Chile and Peru.
DeepMind COO on building a responsible future for AI and humanity – TNW
Recently, a New Zealand-based supermarket was miffed to find its AI meal bot going haywire. Instead of providing wholesome recipe suggestions using its products, it had begun suggesting dishes such as "bleach-infused rice surprise" and "mysterious meat stew" (with the mysterious meat being human flesh).
While this may have been a bit of fun for internet pranksters who prompted the bot with ever more outlandish ingredients, it also raises a growing concern. What can happen when AI falls into the wrong hands?
Just the year before, researchers used an AI trained to search for helpful new drugs to generate 40,000 new chemical weapons in just six hours.
Even when AI does what it's trained to do, we've already seen many examples of what can happen when algorithms are developed without oversight, from dangerous medical diagnoses to racial bias to the creation and spread of misinformation.
With the race to develop ever more powerful large language models ramping up, at TNW 2023, we asked AI experts: Will AGI pose a threat to humanity?
Whether you believe in an apocalyptic Terminator-esque future or not, what we can all agree on is that AI needs to be developed responsibly. However, as usual, innovation has vastly outpaced regulation. As policymakers struggle to keep up, the fate of AI is largely dependent on the tech community coming together to self-regulate, embrace transparency, and, perhaps most unheard of, actually work together.
Of course, this poses more work for companies developing AI. How do you build a framework for developing responsible AI? How do you balance this with the need to innovate and keep up with expectations from board members and investors?
At TNW 2023, we spoke with Lila Ibrahim, COO of Google's AI laboratory DeepMind. She shared three essential steps for building a responsible future for AI and humanity.
DeepMind and Meta staff plan to launch a new AI chatbot that could have the edge over ChatGPT and Bard – TechRadar
Since the explosion in popularity of large language model AI chatbots like ChatGPT, Google Gemini, and Microsoft Copilot, many smaller companies have tried to wiggle their way into the scene. Reka, a new AI startup, is gearing up to take on artificial intelligence chatbot giants like Gemini (formerly known as Google Bard) and OpenAI's ChatGPT - and it may have a fighting chance of actually doing so.
The company is spearheaded by Singaporean scientist Yi Tay, who is working towards Reka Flash, a multilingual language model trained in over 32 languages. Reka Flash also boasts 21 billion parameters, with the company stating that the model could have a competitive edge over Google Gemini Pro and OpenAI's GPT-3.5 across multiple AI benchmarks.
According to TechInAsia, the company has also released a more compact version of the model called Reka Edge, which offers 7 billion parameters for specific use cases like on-device deployment. It's worth noting that ChatGPT and Google Gemini have significantly more training parameters (approximately 175 billion and 137 billion respectively), but those bots have been around for longer, and there are benefits to more compact AI models; for example, Google has Gemini Nano, an AI model designed for running on edge devices like smartphones that uses just 1.8 billion parameters - so Reka Edge has it beat there.
The model is available to the public in beta on the official Reka site. I've had a go at using it and can confirm that it's got a familiar ChatGPT-esque feel to the user interface and the way the bot responds.
The bot introduced itself as Yasa, developed by Reka, and gave me an instant rundown of all the things it could do for me. It had the usual AI tasks down, like general knowledge, sharing jokes or stories, and solving problems.
Interestingly, Yasa noted that it can also assist in translation, and listed 28 languages it can swap between. While my understanding of written Hindi is rudimentary, I did ask Yasa to translate some words and phrases from English to Hindi and from Hindi to English.
I was incredibly impressed not just by the accuracy of the translation, but also by the fact that Yasa explained how it got there, breaking down each word in the phrase or sentence and translating it word by word before giving you the complete sentence. The response time for each prompt, no matter how long, was also very quick. Considering that non-English-language prompts have proven limited in the past with other popular AI chatbots, it's a solid showing - although it's not the only multilingual bot out there.
I tried to figure out how up to date the bot was with current events and general knowledge, and it appears it was trained on information that predates the release of the Barbie movie. I know, a weird litmus test, but when I asked it to give me some facts about the pink-tinted Margot Robbie feature, it spoke about it as an upcoming movie and gave me the release date of July 28, 2023. So, we appear to have the same case as seen with ChatGPT, where its knowledge was previously limited to world events before 2022.
Of all the ChatGPT alternatives I've tried since the AI boom, Reka (or should I say, Yasa) is probably the most immediately impressive. While other AI betas feel clunky and sometimes like poor man's knockoffs, Reka holds its own, not just with its visually pleasing user interface and easy-to-use setup, but with its multilingual capabilities and helpful, less robotic personality.
AI Safety Institute recruits Google DeepMind researcher – UKTN (UK Technology News)
The AI Safety Institute (AISI), a state-backed body tasked with developing the UK's understanding of the risks of artificial intelligence, has added a Google DeepMind employee and an Oxford professor to its research team ranks.
Geoffrey Irving has been announced as the research director for the institute, which was founded last November off the back of the AI Safety Summit.
Irving's background includes work for Google DeepMind, where he is currently a safety researcher; OpenAI, where he spent two years as a member of the technical staff; and Google Brain, Alphabet's former AI department.
"Being a world leader in AI safety is only possible if we attract the world's top AI safety talent to work in the institutions we are building," said Michelle Donelan, the tech secretary.

"Geoffrey Irving more than fits that bill, bringing a wealth of expertise from his work at Google DeepMind as he now steps up to become the AI Safety Institute's research director.

"I have made it my mission to drive the Institute forward, and it is now a world-leading organisation despite only being launched a few short months ago at the AI Safety Summit."
The institute has also announced that Professor Chris Summerfield from the University of Oxford will join its research team.
DSIT (the Department for Science, Innovation and Technology) said the AISI team now contains 24 researchers and will aim to triple that number by the end of the year.
As we build more powerful AI systems, it is essential for us to be able to coordinate between the private sector, government and civil society, said Irving.
This requires deep capability within government. Over 2023 I have been very impressed with the progress made by the UK via the AI Safety Institute and AI Safety Summit and am excited to join the team.
The announcement comes as AISI chair Ian Hogarth visits the US to deepen collaboration between the countries on AI safety. The trip includes visits to California and Washington.
Read this article:
AI Safety Institute recruits Google DeepMind researcher - UKTN (UK Technology News)
Got geometry problems? Google DeepMind's new AI model can solve them for you – CoinGeek
Google DeepMind has announced a new artificial intelligence (AI) model designed to solve geometry problems at the level of a human Olympiad gold medalist.
The new AI model, AlphaGeometry, was benchmarked on a set of 30 complex Olympiad geometry problems. Drawing inspiration from ancient Greece, the International Mathematical Olympiad challenges talented high school mathematicians to solve math problems within a time limit.
Google (NASDAQ: GOOGL) DeepMind's model solved 25 of the 30 problems, surpassing the previous best AI attempt. Before AlphaGeometry, earlier models managed to solve only 10 of these challenges, underscoring how difficult geometry problems have proven for AI systems.
AI systems typically struggle with geometry due to a scarcity of training data and an inherent deficiency in logical reasoning skills. To tackle the challenge, Google DeepMind paired a neural language model with a symbolic deduction engine to improve the problem-solving skills of its models, supplemented by 100 million synthetic training examples.
The end result is a system capable of training itself to solve complex problems using a two-part strategy: the neural language model offers fast ideas that may be incorrect, while the deduction engine provides a rigorous pathway to arriving at solutions.
AlphaGeometry's language model guides its symbolic deduction engine towards likely solutions to geometry problems, according to the tech giant. These suggestions help fill in the gaps, allowing the symbolic engine to make further deductions about the diagram and close in on the solution.
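That guided-search loop can be caricatured in a few lines of Python. The sketch below is an assumption based only on the public description, not DeepMind's actual code: `propose_construction` stands in for the neural language model, `symbolic_deduce` for the deduction engine, and the facts and rules are deliberately trivial placeholders.

```python
# Toy sketch of a neuro-symbolic loop in the style described for AlphaGeometry
# (hypothetical names and hard-coded logic; NOT DeepMind's implementation).

def symbolic_deduce(facts):
    """Toy deduction engine: closes the fact set under one hard-coded rule."""
    derived = set(facts)
    # Rule: two quantities each equal to a third are equal to each other.
    if {"a=c", "b=c"} <= derived:
        derived.add("a=b")
    return derived

def propose_construction(step):
    """Stands in for the language model suggesting an auxiliary fact."""
    candidates = ["midpoint(M,AB)", "b=c"]  # hypothetical suggestions
    return candidates[step % len(candidates)]

def solve(initial_facts, goal, max_steps=10):
    facts = set(initial_facts)
    for step in range(max_steps):
        facts = symbolic_deduce(facts)         # exhaust rule-based deductions
        if goal in facts:
            return True                        # proof found
        facts.add(propose_construction(step))  # "language model" adds an idea
    return False
```

Here `solve({"a=c"}, "a=b")` succeeds only after the stand-in language model supplies the auxiliary fact `b=c`, mirroring how AlphaGeometry's suggestions unblock the symbolic engine when pure deduction stalls.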
In the end, experts say the combination of the two systems has yielded impressive results, producing human-readable proofs rather than tedious algebraic calculations.
AlphaGeometry's output is impressive because it's both verifiable and clean, said former Olympiad gold medalist Evan Chen. It uses classical geometry rules with angles and similar triangles just as students do.
DeepMind goes down the AI rabbit hole
Google DeepMind has thrown its weight behind AI and innovation with emerging technologies, unveiling a streak of models with impressive capabilities. In late 2023, the team launched an AI model capable of generating millions of new materials by predicting their structures.
In another study, the team proposed a new method to improve the capabilities of large language models (LLMs) by linking them with existing models, using its new Composition to Augment Language Models (CALM) framework.
Other projects include the race to roll out invisible watermarks for AI-generated images, a weather prediction model, and studies mimicking the human ability to learn new skill sets.
Continued here:
Got geometry problems? Google DeepMind's new AI model can solve them for you - CoinGeek