In 2010, Demis Hassabis co-founded what would become one of the most influential AI labs in the world: DeepMind, named after the term deep learning. The company, which Google acquired in 2014, had grand designs for building artificial general intelligence, or AGI.
How is that endeavor going?
"It's looking like it's going to be a more gradual process rather than a step function," he said during a keynote fireside chat at Mobile World Congress 2024 in Barcelona, Spain. Today's AI systems are becoming incrementally more powerful as the compute, techniques and data they use are scaled up.
It is possible that significant advances will come in the next few years from new innovations that improve AI's ability to plan, remember and use tools, capabilities current-generation AI systems lack. In the meantime, AI is already proving useful in many other endeavors.
The CEO defines AGI as a system that can perform almost any cognitive task that humans can. He said a human reference point is needed because the human brain is "the only proof we have, maybe in the universe, that general intelligence is possible."
But how will we know AGI when we see it? It is a question hotly debated in the field of AI. For Hassabis, it may either be obvious when it appears or require extensive testing to determine.
"One way is to actually test the systems on thousands and thousands of tasks that humans do and see if it passes a certain threshold on all of those tasks," he said. "And the more tasks you put into that test set, the more sure you can be you have the general space covered."
From left: Wired's Steven Levy and Google DeepMind CEO Demis Hassabis.
Amid its quest to develop AGI, it was another AI system that helped cement DeepMind as a key player in the AI space: AlphaFold.
The system predicts protein structures, and in 2022 it was used to map nearly all of the 200 million known proteins.
Commenting on the project at MWC, Hassabis used AlphaFold as an example of a non-general AI system that could be used to further human knowledge.
He said mapping every known protein would have taken a billion years of work by a doctorate-level researcher, something his team accomplished in just one year.
More than a million researchers have used the model, according to the Google DeepMind CEO, but he wants it to go further and power drug discovery.
And that is a goal parent company Alphabet has in mind: it formed Isomorphic Labs in 2021 to reimagine drug discovery with AI systems like AlphaFold 2.
Isomorphic penned deals with pharma giants Novartis and Eli Lilly in January to use AI to design new drugs. According to Hassabis, drugs designed by AI will hit clinics in the next couple of years.
"It's really having a material impact now on drug discovery, and I hope that drug discovery will shrink from 10 years to discover one drug down to maybe a matter of months to discover drugs to cure these terrible diseases," he said.
Hassabis noted that most of the major AI innovations of the past decade came from Google Research, Google Brain and DeepMind. OpenAI "actually took these ideas and techniques and applied Silicon Valley growth mentality, hacker mentality to it, and scaled it to sort of maximum speed," he said.
In other words, OpenAI's unusual path to success with its models came not from inventing a new innovation but from scaling existing ones.
"I don't think anyone predicted it, maybe even including them, that these new capabilities would just emerge just through scale, not for inventing some new innovation, but actually just sort of scaling," Hassabis said.
"And it's quite unusual in the history of most scientific technology fields where you get step-changing capability by doing the same thing, just bigger; that doesn't happen very often. Usually, you just get incremental capabilities, and normally you have to have some new insight or some new flash of inspiration, or some new breakthrough in order to get a step change. And that wasn't the case here."
"The other surprising thing was that with ChatGPT, the general public seems to be ready to use these systems even though they clearly have flaws: hallucinations, they're not factual," Hassabis said.
Google's thinking was that these systems needed to be 100 times more accurate before releasing them, "but OpenAI just released it and it turns out millions of people found value out of that," he added. "It didn't have to be 100% accurate for there to be some valuable use cases there, so I think that was surprising for the whole industry."
Hassabis said they also thought these systems would have narrower use cases for scientists and other specific professions. "But actually, the general public was willing to use slightly messier systems and find value and use cases for them. So that then precipitated a change in (Google's) outlook."
This led to Google's merging of Google Brain, a team within Google Research, with DeepMind in April 2023. The goal was "to combine all of our compute together and engineering talent together to build the biggest possible things we can," he said. "Gemini, our most advanced, most capable AI model, is one of the fruits of that combination."
What does Hassabis believe the future of AI will look like? He said last May that DeepMind's dream of AGI may be coming in a few years, but for now, his team is exploring new areas to apply AI.
One of those areas is materials science: using AI to help discover new types of materials.
"I dream of one day discovering a room-temperature superconductor; it may exist in chemical space, but we just haven't found it as human chemists and material scientists," he said.
Google DeepMind is also looking at applying AI to weather prediction and climate change, as well as mathematics.
He also said that the next generation of smart assistants will be useful in people's daily lives rather than "sort of gimmicky" as they were in the previous generation.
Users are already seeing smarter, more adaptable phones sporting Google's Gemini features and a new capability to search simply by circling an image.
"But in five or more years, is the phone even really going to be the perfect form factor?" he asked. "Maybe we need glasses or some other things so that the AI system can actually see a bit of the context that you're in to be even more helpful in your daily life."