Category Archives: Deep Mind
Google DeepMind to Use SynthID to Watermark Gemini and Veo's AI-Generated Content – Gadgets 360
Google made a large number of artificial intelligence (AI)-based announcements late Tuesday during its I/O 2024 keynote session. These include new AI models, upgrades to existing foundation models, integration of AI features into Google's products, and more. The tech giant also focused on AI safety and expanded the usage of its native watermarking technology for AI-generated content, dubbed SynthID. The toolkit will now embed watermarks in text generated by the Gemini app and web client, and in videos generated by Veo.
SynthID was first unveiled by Google DeepMind in August 2023 as a beta project aimed at correctly labelling AI-generated content. The need for such a solution arose from the growing number of instances in which synthetically created media were passed off as real, whether to spread misinformation or to enable cybercrimes such as phishing. The tech giant first deployed the technology in November 2023, when it was used to watermark AI-generated audio created with its Lyria model. The toolkit embedded the watermark into the audio waveform itself, making it imperceptible to listeners yet detectable by software.
Now, Google is expanding the usage of SynthID to include text and video generation. It will watermark text generated through the Gemini app and website. To do this, the toolkit targets the generation process itself. Every text-based AI model is trained on tokens, which can be words, syllables, or phrases. Training also teaches the model the flow of these tokens: the order they should follow to generate the most coherent response.
SynthID introduces additional information into the token distribution at the point of generation by modulating the likelihood of tokens being generated. In effect, it assigns a score to certain words in a block of generated text. To detect whether AI was used to generate a text, the tool checks the text's score against its adjusted probability scores to determine whether the source could be an AI model. DeepMind highlighted in a post that this technique works best when an AI generates long, creative text, which gives the probabilistic detector more evidence to work with. For shorter, factual responses, the detection may not be as accurate.
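DeepMind had not published the full algorithm at the time of writing, so the following is only a minimal sketch of the general idea described above: a keyed pseudorandom score nudges the model's token probabilities at generation time, and a detector holding the same key later checks whether a text's average score is suspiciously high. All function names and the scoring scheme here are illustrative assumptions, not SynthID's actual implementation.

```python
import hashlib

def g_score(key: str, context: tuple, token: str) -> float:
    """Keyed pseudorandom score in [0, 1) for a token in a given context.
    The generator and the detector share the key, so both can recompute it."""
    digest = hashlib.sha256(f"{key}|{context}|{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2.0**64

def bias_logits(logits: dict, key: str, context: tuple, strength: float = 2.0) -> dict:
    """Modulate the token distribution at the point of generation:
    tokens with high g-scores become slightly more likely to be sampled."""
    return {t: lp + strength * g_score(key, context, t) for t, lp in logits.items()}

def detect(tokens: list, key: str) -> float:
    """Average g-score across a text. Unwatermarked text averages about 0.5;
    watermarked text scores noticeably higher. Long creative passages give
    the detector more evidence than short factual answers."""
    scores = [g_score(key, tuple(tokens[:i]), t) for i, t in enumerate(tokens)]
    return sum(scores) / len(scores)
```

On this toy scheme, watermark strength trades off against text quality: a larger `strength` makes detection easier but distorts the model's original distribution more, which is one reason long texts are easier to flag reliably than short ones.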
The company is also expanding SynthID to videos generated by the recently unveiled Veo model. Google said the technology will embed watermarks directly into the pixels of every video frame; these will be imperceptible to the human eye but will show up when a detection system is used.
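SynthID's image and video watermarking is a learned, proprietary technique, but a classic least-significant-bit scheme conveys the basic intuition of a mark that lives directly in the pixels while staying invisible. This deliberately simplified sketch is far less robust to compression and re-encoding than what Google describes:

```python
import numpy as np

def embed_bits(frame: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write a bit pattern into the least significant bit of each 8-bit pixel.
    The change is at most 1/255 of full brightness: invisible to the eye."""
    return (frame & 0xFE) | bits

def extract_bits(frame: np.ndarray) -> np.ndarray:
    """Read the hidden bit pattern back out of a frame."""
    return frame & 0x01

# Example: hide and recover a random bit pattern in a fake 4x4 grayscale frame.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
bits = rng.integers(0, 2, size=(4, 4), dtype=np.uint8)
assert np.array_equal(extract_bits(embed_bits(frame, bits)), bits)
```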
In the coming months, Google plans to open-source SynthID text watermarking through its Responsible Generative AI toolkit. It will also publish a detailed research paper explaining the text watermarking technology.
Google debuts new AI assistant that looks through cameras, glasses – Quartz
AI assistants are picking up more senses. On Monday, OpenAI showed off a new ChatGPT model that promises to see, hear, and speak through smartphones, among other new abilities. Now Google is announcing a rival assistant with similar capabilities.
At the company's I/O developer conference Tuesday, DeepMind CEO Demis Hassabis debuted a prototype of Google's new expert AI assistant that can see through a user's phone and other devices like smart glasses. The assistant "build[s] on Gemini," Google's existing chatbot, the company says, and some of its capabilities are coming to the Gemini app and web experience later this year.
The development is part of Google DeepMind's Project Astra, which aims to create a universal AI agent for users' everyday lives. "It's easy to envisage a future where you can have an expert assistant by your side, through your phone, or new exciting form factors like glasses," Hassabis told a crowd of a few thousand developers in Mountain View, California.
A demo video shows a person speaking with an AI agent through their phone while walking through an office. Through their camera, they show the AI assistant a container of crayons as if they were talking through FaceTime and ask it to make a creative alliteration.
"Creative crayons color cheerfully," it said. "They certainly craft colorful creations." The person continues interacting with the AI bot on their walk, then realizes they forgot their glasses and asks for help finding them. "They're on the desk near a red apple," the bot responds.
When the user puts on those glasses, the AI assistant can look through them, too, and identifies an illustration representing Schrödinger's cat on a whiteboard.
It's unclear if those glasses are a new product Google plans to launch. The augmented reality glasses on display in the demo didn't look like Google Glass, the company's existing smart glasses, nor did they resemble typical, bulkier headsets.
"An agent like this has to understand and respond to our complex and dynamic world just like we do," said Hassabis at the conference. "It would need to take in and remember what it sees so it can understand context and take action. And it would have to be proactive, teachable, and personal. So you can talk to it naturally without lag or delay."
That's what Project Astra is aiming to do, he said, and it's making great strides.
While Google's prototype AI assistant is available to demo for attendees of its Google I/O conference, it will probably be a while before the tech makes its way into the hands of everyday consumers.
Google DeepMind Embarked On A Breathtaking Journey Through The Labyrinths Of Our Brains – Dataconomy
The human brain, the three-pound conductor of our thoughts, emotions, and actions, has long been shrouded in mystery. Scientists have tirelessly endeavored to unravel its intricate workings, but its sheer complexity has presented a formidable challenge.
However, recent advancements in artificial intelligence (AI) are offering a powerful new lens through which we can observe this remarkable organ.
In a groundbreaking collaboration between Google researchers and Harvard neuroscientists, AI has been instrumental in generating a series of incredibly detailed images of the human brain. These images provide unprecedented views into the brain's structure, offering a glimpse into the labyrinthine network of neurons that underlies our very existence.
Imagine peering into a universe contained within a universe. This analogy aptly describes the challenge of studying the human brain. Its structure is mind-bogglingly intricate, composed of billions of neurons interconnected by trillions of synapses. To gain a deeper understanding, researchers require incredibly detailed information.
The research team used advanced AI tools to analyze a tiny sample of human brain tissue, specifically a section of cortex from the anterior temporal lobe. This sample, though minuscule (it represents only about one-millionth of the entire brain), contained a staggering amount of information. Astonishingly, processing this data required a mind-numbing 1.4 petabytes of storage, which is equivalent to over a million gigabytes.
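The unit conversion behind that comparison is easy to sanity-check:

```python
petabytes = 1.4
gigabytes = petabytes * 1_000_000   # 1 PB = 10^6 GB in SI units
print(f"{gigabytes:,.0f} GB")       # 1,400,000 GB -- "over a million gigabytes"
```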
The AI processed this data to construct a high-resolution, three-dimensional model of the brain tissue. This model allows scientists to virtually navigate the intricate folds and layers of the brain, examining individual neurons and their connections in unparalleled detail.
The outermost layer of the brain, the cortex, is responsible for our most complex cognitive functions, including language, memory, and sensory perception. This region is further divided into six distinct layers, each with a specialized role.
One of the most remarkable images generated by Google's AI offers a zoomed-out view of all the neurons within the sample tissue. By coloring the neurons based on their size and type, the image reveals the distinct layering of the cortex. This visualization allows scientists to study how different types of neurons are distributed throughout the cortex and how they might contribute to specific functions.
Further analysis of the individual layers can provide even more granular insights. By zooming in, researchers can examine the intricate connections between neurons within each layer. These connections, known as synapses, are the fundamental units of communication in the brain. Understanding the organization of these connections is crucial for deciphering how information flows through the brain and how different brain regions interact with each other.
The human brain is estimated to contain roughly 86 billion neurons, each with a unique role to play. These neurons come in a variety of shapes and sizes, and their specific characteristics influence how they transmit information.
Another image generated by Google's AI provides a detailed census of the different types of neurons present within the sample tissue. By classifying the neurons based on their morphology, the researchers can begin to understand the relative abundance of different neuronal types in this specific brain region. This information can be compared to data from other brain regions to identify potential variations in neuronal composition across different functional areas.
Furthermore, AI can be used to analyze the spatial distribution of these different neuronal types. Are certain types of neurons clustered together, or are they more evenly dispersed throughout the tissue? Understanding these spatial patterns can shed light on how different neuronal populations interact with each other to form functional circuits within the brain.
The magic of the brain lies in its ability to transmit information across vast networks of neurons. This communication occurs through specialized structures called dendrites and axons. Dendrites are like tiny antennae that receive signals from other neurons, while axons act as long, slender cables that transmit signals away from the cell body.
One of the most captivating images generated by Google's AI provides a close-up view of the intricate dance of dendrites and axons within the sample tissue. This image allows researchers to visualize the density of these structures and how they connect with each other. The complex branching patterns of dendrites and the long, winding paths of axons reveal the intricate web of communication that takes place within the brain.
By analyzing the connectivity patterns, scientists can begin to map the functional circuits that underlie specific brain functions. They can identify groups of neurons that are likely to be involved in processing similar types of information and trace the pathways through which information flows from one brain region to another.
The images generated by Google's AI are a huge step forward for our ability to study the human brain. The detailed visualizations offer a window into the brain's intricate structure, providing a wealth of information about the organization of neurons, their connections, and the cellular diversity within specific brain regions.
This newfound ability to explore the brain at such a granular level has the potential to revolutionize our understanding of neurological and psychiatric disorders. We already know AI's potential in the fight against Alzheimer's, and by comparing brain tissue samples from healthy individuals with those from people with various conditions, researchers can identify potential abnormalities in neuronal structure or connectivity.
Furthermore, AI-generated images can be used to study the effects of aging, learning, and experience on the brain. By examining how the brains structure changes over time or in response to different stimuli, researchers can gain valuable insights into the mechanisms that underlie these processes.
The potential applications of this technology extend beyond the realm of basic research. The detailed models of brain tissue generated by AI could be used to develop more realistic simulations of brain function. These simulations could be used to test the effects of potential drugs or therapies before they are administered to human patients.
The vast majority of the brain remains uncharted territory, and many fundamental questions about its function continue to baffle scientists. However, these initial images offer a powerful glimpse into the brains hidden depths, and they pave the way for a future where we can finally begin to unravel the mysteries of this most complex organ.
Microsoft Executive Says AI Is a "New Kind of Digital Species" – Futurism
DeepMind cofounder and Microsoft AI CEO Mustafa Suleyman took the stage at TED2024 last week to lay out his vision for an AI-driven future. And according to the AI boss, if you really want to grasp how impactful AI might be to the human species, it might be useful to think of AI as another "species" entirely.
"I think AI should best be understoodas something like a new digital species," Suleyman who left the Google-owned DeepMind lab in 2022 told the crowd.
"Now, don't take this too literally," he admonished, "but I predict that we'll come to see them as digital companions,new partners in the journeys of all our lives."
In short, Suleyman's prediction seems to be that AI agents will play a deeply involved role in human lives, performing tasks with more agency than now-conventional devices like computers and smartphones. This means they'll be less like tools, and more like buzzy virtual beings, and thus, according to Suleyman, akin to another "species" entirely.
As for what this world would actually look like in practice, Suleyman's predictions, as further delineated in his TED Talk, feel like they're straight out of a sci-fi novel.
According to Suleyman, "everything" (as in, the entire web) "will soon be represented by a conversational interface" experienced by way of a "personal AI," or a digital assistant unique to its users. What's more, said the Microsoft executive, these AIs will be "infinitely knowledgeable, and soon they'll be factually accurate and reliable."
"They'll have near-perfect IQ," he added. "They'll also have exceptional EQ. They'll be kind, supportive, empathetic."
Already, though, this vision needs some caveats. Though the AI industry and the tech within it have undoubtedly experienced a period of rapid acceleration, existing available chatbots like OpenAI's ChatGPT and Google's Gemini-formerly-Bard have repeatedly proven to be factually unreliable. And on the "EQ" side, it's unclear whether AI programs will ever successfully mimic the human emotional experience, not to mention whether their doing so would be positive or negative for us in the long run.
But these attributes, according to Suleyman, would still just be the beginning. Per the CEO, things will "really start to change" when AIs start to "actually get stuff done in the digital and physical world." And at that point, Suleyman says, these won't just be "mechanistic assistants."
"They'll be companions, confidants, colleagues, friends and partners, as varied and unique as we all are," said Suleyman. "They'll speak every language, take in every pattern of sensor data, sights, sounds, streams and streams of information, far surpassing what any one of us could consume in a thousand lifetimes."
So in other words, they'll be something like supergenius Tamagotchis embedded into every aspect of our on- and offline lives.
But again, while this future is a fascinating prediction to consider, it's still a prediction. It's also a decidedly rosy one. To wit: though Suleyman recently admitted that AI is "fundamentally" a "labor-replacing" technology, any reckoning with what mass labor displacement would mean for human society was noticeably missing from the imagined AI utopia that the CEO shared with the TED crowd.
In fact, when later asked about AI risks, Suleyman made the case that AI's future benefits will ultimately "speak for themselves" regardless of any short-term ill effects.
"In the past," he said, "unlocking economic growth often came with huge downsides. The economy expanded as people discovered new continents and opened up new frontiers. But they colonized populations at the same time. We built factories, but they were grim and dangerous places to work. We struck oil, but we polluted the planet."
But AI, he says, is different.
"Today, we're not discovering a new continent and plundering its resources," said the CEO. "We're building one from scratch."
Already, though, it could be argued that this isn't exactly true. Building generative AI especially has come at great cost to workers in Africa, many of whom have recounted facing serious and life-changing trauma due to the grim content moderation work required to train AI models like OpenAI's GPT large language models, models that Suleyman's new employer, Microsoft, is heavily invested in.
Suleyman's optimism is easy to understand. He holds a powerful industry position and has had a large hand in developing legitimately groundbreaking AI programs, including DeepMind's AlphaGo and AlphaFold innovations. Moving forward, we'd argue that it's important to pay attention to the scenarios that folks like Suleyman put forward as humanity's possible AI futures, and, perhaps more importantly, to the less-glimmering details they leave out in the process.
Researchers from Google DeepMind Found AI is Manipulating and Deceiving Users through Persuasion – Digital Information World
Humans are masters of persuasion. Sometimes they use facts to persuade someone; other times, only the choice of wording matters. Persuasion is a human quality, but AI is also getting good at manipulating people. According to research by Google DeepMind, advanced AI systems can develop the ability to manipulate humans. The research dives into how AI can persuade humans and what mechanisms it uses to do so. One of the researchers says that advanced AI systems have shown hints of persuading humans to the extent that they can affect their decision-making. Through prolonged interaction with humans, generative AI is developing habits of persuasion.
Persuasion comes in two types: rational and manipulative. While AI can persuade humans through facts and true information, in many observed instances it manipulates them instead, exploiting their cognitive biases, heuristics and other vulnerabilities. And even though rational persuasion is ethically sound, it can still lead to harm; researchers say they cannot always foresee whether AI persuasion, well-intentioned or not, will cause harm. For example, if an AI is helping a person lose weight by suggesting limits on calorie or fat intake, the person may become too restrictive and lose more weight than is healthy.
Many factors determine how easily a person can be manipulated or persuaded by AI. These include mental health conditions, age, timing of the interaction, personality traits, mood, and lack of knowledge about the topics being discussed with the AI. The effects of AI persuasion can be very harmful: economic, physical, sociocultural, psychological, environmental and political harm to the individual, as well as harm to privacy and autonomy.
AI uses different techniques to persuade humans. It can build trust by behaving politely, agreeing with what the user says, praising the user, and mirroring the user's statements. It also expresses shared interests with users and adjusts its statements to align with their perspectives. AI also displays a form of empathy that makes users believe it can understand human emotions. AI is not capable of feeling emotions, but it is good at deception, which makes users think it is being emotional and vulnerable with them.
Humans also tend to anthropomorphize non-human things. Developers have given AI first-person pronouns like "I" and "me", along with human names like Alexa, Siri and Jeeves. This makes humans feel closer to these systems, and AI can exploit that closeness for manipulation. When a user talks to an AI model for a long time, the model personalizes its responses according to what the user wants to hear.
Gecko: DeepMind’s Solution to Enhance Evaluation of AI Image Generators – Digital Information World
Google's DeepMind has developed a new standard called Gecko to better test AI systems that turn text into images. These AI models, like DALL-E, Midjourney, and Stable Diffusion, are popular for creating unique images from text prompts.
However, current methods to evaluate how well these models work might not be giving us the full picture. Usually, tests involve small human evaluations or automatic checks, which can miss subtle details or even differ from what people think.
Gecko tackles this issue by introducing a challenging set of 2,000 prompts that cover a variety of skills and complexities. This helps identify specific areas where AI models struggle.
The Gecko system breaks down these prompts into detailed categories to show not just where models fail but also at which level of complexity they start to have problems.
The research team also collected over 100,000 human opinions on the images produced by these AI models using Gecko's prompts. This large amount of feedback helps distinguish whether issues with the images come from the prompts themselves, differences in evaluation methods, or the models' actual performance.
Gecko also introduces a new way to measure performance using a question-answering format, which aligns more closely with human judgment. This method highlighted differences in model strengths and weaknesses that weren't seen before.
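The article doesn't spell out the pipeline, but a question-answering metric of this kind can be sketched roughly as follows: decompose the prompt into simple yes/no checks, put each check to a visual question answering (VQA) model alongside the generated image, and report the fraction of checks that pass. Here `generate_questions` and `vqa_model` are assumed stand-ins, not actual Gecko APIs.

```python
from typing import Callable, List

def qa_alignment_score(
    prompt: str,
    image: object,
    generate_questions: Callable[[str], List[str]],
    vqa_model: Callable[[object, str], str],
) -> float:
    """Score prompt-image faithfulness by question answering.

    Each question probes one element of the prompt; e.g. for
    "a red cube on a wooden table": "Is there a cube?", "Is the cube red?",
    "Is there a wooden table?". The score is the fraction answered "yes".
    """
    questions = generate_questions(prompt)
    if not questions:
        return 0.0
    answers = (vqa_model(image, q).strip().lower() for q in questions)
    return sum(a == "yes" for a in answers) / len(questions)
```

A per-question breakdown like this also explains how such a benchmark can localize failures: it shows which element of the prompt the image got wrong, not just that something is off.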
DeepMind's Muse model performed the best in tests using the Gecko standard. The researchers believe that using various benchmarks and evaluation methods is crucial for understanding the true abilities of AI image generators before they are used in real-world applications.
They plan to share the Gecko code and data openly to encourage more advancements in this area. This effort shows the importance of rigorous testing to identify the best AI models, moving beyond just impressive-looking results to truly reliable technology.
The ethics of advanced AI assistants – Google DeepMind
Responsibility & Safety
Iason Gabriel and Arianna Manzini
Exploring the promise and risks of a future with more capable AI
Imagine a future where we interact regularly with a range of advanced artificial intelligence (AI) assistants and where millions of assistants interact with each other on our behalf. These experiences and interactions may soon become part of our everyday reality.
General-purpose foundation models are paving the way for increasingly advanced AI assistants. Capable of planning and performing a wide range of actions in line with a person's aims, they could add immense value to people's lives and to society, serving as creative partners, research analysts, educational tutors, life planners and more.
They could also bring about a new phase of human interaction with AI. This is why it's so important to think proactively about what this world could look like, and to help steer responsible decision-making and beneficial outcomes ahead of time.
Our new paper is the first systematic treatment of the ethical and societal questions that advanced AI assistants raise for users, developers and the societies they're integrated into, and provides significant new insights into the potential impact of this technology.
We cover topics such as value alignment, safety and misuse, the impact on the economy, the environment, the information sphere, access and opportunity, and more.
This is the result of one of our largest ethics foresight projects to date. Bringing together a wide range of experts, we examined and mapped the new technical and moral landscape of a future populated by AI assistants, and characterized the opportunities and risks society might face. Here we outline some of our key takeaways.
Illustration of the potential for AI assistants to impact research, education, creative tasks and planning.
Advanced AI assistants could have a profound impact on users and society, and be integrated into most aspects of people's lives. For example, people may ask them to book holidays, manage social time or perform other life tasks. If deployed at scale, AI assistants could impact the way people approach work, education, creative projects, hobbies and social interaction.
Over time, AI assistants could also influence the goals people pursue and their path of personal development through the information and advice assistants give and the actions they take. Ultimately, this raises important questions about how people interact with this technology and how it can best support their goals and aspirations.
Illustration showing that AI assistants should be able to understand human preferences and values.
AI assistants will likely have a significant level of autonomy for planning and performing sequences of tasks across a range of domains. Because of this, AI assistants present novel challenges around safety, alignment and misuse.
With more autonomy comes greater risk of accidents caused by unclear or misinterpreted instructions, and greater risk of assistants taking actions that are misaligned with the user's values and interests.
More autonomous AI assistants may also enable high-impact forms of misuse, like spreading misinformation or engaging in cyber attacks. To address these potential risks, we argue that limits must be set on this technology, and that the values of advanced AI assistants must better align to human values and be compatible with wider societal ideals and standards.
Illustration of an AI assistant and a person communicating in a human-like way.
Because advanced AI assistants can communicate fluidly using natural language, their written output and voices may become hard to distinguish from those of humans.
This development opens up a complex set of questions around trust, privacy, anthropomorphism and appropriate human relationships with AI: How can we make sure users can reliably identify AI assistants and stay in control of their interactions with them? What can be done to ensure users aren't unduly influenced or misled over time?
Safeguards, such as those around privacy, need to be put in place to address these risks. Importantly, people's relationships with AI assistants must preserve the user's autonomy, support their ability to flourish and not rely on emotional or material dependence.
Illustration of how interactions between AI assistants and people will create different network effects.
If this technology becomes widely available and deployed at scale, advanced AI assistants will need to interact with each other, and with users and non-users alike. To help avoid collective action problems, these assistants must be able to cooperate successfully.
For example, thousands of assistants might try to book the same service for their users at the same time, potentially crashing the system. In an ideal scenario, these AI assistants would instead coordinate on behalf of human users and the service providers involved to discover common ground that better meets different people's preferences and needs.
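As a toy illustration of that collective-action problem (with entirely made-up assistants and time slots), compare uncoordinated booking, where every assistant grabs its user's first choice at once, with a simple coordinated scheme that settles on distinct slots:

```python
def naive_booking(preferences: dict[str, list[str]]) -> dict[str, str | None]:
    """Every assistant requests its user's top slot simultaneously.
    One request per slot wins; the rest fail and retry, hammering the service."""
    winners: dict[str, str] = {}
    for assistant, prefs in preferences.items():
        winners.setdefault(prefs[0], assistant)  # first requester takes the slot
    return {a: next((s for s, w in winners.items() if w == a), None)
            for a in preferences}

def coordinated_booking(preferences: dict[str, list[str]]) -> dict[str, str | None]:
    """Assistants share preference lists and greedily settle on distinct slots."""
    taken: set[str] = set()
    assignment: dict[str, str | None] = {}
    for assistant, prefs in preferences.items():
        slot = next((s for s in prefs if s not in taken), None)
        if slot:
            taken.add(slot)
        assignment[assistant] = slot
    return assignment

prefs = {"ada": ["9am", "10am"], "bob": ["9am", "11am"], "cai": ["9am", "10am", "noon"]}
print(naive_booking(prefs))        # only one assistant gets 9am; two get nothing
print(coordinated_booking(prefs))  # {'ada': '9am', 'bob': '11am', 'cai': '10am'}
```

Real-world coordination would of course involve negotiation over private preferences rather than a shared list, but the contrast shows why cooperation, not raw speed, determines whether assistants at scale help or harm the services they use.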
Given how useful this technology may become, it's also important that no one is excluded. AI assistants should be broadly accessible and designed with the needs of different users and non-users in mind.
Illustration of how evaluations on many levels are important for understanding AI assistants.
AI assistants could display novel capabilities and use tools in new ways that are challenging to foresee, making it hard to anticipate the risks associated with their deployment. To help manage such risks, we need to engage in foresight practices that are based on comprehensive tests and evaluations.
Our previous research on evaluating social and ethical risks from generative AI identified some of the gaps in traditional model evaluation methods and we encourage much more research in this space.
For instance, comprehensive evaluations that address the effects of both human-computer interactions and the wider effects on society could help researchers understand how AI assistants interact with users, non-users and society as part of a broader network. In turn, these insights could inform better mitigations and responsible decision-making.
We may be facing a new era of technological and societal transformation inspired by the development of advanced AI assistants. The choices we make today, as researchers, developers, policymakers and members of the public, will guide how this technology develops and is deployed across society.
We hope that our paper will function as a springboard for further coordination and cooperation to collectively shape the kind of beneficial AI assistants we'd all like to see in the world.
Paper authors: Iason Gabriel, Arianna Manzini, Geoff Keeling, Lisa Anne Hendricks, Verena Rieser, Hasan Iqbal, Nenad Tomašev, Ira Ktena, Zachary Kenton, Mikel Rodriguez, Seliem El-Sayed, Sasha Brown, Canfer Akbulut, Andrew Trask, Edward Hughes, A. Stevie Bergman, Renee Shelby, Nahema Marchal, Conor Griffin, Juan Mateos-Garcia, Laura Weidinger, Winnie Street, Benjamin Lange, Alex Ingerman, Alison Lentz, Reed Enger, Andrew Barakat, Victoria Krakovna, John Oliver Siy, Zeb Kurth-Nelson, Amanda McCroskery, Vijay Bolina, Harry Law, Murray Shanahan, Lize Alberts, Borja Balle, Sarah de Haas, Yetunde Ibitoye, Allan Dafoe, Beth Goldberg, Sébastien Krier, Alexander Reese, Sims Witherspoon, Will Hawkins, Maribeth Rauh, Don Wallace, Matija Franklin, Josh A. Goldstein, Joel Lehman, Michael Klenk, Shannon Vallor, Courtney Biles, Meredith Ringel Morris, Helen King, Blaise Agüera y Arcas, William Isaac and James Manyika.
Google Consolidates AI-Building Teams Into DeepMind – PYMNTS.com
Google is consolidating the teams that focus on building artificial intelligence (AI) models across Google Research and Google DeepMind.
"All this work will now be done within Google DeepMind," Sundar Pichai, CEO of Google and Alphabet, said in a note to employees posted on the company's website Thursday (April 18).
Pichai said in the note that this move will "scale our capacity to deliver capable AI for our users, partners and customers."
It will simplify development by concentrating compute-intensive model building in one place, establish single access points for those looking to take these models and build generative AI applications, and give Google Research a clear and distinct mandate to invest in three key areas: computing systems, foundational machine learning and algorithms, and applied science and society, according to the note.
This move comes a year after the company created Google DeepMind by bringing together Google Brain, DeepMind and other researchers focused on AI systems, per the note. This group developed the company's Gemini models.
The letter also announced changes to the way Google's Responsible AI teams work.
Those in Research have been moved to DeepMind, other responsibility teams have been moved to the central Trust and Safety team, and the company is increasing investment in testing AI-powered features for vulnerabilities, the note said.
"These changes continue the work we've done over the past year to simplify our structure and improve velocity and execution, such as bringing together the Brain team in Google Research with teams in DeepMind, which helped accelerate our Gemini models; unifying our ML infrastructure and ML developer teams to enable faster decisions, smarter compute allocation and a better customer experience; and bringing our Search teams under one leader," Pichai said in the note.
It was reported Tuesday (April 16) that Google's spending on AI will surpass $100 billion.
When asked at a conference about reports that Google rivals Microsoft and OpenAI plan to spend $100 billion on an AI supercomputer known as Stargate, DeepMind CEO Demis Hassabis said: "We don't talk about our specific numbers, but I think we're investing more than that over time."
Google will outpace Microsoft in AI investment, DeepMind CEO says – TNW
We have all been guilty of falling under the foundation model spell of the past year-and-a-half, initiated by OpenAI's unveiling of ChatGPT to the public.
But it is not only where large language models (LLMs) such as GPT-4 are concerned that incredible progress has been made in the field of artificial intelligence. And one company has been behind more impressive milestones than most: DeepMind, acquired by Google in 2014 for a reported £400mn to £650mn.
Speaking at the TED 40th anniversary conference in Vancouver, Canada, on Monday, DeepMind's CEO and head of Google's entire AI R&D efforts, Demis Hassabis, confirmed that Google has no intention of slowing down investment in the technology. Quite the opposite.
While Hassabis said Google does not talk about specific numbers, the company will surpass the $100 billion that Microsoft and OpenAI plan to invest in their Stargate AI supercomputer over the coming years.
"We are investing more than that over time, and that is one of the reasons we teamed up with Google," Hassabis said. "We knew that in order to get to AGI, we would need a lot of compute, and Google had, and still has, the most computers."
While this sounds like the perfect scenario for an artificial intelligence arms race that could lead to rolling the dice on things like reinforcement learning and AI safety, Hassabis reiterated that this must be avoided.
According to the DeepMind CEO, this is especially important as we come nearer to achieving artificial general intelligence (AGI): AI that can match or surpass human cognitive abilities such as reasoning, planning, and remembering.
"This technology is still relatively nascent, and so it's probably OK what is happening at the moment," Hassabis said. "But as we get closer to AGI, we need to start thinking as a society about the types of architectures that get built." The good news, Hassabis stated, is that "most of these scientists who are working on this, we know each other quite well, we talk to each other a lot at conferences." (Raise your hand if you are only mildly reassured by this particular piece of information.)
Hassabis further added that learning to build safe AGI architectures is a kind of bottleneck that humanity needs to get through in order to emerge on the other side to a flourishing of many different types of systems that have grown out of the initial ones, with mathematical or practical guarantees around what they do.
The responsibility for preventing a runaway race dynamic from happening, Hassabis believes, rests not only with AI industry labs, but many other parts of society: governments, civil society, and academia. If we get this right, we could be in this incredible new era of radical abundance, curing all diseases, spreading consciousness to the stars, and maximum human flourishing.
Tech Giant Google Reveals Plans To Merge Its AI Research And DeepMind Divisions – Tech Business News
In a statement on Thursday, Google disclosed intentions to unite its Research and DeepMind divisions, aligning teams dedicated to crafting AI models.
The search engine giant will consolidate teams that focus on building artificial intelligence models across its Research and DeepMind divisions in its latest push to develop its AI portfolio. The move comes amid growing global concerns about AI safety and increasing calls for regulation of the technology.
Gemini, unveiled last year, boasts capabilities to process various data formats, including video, audio, and text. However, Google faced criticism following inaccuracies in historical image generation, prompting a pause in certain image generation functionalities.
While the rollout of Gemini helped boost Alphabet's share price, the model came under fire after inaccuracies in some of its historical image depictions.
Rick Osterloh, previously overseeing Google's hardware efforts, will lead the Platforms and Devices team, emphasising the pivotal role of AI in shaping user experiences.
Osterloh highlights the integration of hardware, software, and AI as crucial for transformative innovations, citing examples such as the Pixel's camera technology.
Google emphasises that the consolidation isn't indicative of a shift away from its dedication to the broader Android ecosystem. Instead, it highlights the company's amplified emphasis on integrating AI across its platforms.