Category Archives: Deep Mind

Employees claim OpenAI, Google ignoring risks of AI and should give them ‘right to warn’ public – New York Post

A group of AI whistleblowers claimed that tech giants like Google and ChatGPT creator OpenAI are locked in a reckless race to develop technology that could endanger humanity, and demanded a right to warn the public in an open letter Tuesday.

Signed by current and former employees of OpenAI, Google DeepMind and Anthropic, the open letter cautioned that AI companies have strong financial incentives to avoid effective oversight and cited a lack of federal rules on developing advanced AI.

The workers point to potential risks including the spread of misinformation, worsening inequality and even loss of control of autonomous AI systems potentially resulting in human extinction, especially as OpenAI and other firms pursue so-called artificial general intelligence, with capacities on par with or surpassing the human mind.

"Companies are racing to develop and deploy ever more powerful artificial intelligence, disregarding the risks and impact of AI," former OpenAI employee Daniel Kokotajlo, one of the letter's organizers, said in a statement. "I decided to leave OpenAI because I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence."

"They and others have bought into the 'move fast and break things' approach, and that is the opposite of what is needed for technology this powerful and this poorly understood," Kokotajlo added.

Kokotajlo, who joined OpenAI in 2022 as a researcher focused on charting AI advancements before leaving in April, has placed the probability that advanced AI will destroy or severely harm humanity in the future at a whopping 70%, according to the New York Times, which first reported on the letter.

He believes there's a 50% chance that researchers will achieve artificial general intelligence by 2027.

The letter drew endorsements from two prominent experts known as the "Godfathers of AI": Geoffrey Hinton, who warned last year that the threat of rogue AI was more urgent to humanity than climate change, and Canadian computer scientist Yoshua Bengio. Famed British AI researcher Stuart Russell also backed the letter.

The letter asks AI giants to commit to four principles designed to boost transparency and protect whistleblowers who speak out publicly.

Those include an agreement not to retaliate against employees who speak out about safety concerns and to support an anonymous system for whistleblowers to alert the public and regulators about risks.

The AI firms are also asked to support a culture of open criticism so long as no trade secrets are disclosed, and to pledge not to enter into or enforce non-disparagement or non-disclosure agreements.

As of Tuesday morning, the letter's signers include a total of 13 AI workers. Of that total, 11 are formerly or currently employed by OpenAI, including Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler.

"There should be ways to share information about risks with independent experts, governments, and the public," said Saunders. "Today, the people with the most knowledge about how frontier AI systems work and the risks related to their deployment are not fully free to speak because of possible retaliation and overly broad confidentiality agreements."

Other signers included former Google DeepMind employee Ramana Kumar and current employee Neel Nanda, who formerly worked at Anthropic.

When reached for comment, an OpenAI spokesperson said the company has a proven track record of not releasing AI products until necessary safeguards were in place.

"We're proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk," OpenAI said in a statement.

"We agree that rigorous debate is crucial given the significance of this technology, and we'll continue to engage with governments, civil society and other communities around the world," the company added.

Google and Anthropic did not immediately return requests for comment.

The letter was published just days after revelations that OpenAI had dissolved its Superalignment safety team, whose responsibilities included creating safety measures for artificial general intelligence (AGI) systems that could lead to the disempowerment of humanity or even human extinction.


Two OpenAI executives who led the team, co-founder Ilya Sutskever and Jan Leike, have since resigned from the company. Leike blasted the firm on his way out the door, claiming that safety had taken a backseat to "shiny products."

Elsewhere, former OpenAI board member Helen Toner, who was part of the group that briefly succeeded in ousting Sam Altman as the firm's CEO last year, alleged that he had repeatedly lied during her tenure.

Toner claimed that she and other board members did not learn about ChatGPT's launch in November 2022 from Altman and instead found out about its debut on Twitter.

OpenAI has since established a new safety oversight committee that includes Altman as it begins training the new version of the AI model that powers ChatGPT.

The company pushed back on Toner's allegations, noting that an outside review had determined that safety concerns were not a factor in Altman's removal.

Read more from the original source:
Employees claim OpenAI, Google ignoring risks of AI and should give them 'right to warn' public - New York Post

OpenAI and Google DeepMind staff warn AI may lead to human extinction – Verdict

Current and former employees from Google DeepMind and OpenAI have delivered a fresh warning that AI could lead to human extinction.

The open letter, signed by 11 staff members, alleged that unregulated AI could aid in the spread of misinformation and increase inequalities in the world, which could ultimately lead to human extinction.

"AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm," the letter said.

However, the letter warned that AI companies currently have only weak obligations to share this information about their systems' capabilities with governments and regulators.

OpenAI and Google DeepMind workers said AI companies cannot be trusted to deliver this crucial information of their own accord.

The letter also said that the structural and financial motives of AI companies hinder effective oversight.


"We do not believe bespoke structures of corporate governance are sufficient to change this," the letter said.

The open letter called for AI companies to allow former employees to raise risk-related concerns to the public.

"AI companies should allow their current and former employees to raise risk-related concerns about their technologies to the public, to the company's board, to regulators, or to an appropriate independent organisation with relevant expertise," the letter said.

"So long as trade secrets and other intellectual property interests are appropriately protected," it added.

The open letter called for AI companies to not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed.

GlobalData predicts that the overall AI market will be worth $909bn by 2030, having grown at a CAGR of 35% between 2022 and 2030.

In the GenAI space, revenues are expected to grow from $1.8bn in 2022 to $33bn in 2027, a CAGR of 80%.
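As a quick sanity check of those growth figures, here is a minimal Python sketch of the standard CAGR calculation; the revenue numbers are the ones quoted above, while the implied 2022 market size is an inference from the stated 35% CAGR rather than a figure reported by GlobalData.

```python
# Standard compound annual growth rate: (end / start) ** (1 / years) - 1.
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# GenAI revenue: $1.8bn in 2022 to $33bn in 2027 (figures quoted above).
print(f"GenAI CAGR 2022-2027: {cagr(1.8, 33.0, 5):.0%}")  # ~79%, in line with the ~80% cited

# Overall AI market: a 35% CAGR ending at $909bn in 2030 implies a 2022 base of roughly:
print(f"Implied 2022 market size: ${909 / 1.35 ** 8:.0f}bn")  # about $82bn (inference, not a reported figure)
```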


See the article here:
OpenAI and Google DeepMind staff warn AI may lead to human extinction - Verdict

Employees Say OpenAI and Google DeepMind Are Hiding Dangers from the Public – TIME

A group of current and former employees at leading AI companies OpenAI and Google DeepMind published a letter on Tuesday warning against the dangers of advanced AI as they allege companies are prioritizing financial gains while avoiding oversight.

Thirteen employees, eleven of whom are current or former employees of OpenAI, the company behind ChatGPT, signed the letter, entitled "A Right to Warn about Advanced Artificial Intelligence." The two other signatories are current and former employees of Google DeepMind. Six individuals are anonymous.

The coalition cautions that AI systems are powerful enough to pose serious harms without proper regulation. These risks range "from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction," the letter says.


"We're proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk," OpenAI spokeswoman Lindsey Held told the New York Times. "We agree that rigorous debate is crucial given the significance of this technology, and we'll continue to engage with governments, civil society and other communities around the world."

Google DeepMind has not commented publicly on the letter and did not respond to TIME's request for comment.

Leaders of all three leading AI companies (OpenAI, Google DeepMind and Anthropic) have talked about the risks in the past. "If we build an AI system that's significantly more competent than human experts but it pursues goals that conflict with our best interests, the consequences could be dire ... rapid AI progress would be very disruptive, changing employment, macroeconomics, and power structures ... [we have already encountered] toxicity, bias, unreliability, dishonesty," AI safety and research company Anthropic said in a March 2023 statement, which is linked to in the letter. (One of the letter signatories who currently works at Google DeepMind used to work at Anthropic.)


The group behind the letter alleges that AI companies have information about the risks of the AI technology they are working on, but because they aren't required to disclose much to governments, the real capabilities of their systems remain a secret. That means current and former employees are the only ones who can hold the companies accountable to the public, they say, and yet many have found their hands tied by confidentiality agreements that prevent workers from voicing their concerns publicly. "Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated," the group wrote.

"Employees are an important line of safety defense, and if they can't speak freely without retribution, that channel's going to be shut down," the group's pro bono lawyer Lawrence Lessig told the New York Times.

Eighty-three percent of Americans believe that AI could accidentally lead to a catastrophic event, according to research by the AI Policy Institute. Another 82% do not trust tech executives to self-regulate the industry. Daniel Colson, executive director of the Institute, notes that the letter has come out after a series of high-profile exits from OpenAI, including Chief Scientist Ilya Sutskever. Sutskever's departure also made public the non-disparagement agreements that former employees would sign to bar them from speaking negatively about the company. Failure to abide by that rule would put their vested equity at risk. "There needs to be an ability for employees and whistleblowers to share what's going on and share their concerns," says Colson. "Things that restrict the people in the know from speaking about what's actually happening really undermines the ability for us to make good choices about how to develop technology."

The letter writers have made four demands of advanced AI companies: stop forcing employees into agreements that prevent them from criticizing their employer for risk-related concerns, create an anonymous process for employees to raise their concerns to board members and other relevant regulators or organizations, support a culture of open criticism, and not retaliate against former and current employees who share risk-related confidential information after other processes have failed.

"The thing that we haven't seen at all anywhere," Colson says, "is requirements being placed upon these companies for things like safety testing, and any sort of limitation on companies being able to develop these models if they don't comply with cybersecurity requirements, or safety testing requirements."

Governments around the world have moved to regulate AI, though progress lags behind the speed at which AI is progressing. Earlier this year, the E.U. passed the world's first comprehensive AI legislation. Efforts at international cooperation have been pursued through AI Safety Summits in the U.K. and South Korea, and at the U.N. In October 2023, President Joe Biden signed an AI executive order that, among other things, requires AI companies to disclose their development and safety testing plans to the Department of Commerce.

-With additional reporting by Will Henshall/Washington

Read the original post:
Employees Say OpenAI and Google DeepMind Are Hiding Dangers from the Public - TIME

OpenAI, Anthropic and Google DeepMind workers warn of AI's dangers – The Washington Post

A handful of current and former employees at OpenAI and other prominent artificial intelligence companies warned that the technology poses grave risks to humanity in a Tuesday letter, calling on companies to implement sweeping changes to ensure transparency and foster a culture of public debate.

The letter, signed by 13 people including current and former employees at Anthropic and Google's DeepMind, said AI can exacerbate inequality, increase misinformation, and allow AI systems to become autonomous and cause significant death. Though these risks could be mitigated, corporations in control of the software have strong financial incentives to limit oversight, they said.

Because AI is only loosely regulated, accountability rests on company insiders, the employees wrote, calling on corporations to lift nondisclosure agreements and give workers protections that allow them to anonymously raise concerns.

The move comes as OpenAI faces a staff exodus. Many critics have seen prominent departures, including of OpenAI co-founder Ilya Sutskever and senior researcher Jan Leike, as a rebuke of company leaders, who some employees argue chase profit at the expense of making OpenAI's technologies safer.

Daniel Kokotajlo, a former employee at OpenAI, said he left the start-up because of the company's disregard for the risks of artificial intelligence.


"I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence," he said in a statement, referencing a hotly contested term for computers matching the power of human brains.

"They and others have bought into the 'move fast and break things' approach, and that is the opposite of what is needed for technology this powerful and this poorly understood," Kokotajlo said.

Liz Bourgeois, a spokesperson at OpenAI, said the company agrees that "rigorous debate is crucial given the significance of this technology." Representatives from Anthropic and Google did not immediately reply to a request for comment.

The employees said that absent government oversight, AI workers are the few people who can hold corporations accountable. They said that they are hamstrung by broad confidentiality agreements and that ordinary whistleblower protections are insufficient because they focus on illegal activity, and the risks that they are warning about are not yet regulated.

The letter called for AI companies to commit to four principles to allow for greater transparency and whistleblower protections. Those principles are a commitment to not enter into or enforce agreements that prohibit criticism of risks; a call to establish an anonymous process for current and former employees to raise concerns; supporting a culture of criticism; and a promise to not retaliate against current and former employees who share confidential information to raise alarms after other processes have failed.

The Washington Post in December reported that senior leaders at OpenAI raised fears about retaliation from CEO Sam Altman, warnings that preceded the chief's temporary ouster. In a recent podcast interview, former OpenAI board member Helen Toner said part of the nonprofit's decision to remove Altman as CEO late last year was his lack of candid communication about safety.

"He gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically just impossible for the board to know how well those safety processes were working," she told The TED AI Show in May.

The letter was endorsed by AI luminaries including Yoshua Bengio and Geoffrey Hinton, who are considered godfathers of AI, and renowned computer scientist Stuart Russell.

Go here to see the original:
OpenAI, Anthropic and Google DeepMind workers warn of AI's dangers - The Washington Post

OpenAI, Google DeepMind employees call for oversight, whistleblower protection to curb AI risks By Proactive Investors – Investing.com Canada

Proactive Investors - Current and former staff at AI firms OpenAI, Anthropic and Google DeepMind have published an open letter describing their concerns about the rapid advancement of AI amid a lack of oversight and the absence of whistleblower protection.

In the letter, the AI professionals cited numerous risks from AI technologies, including "the further entrenchment of existing inequalities," "manipulation and misinformation," and "the loss of control of autonomous AI systems potentially resulting in human extinction."

They wrote that they believe these risks can be mitigated, but AI companies "have strong financial incentives to avoid effective oversight" and "we do not believe bespoke structures of corporate governance are sufficient to change this."

They called upon AI companies to commit to several principles, including facilitating an anonymous process for current and former employees to raise concerns to the company's board, regulators and appropriate independent organizations, and for these companies to support a culture of open criticism.

They also urged companies not to retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed.

They noted that ordinary whistleblower protections are insufficient because they are focused on illegal activity, whereas the risks they are concerned about are not yet regulated.

"So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public," they wrote.

"Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues."


See more here:
OpenAI, Google DeepMind employees call for oversight, whistleblower protection to curb AI risks By Proactive Investors - Investing.com Canada

Inside DeepMind's effort to understand its own creations – Semafor

With missteps at industry leader OpenAI possibly providing an opening for rivals touting safety advances, Google DeepMind unveiled fresh details of how it's building systems to catch potentially dangerous leaps in artificial intelligence capabilities.

OpenAI has tried to reassure the public, announcing a new safety committee earlier this week, after a top safety researcher joined rival firm Anthropic. That move came before actress Scarlett Johansson accused Sam Altman's firm of using her voice without her permission for ChatGPT.

With AI guardrails becoming a possible competitive advantage, Google DeepMind executives told Semafor that the methods for predicting and identifying threats will likely involve a combination of humans and what the company calls "auto evaluations," in which AI models analyze other models or even themselves.
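The article does not spell out how these auto evaluations are built; as a rough, hedged sketch of the general pattern it describes, one model grading another model's output against a rubric, the snippet below uses a hypothetical query_model callable standing in for whatever inference API is available. It is not a Google DeepMind interface.

```python
from typing import Callable

def auto_evaluate(
    query_model: Callable[[str, str], str],  # hypothetical: (model_name, prompt) -> response text
    candidate_model: str,
    evaluator_model: str,
    prompt: str,
    rubric: str,
) -> dict:
    """Ask one model to answer a prompt, then ask a second model to grade that answer."""
    answer = query_model(candidate_model, prompt)
    grading_prompt = (
        f"Rubric:\n{rubric}\n\n"
        f"Task:\n{prompt}\n\n"
        f"Candidate answer:\n{answer}\n\n"
        "Reply with a score from 1 to 5 and one sentence of justification."
    )
    verdict = query_model(evaluator_model, grading_prompt)
    return {"prompt": prompt, "answer": answer, "verdict": verdict}
```

Passing the same model name for both roles covers the self-evaluation case the article mentions.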

The effort, though, has become particularly challenging, now that the most advanced AI models have made the jump to multimodality, meaning they were trained not only on text, but video and audio as well, they said.

"We have some of the best people in the world working on this, but I think everybody recognizes the field of science and evaluations is still very much an area where we need additional investment, research, collaboration and also best practices," said Tom Lue, general counsel and head of governance at Google DeepMind.

Google, which released a comprehensive new framework earlier this month to assess the dangers of AI models, has been working on the problem for years. But the efforts have ramped up now that foundation models like GPT and DeepMind's Gemini have ignited a global, multibillion-dollar race to increase the capabilities of AI models.

The challenge, though, is that the massive foundation models that power these popular products are still in their infancy. They are not yet powerful enough to pose any imminent threat, so researchers are trying to design a way to analyze a technology that has not yet been created.

When it comes to new multimodal models, automated evaluation is still in the distant horizon, said Helen King, Google DeepMind's senior director of responsibility. "We haven't matured the evaluation approach yet and actually trying to automate that is almost premature," she said.

See the original post:
Inside DeepMind's effort to understand its own creations - Semafor

Looking ahead to the AI Seoul Summit – Google DeepMind

How summits in Seoul, France and beyond can galvanize international cooperation on frontier AI safety

Last year, the UK Government hosted the first major global Summit on frontier AI safety at Bletchley Park. It focused the world's attention on rapid progress at the frontier of AI development and delivered concrete international action to respond to potential future risks, including the Bletchley Declaration; new AI Safety Institutes; and the International Scientific Report on Advanced AI Safety.

Six months on from Bletchley, the international community has an opportunity to build on that momentum and galvanize further global cooperation at this week's AI Seoul Summit. We share below some thoughts on how the summit, and future ones, can drive progress towards a common, global approach to frontier AI safety.

Since Bletchley, there has been strong innovation and progress across the entire field, including from Google DeepMind. AI continues to drive breakthroughs in critical scientific domains, with our new AlphaFold 3 model predicting the structure and interactions of all life's molecules with unprecedented accuracy. This work will help transform our understanding of the biological world and accelerate drug discovery. At the same time, our Gemini family of models has already made products used by billions of people around the world more useful and accessible. We've also been working to improve how our models perceive, reason and interact, and recently shared our progress in building the future of AI assistants with Project Astra.

This progress on AI capabilities promises to improve many people's lives, but also raises novel questions that need to be tackled collaboratively in a number of key safety domains. Google DeepMind is working to identify and address these challenges through pioneering safety research. In the past few months alone, we've shared our evolving approach to developing a holistic set of safety and responsibility evaluations for our advanced models, including early research evaluating critical capabilities such as deception, cyber-security, self-proliferation, and self-reasoning. We also released an in-depth exploration into aligning future advanced AI assistants with human values and interests. Beyond LLMs, we recently shared our approach to biosecurity for AlphaFold 3.

This work is driven by our conviction that we need to innovate on safety and governance as fast as we innovate on capabilities, and that both things must be done in tandem, continuously informing and strengthening each other.

Maximizing the benefits from advanced AI systems requires building international consensus on critical frontier safety issues, including anticipating and preparing for new risks beyond those posed by present day models. However, given the high degree of uncertainty about these potential future risks, there is clear demand from policymakers for an independent, scientifically-grounded view.

That's why the launch of the new interim International Scientific Report on the Safety of Advanced AI is an important component of the AI Seoul Summit, and we look forward to submitting evidence from our research later this year. Over time, this type of effort could become a central input to the summit process and, if successful, we believe it should be given a more permanent status, loosely modeled on the function of the Intergovernmental Panel on Climate Change. This would be a vital contribution to the evidence base that policymakers around the world need to inform international action.

We believe these AI summits can provide a regular forum dedicated to building international consensus and a common, coordinated approach to governance. Keeping a unique focus on frontier safety will also ensure these convenings are complementary and not duplicative of other international governance efforts.

Evaluations are a critical component needed to inform AI governance decisions. They enable us to measure the capabilities, behavior and impact of an AI system, and are an important input for risk assessments and designing appropriate mitigations. However, the science of frontier AI safety evaluations is still early in its development.

This is why the Frontier Model Forum (FMF), which Google launched with other leading AI labs, is engaging with AI Safety Institutes in the US and UK and other stakeholders on best practices for evaluating frontier models. The AI summits could help scale this work internationally and help avoid a patchwork of national testing and governance regimes that are duplicative or in conflict with one another. It's critical that we avoid fragmentation that could inadvertently harm safety or innovation.

The US and UK AI Safety Institutes have already agreed to build a common approach to safety testing, an important first step toward greater coordination. We think there is an opportunity over time to build on this towards a common, global approach. An initial priority from the Seoul Summit could be to agree a roadmap for a wide range of actors to collaborate on developing and standardizing frontier AI evaluation benchmarks and approaches.

It will also be important to develop shared frameworks for risk management. To contribute to these discussions, we recently introduced the first version of our Frontier Safety Framework, a set of protocols for proactively identifying future AI capabilities that could cause severe harm and putting in place mechanisms to detect and mitigate them. We expect the Framework to evolve significantly as we learn from its implementation, deepen our understanding of AI risks and evaluations, and collaborate with industry, academia and government. Over time, we hope that sharing our approaches will facilitate work with others to agree on standards and best practices for evaluating the safety of future generations of AI models.
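The Framework's actual thresholds are not reproduced here; purely to illustrate the general shape of such a protocol, capability triggers mapped to pre-agreed responses, the sketch below uses invented capability names, triggers and mitigations that should not be read as Google DeepMind's own.

```python
# Hypothetical illustration only: not the contents of the Frontier Safety Framework.
FRONTIER_SAFETY_POLICY = {
    "cyber_offense": {
        "trigger": "model autonomously finds and exploits known vulnerabilities in a test range",
        "mitigations": ["restrict API access", "notify security reviewers"],
    },
    "self_proliferation": {
        "trigger": "model acquires resources and copies itself in a sandboxed evaluation",
        "mitigations": ["pause further scaling", "escalate to the governance board"],
    },
}

def required_mitigations(eval_results: dict[str, bool]) -> list[str]:
    """Collect the pre-agreed mitigations for every capability whose trigger fired."""
    actions: list[str] = []
    for capability, fired in eval_results.items():
        if fired and capability in FRONTIER_SAFETY_POLICY:
            actions.extend(FRONTIER_SAFETY_POLICY[capability]["mitigations"])
    return actions

print(required_mitigations({"cyber_offense": True, "self_proliferation": False}))
```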

Many of the potential risks that could arise from progress at the frontier of AI are global in nature. As we head into the AI Seoul Summit, and look ahead to future summits in France and beyond, we're excited for the opportunity to advance global cooperation on frontier AI safety. It's our hope that these summits will provide a dedicated forum for progress towards a common, global approach. Getting this right is a critical step towards unlocking the tremendous benefits of AI for society.

View original post here:
Looking ahead to the AI Seoul Summit - Google DeepMind

DeepMind’s AI program AlphaFold3 can predict the structure of every protein in the universe and show how they … – Livescience.com

DeepMind has unveiled the third version of its artificial intelligence (AI)-powered structural biology software, AlphaFold, which models how proteins fold.

Structural biology is the study of the molecular basis of biological materials, including proteins and nucleic acids, and aims to reveal how they are structured, how they work, and how they interact.

AlphaFold3 helps scientists more accurately predict how proteins (large molecules that play a critical role in all life forms, from plants and animals to human cells) interact with other biological molecules, including DNA and RNA. Doing so will enable scientists to truly understand life's processes, DeepMind representatives wrote in a blog post.

By comparison, its predecessors, AlphaFold and AlphaFold2, could only predict the shapes that proteins fold into. That was still a major scientific breakthrough at the time.

AlphaFold3's predictions could help scientists develop bio-renewable materials, crops with greater resistance, new drugs and more, the research team wrote in a study published May 8 in the journal Nature.


Given a list of molecules, the AI program can show how they fit together. It does this not only for large molecules like proteins, DNA and RNA but also for small molecules known as ligands, which bind to receptors on large proteins like a key fitting into a lock.
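To make the idea of a list of molecules concrete, here is a purely hypothetical sketch of how such a prediction job might be written down as data: one protein chain, a short DNA segment and a small-molecule ligand to be assembled into a single complex. The field names and example sequences are invented for illustration and are not the AlphaFold Server's actual input schema.

```python
# Hypothetical job description for a protein-DNA-ligand complex (illustrative only).
prediction_job = {
    "name": "example_protein_dna_ligand_complex",
    "entities": [
        {"type": "protein", "sequence": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "copies": 1},
        {"type": "dna", "sequence": "ATGCGTACCTGA", "copies": 2},
        {"type": "ligand", "id": "ATP", "copies": 1},
    ],
}

def summarize(job: dict) -> str:
    """One-line summary of what the model would be asked to assemble."""
    parts = [f'{entity["copies"]}x {entity["type"]}' for entity in job["entities"]]
    return f'{job["name"]}: ' + ", ".join(parts)

print(summarize(prediction_job))  # example_protein_dna_ligand_complex: 1x protein, 2x dna, 1x ligand
```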


AlphaFold3 also models how some of these biomolecules (organic molecules produced by living things) are chemically modified. Disruptions in these chemical modifications can play a role in diseases, according to the blog post.

AlphaFold3 can perform these calculations because its underlying machine-learning architecture and training data encompass every type of biomolecule.

The researchers claim that AlphaFold3 is 50% more accurate than current software-based methods of predicting protein structures and their interactions with other molecules.

For example, in drug discovery, Nature reported that AlphaFold3 outperformed two docking programs, which researchers use to model the affinity of small molecules and proteins when they bind together, as well as RoseTTAFold All-Atom, a neural network for predicting biomolecular structures.

Frank Uhlmann, a biochemist at the Francis Crick Institute in London, told Nature that he's been using the tool to predict the structure of proteins that interact with DNA when copying genomes, and that experiments show the predictions are mostly accurate.

However, unlike its predecessors, AlphaFold 3 is no longer open source. This means scientists cannot use custom versions of the AI model, or access its code or training data publicly, for their research work.

Scientists looking to use AlphaFold3 for non-commercial research can access it for free via the recently launched AlphaFold Server. They can input their desired molecular sequences and gain predictions within minutes. But they can only perform 20 jobs per day.

Follow this link:
DeepMind's AI program AlphaFold3 can predict the structure of every protein in the universe and show how they ... - Livescience.com

Nature earns ire over lack of code availability for Google DeepMind protein folding paper – Retraction Watch

A group of researchers is taking Nature to task for publishing a paper earlier this month about Google DeepMind's protein folding prediction program without requiring the authors to publish the code behind the work.

Roland Dunbrack, of Fox Chase Cancer Center in Philadelphia, peer-reviewed the paper but was not given access to the code during the review, despite repeated requests, write the authors of a letter, including Dunbrack, submitted today, May 14, to Nature.

A Nature podcast said AlphaFold3, unlike AlphaFold2, can accurately predict protein-molecule complexes containing DNA, RNA and more. Although the new version is restricted to non-commercial use, researchers are excited by its greater range of predictive abilities and the prospect of speedier drug discovery.

Not everyone was excited. The authors of the letter, which co-author Stephanie A. Wankowicz of the University of California, San Francisco told Retraction Watch was submitted today to Nature, write they were disappointed with the lack of code, or even executables accompanying the publication of AlphaFold3 in Nature. They continue:

Although AlphaFold3 expands AlphaFold2s capacities to include small molecules, nucleic acids, and chemical modifications, it was released without the means to test and use the software in a high-throughput manner. This does not align with the principles of scientific progress, which rely on the ability of the community to evaluate, use, and build upon existing work. The high-profile publication advertises capabilities that remain locked behind the doors of the parent company.

The authors, who are circulating the letter for additional signatures, write that the model's limited availability on a hosted web server, capped at ten predictions per day, restricts the scientific community's capacity to verify the broad claims of the findings or apply the predictions on a large scale. Specifically, the inability to make predictions on novel organic molecules akin to chemical probes and drugs, one of the central claims of the paper, makes it impossible to test or use this method.

A May 8 news story by the independent team of journalists at Nature noted the restrictions. Nature editor in chief Magdalena Skipper told Retraction Watch:

Nature has a long-standing policy designed to facilitate the availability of data, materials and code upon reasonable request. While seeking to enhance transparency at every opportunity, Nature accepts that there may be circumstances under which research data or code are not openly available. When making a decision on data and code availability, we reflect on many different factors, including the potential implications for biosecurity and the ethical challenges this presents. In such cases we work with the authors to provide alternatives that will support reproducibility, for example through the provision of pseudocode, which is made available to the reviewers during peer review.

As noted in the code availability statement in the paper: AlphaFold3 is available as a non-commercial usage only server at https://www.alphafoldserver.com, with restrictions on allowed ligands and covalent modifications. Pseudocode describing the algorithms is available in the Supplementary Information.

The pseudocode, however, will require months of effort to turn into workable code that approximates the performance, wasting valuable time and resources, the authors of the letter write. Even if such a reimplementation is attempted, restricted access raises questions about whether the results could be fully validated.

The authors of the letter continue:

When journals fail to enforce their written policies about making code available to reviewers and alongside publications, they demonstrate how these policies are applied inequitably and how editorial decisions do not align with the needs of the scientific community. While there is an ever-changing landscape of how science is performed and communicated, journals should uphold their role in the community by ensuring that science is reproducible upon dissemination, regardless of who the authors are.


Continued here:
Nature earns ire over lack of code availability for Google DeepMind protein folding paper - Retraction Watch

Well played Google! DeepMind shows off Project Astra watching the OpenAI ChatGPT Voice announcement – Tom’s Guide

OpenAI held its big unveiling earlier this week, showing off the much more human-sounding and emotive GPT-4o model with a big push for its Voice Mode.

With Google following up with its I/O keynote, it appears the tech giant was keeping more than just an eye on its AI rivals. In fact, it held a little AI screening of the event.

Michael Chang from the Google DeepMind Gemini and Project Astra team shared a video showing that the company's Gemini chatbot was watching GPT-4o at work.

Taking to X, Chang said "Gemini and I also got a chance to watch the @OpenAI live announcement of gpt4o, using Project Astra!"

"Congrats to the OpenAI team, super impressive work!"

In the video Gemini is running transcription for the GPT-4o reveal event, attributing quotes to the correct speakers and providing a voice commentary.

Chang asks Gemini to give him a summary of "what just happened" as part of the demo, too, and Gemini explains what was shown on stage, even referencing the OpenAI team by name.


At the point where GPT-4o works to solve an algebra equation, Gemini solves the equation while OpenAI's model works through the steps. Gemini even recounts the steps to solve and gives kudos to its counterpart for how it solved the equation. Impressive stuff.


See the rest here:
Well played Google! DeepMind shows off Project Astra watching the OpenAI ChatGPT Voice announcement - Tom's Guide