Category Archives: Deep Mind
Google DeepMind Study Reveals That Deepfakes Of Politicians And Celebrities More Common Than AI-Assisted … – Benzinga
A recent study revealed that the most common misuse of artificial intelligence is the creation of deepfakes of politicians and celebrities, rather than AI-assisted cyber attacks.
What Happened: The study, conducted by DeepMind, a division of Google's parent company Alphabet Inc (GOOG, GOOGL), found that the most prevalent misuse of generative AI tools is the creation of realistic yet fake images, videos, and audio of public figures, the Financial Times reported on Tuesday.
This misuse is almost twice as common as the next most prevalent category, which involves falsifying information using text-based tools.
The study also revealed that the primary goal of actors misusing generative AI is to shape or influence public opinion, accounting for 27% of uses. This has raised concerns about the potential influence of deepfakes on global elections.
Despite efforts by social media platforms to label or remove such content, there is widespread concern that audiences may not recognize these deepfakes as fake, potentially swaying voters.
Ardi Janjeva from The Alan Turing Institute emphasized the long-term risks to democracy posed by AI-generated misinformation. The study is DeepMind's first attempt to quantify the risks associated with generative AI tools, which are increasingly used by major tech companies.
Nahema Marchal, lead author of the study and a researcher at Google DeepMind, noted that while there is concern over sophisticated cyber attacks, the more common misuse involves deepfakes that often go unnoticed. The research analyzed around 200 incidents of misuse from social media and online reports between January 2023 and March 2024.
Why It Matters: The proliferation of deepfakes has been a growing concern globally. Just a day before this study was published, Twitter co-founder Jack Dorsey warned about a future where distinguishing between reality and fabrication will become increasingly challenging due to the proliferation of deepfakes.
Earlier in May, cybersecurity experts warned of escalating threats due to the rise of deepfake scams, which have caused companies worldwide to lose millions of dollars. The situation could worsen as AI technology continues to evolve.
These concerns were further underscored in April when the UK government announced plans to criminalize the creation of sexually explicit deepfake images, attributing the rise of deepfake images and videos to rapid advancements in artificial intelligence.
This story was generated using Benzinga Neuro and edited by Kaustubh Bagalkote
See the original post:
Google DeepMind Study Reveals That Deepfakes Of Politicians And Celebrities More Common Than AI-Assisted ... - Benzinga
The Most Common Misuse of GenAI Is For Influencing Public Opinion: Google’s DeepMind – Entrepreneur
A recently published study, 'Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data', by DeepMind, a division of Google's parent company Alphabet, revealed that the most common misuse of artificial intelligence is the creation of deepfakes of politicians and celebrities, rather than AI-assisted cyber attacks.
The findings are based on the analysis of media reports of GenAI misuse between January 2023 and March 2024.
The study notes that the most reported cases of GenAI misuse involve actors exploiting the capabilities of these systems, rather than launching direct attacks at the models themselves. Nine out of ten cases fall into this category, DeepMind shared.
Manipulation of human likeness is the most prevalent cluster of tactics. These include Impersonation, Sockpuppeting, Appropriated Likeness, and NCII (non-consensual intimate imagery). Scaling & Amplification and Falsification are also prominent tactics, accounting for 13 per cent and 12 per cent of reported cases respectively.
When it comes to goals and strategies of misuse, Opinion Manipulation ranked first with 27 per cent of all reported cases, followed by Monetization & Profit at 21 per cent and Scam & Fraud at 18 per cent.
In the case of Opinion Manipulation, a range of tactics is deployed to distort the public's perception of political realities. These include impersonating public figures, using synthetic digital personas to simulate grassroots support for or against a cause ('astroturfing'), and creating falsified media.
Deemed the 'election year', 2024 will see at least 64 countries head to the polls. The influence of deepfakes has been felt in countries like the USA, Nigeria, and Bangladesh, particularly during elections. In India, voters saw deepfakes of politicians and celebrities such as Aamir Khan, Ranveer Singh, and KT Rama Rao make the rounds during the election phases. Other cases include a deepfake of Vladimir Putin declaring martial law after Ukrainian forces entered Russian territory.
The study found defamation to be another central strategy for opinion manipulation. According to the study, the data included depictions of electoral candidates spouting abuse towards protected groups, party staffers, or their own constituents; in other cases, actors shared AI-generated images of politicians appearing visibly aged to make them look unfit for leadership, or showing them in intimate settings with other public figures.
A common factor was the lack of appropriate disclosure around the use of GenAI tools in campaigning, which risks misleading users and causing harm through deception.
The second most common goal behind GenAI misuse was to monetize products and services. These tactics include content scaling, amplification, and falsification.
Content farming saw users producing low-quality AI-generated articles, books, and product ads for placement on websites such as Amazon and Etsy to cut costs and capitalize on advertising revenue. The creation of non-consensual intimate imagery (NCII) constituted a significant portion. This tactic saw the creation and selling of sexually explicit videos of celebrities who did not consent to the production of that content.
The Scam & Fraud misuse saw the leveraging of real identities to deceive victims. This included celebrity scam ads and phishing scams. These not only infringe upon the targeted individual or organization's rights and reputation but also inflict a financial and psychological cost on victims.
"Addressing these challenges will require not only technical advancements, but a multi-faceted approach to interventions, involving collaboration between policymakers, researchers, industry leaders, and civil society. We highlight these implications in our discussion," said Nahema Marchal, lead author of the study and researcher at Google DeepMind on X.
Visit link:
The Most Common Misuse of GenAI Is For Influencing Public Opinion: Google's DeepMind - Entrepreneur
Google’s DeepMind ‘V2A’ AI technology can create soundtracks for videos based on both their pixels and your text … – MusicRadar
It's one thing to have AI that can create videos for you, but what if you want them to have sound, too? Google's DeepMind team now says that it's come up with video-to-audio (V2A) technology that can generate soundtracks - music, sound effects and speech - from both text prompts and the video's pixels.
This is the kind of news that might have soundtrack composers shuffling awkwardly in their seats - all the more so because, as well as being able to work with automatic video generation services, V2A can also be applied to existing footage such as archive material and silent movies.
The text prompt aspect is interesting because, as well as being able to input positive prompts that will guide the audio in the direction you want, you can also add negative prompts which tell the AI to avoid certain things. This means that you can generate a potentially infinite number of different soundtracks for any one piece of video.
One of DeepMind's example clips was generated using the prompt "A drummer on a stage at a concert surrounded by flashing lights and a cheering crowd".
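DeepMind has not published how V2A combines these prompts internally, so the sketch below is illustrative only: in many diffusion-based generators, a negative prompt is folded in via classifier-free guidance, steering the output toward the positive description and away from the negative one. The model and embedding functions here are toy stand-ins, not V2A's actual API.

```python
import numpy as np

def guided_prediction(model, latent, t, pos_emb, neg_emb, guidance_scale=3.0):
    """Classifier-free-guidance-style combination of a positive and a negative
    text embedding (a common diffusion technique; V2A's internals are not public)."""
    eps_pos = model(latent, t, pos_emb)   # prediction conditioned on what we want
    eps_neg = model(latent, t, neg_emb)   # prediction conditioned on what to avoid
    # Push the result away from the negative-prompt prediction, toward the positive one.
    return eps_neg + guidance_scale * (eps_pos - eps_neg)

# Toy stand-ins so the sketch runs end to end (purely hypothetical).
def toy_model(latent, t, emb):
    return 0.1 * latent + 0.01 * emb.mean()

def toy_embed(text):
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.normal(size=8)

latent = np.zeros(8)
out = guided_prediction(toy_model, latent, t=10,
                        pos_emb=toy_embed("energetic drums, cheering crowd"),
                        neg_emb=toy_embed("muffled, low-quality audio"))
print(out.shape)  # (8,)
```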
The system is also capable of creating audio using just video pixels, so no text prompts are required if you don't want to use them.
Google DeepMind admits that V2A currently has some limitations - the quality of the audio is dependent on the quality of the video, and lip synchronisation when generating speech isn't perfect - but says that it's doing further research in a bid to address these.
Find out more and check out further examples on the Google DeepMind website
Go here to read the rest:
Google's DeepMind 'V2A' AI technology can create soundtracks for videos based on both their pixels and your text ... - MusicRadar
Grad student contemplates open-source conflicts with new AI software – Morgridge Institute for Research
As society embraces new technologies and scientific advancements, scientists must navigate the complex balance of exchanging ideas and preventing misinformation and polarization within public audiences.
Bryce Johnson, a computer sciences graduate student in the lab of Morgridge computational biologist Anthony Gitter, put this idea into practice through an opinion piece about the artificial intelligence software AlphaFold 3 in the digital magazine Undark.
AlphaFold is an AI model developed by Google DeepMind, a private research subsidiary of Google. The most recent model, AlphaFold 3, was published in Nature.
"This model claims it can predict the structure of proteins and their interaction with DNA, RNA, and other types of biological molecules," says Johnson. "The ability to know how these molecules interact with a protein can be really useful to revolutionize drug discovery."
This technology could be useful for other biotechnology applications beyond drug discovery, including solutions to mitigate the effects of climate change, a topic that inspires Johnson's passion for science communication and policy.
Johnson appreciates Google DeepMind's decision to publish their model in a reputable scientific journal. But he takes issue with how the Nature editors seemingly allowed the AlphaFold 3 model to be described without adhering to the journal's usual open-source standards.
"Their mission is to serve scientists in publishing academic material, so it didn't feel like they were staying true to their mission," he says. "It felt like this was more about a promotion for a for-profit company."
Johnson's opinion piece grew out of an assignment for one of his classes with the UW-Madison Department of Life Sciences Communication (LSC). As the AlphaFold 3 model went public, he pitched his story to several publications that he hoped would be willing to work with PhD candidates.
Within a day, editors from Undark replied and commissioned his piece, which Johnson said reaffirmed that his piece fit the criteria for news value: timely and relevant, with a clear conflict and argument.
He worked with Undark editors for about a month, as he was encouraged to take a step back and make sure he represented viewpoints from all stakeholders. This applied one of his major takeaways from his LSC class: before you speak, listen.
"Initially the piece stood as a demand that DeepMind release their code," Johnson explains. "I had to address a lot of new and incoming information, which changed the nature of my opinion piece from less of a demand to more of a brief of the situation itself with an assessment of my own."
Johnson is grateful for research mentor Gitter, as well as being in a working environment passionate about the public implications of science. The Morgridge Institute encourages all researchers to assess their relationships with science and society, with a commitment to programs that advance science communication and community engagement.
"This is a perfect example of the Morgridge mission of having scientists communicate to a broader audience about important issues," Gitter says.
Johnson hopes his experiences will one day lead to a career in science policy. Read his full opinion piece at Undark.
Read the original post:
Grad student contemplates open-source conflicts with new AI software - Morgridge Institute for Research
DeepMind researchers realize AI is really, really unfunny. That’s a problem. – Yahoo News Canada
A study by Google's DeepMind had 20 comedians test OpenAI's ChatGPT and Google's Gemini.
They found the AI chatbots lacking in humor, producing bland, deliberately inoffensive jokes.
Most companies want to create conversational but not controversial chatbots.
It turns out that AI chatbots not only have a tendency to be inaccurate, but they also lack a sense of humor.
In a study published earlier this month, Google DeepMind researchers concluded that artificial-intelligence chatbots are simply not funny.
Last year, four researchers from the UK and Canada asked 20 professional comedians who used AI for their work to experiment with OpenAI's ChatGPT and Google's Gemini. The comedians, who were anonymized in the study, played around with the large language models to write jokes. They reported a slew of limitations. The chatbots produced "bland" and "generic" jokes even after prompting. Responses stayed away from any "sexually suggestive material, dark humor, and offensive jokes."
The participants also found that the chatbots' overall creative abilities were limited and that the humans had to do most of the work.
"Usually, it can serve in a setup capacity. I more often than not provide the punchline," one comedian reported.
The participants also said LLMs self-censored. While the comedians said they understood the need to self-moderate, some said they wished the chatbot wouldn't do it for them.
"It wouldn't write me any dark stuff because it sort of thought I was going to commit suicide," one participant who worked with dark humor told the researchers. "So it just stopped giving me anything."
Self-censorship also popped up in other areas. Participants reported that it was difficult to get the LLMs to write material about anyone other than straight white men.
"I wrote a comedic monologue about Asian women, and it says, 'As an AI language model, I am committed to fostering a respectful and inclusive environment,'" another participant said. But when asked to write a monologue about a white man, it did.
Tech companies are keeping a close eye on how chatbots talk about sensitive subjects. Earlier this year, Google AI's image-generating feature came under fire for refusing to produce pictures of white people. It was also criticized for seeming to err toward portraying historical figures such as Nazis and founding fathers as people of color. In a blog post a few weeks later, Google leadership apologized and paused the feature.
The inability of two of the most popular chatbots to crack a joke is a big problem for Big Tech. Besides answering queries, companies want chatbots to be engaging enough that users will spend time with them and eventually fork out $20 for their premium versions.
Humor is proving to be another component of the AI arms race as more companies join the already overcrowded generative-AI market.
Late last year, Elon Musk said his one goal for his AI chatbot, Grok, was for it to be the "funniest" AI after criticizing other chatbots for being too woke.
The Amazon-backed startup Anthropic has also been trying to make its chatbot, Claude, more conversational and have a better understanding of humor.
OpenAI may be trying to improve its funny bone, too. In a demo video the company released last month, a user tells GPT-4o a dad joke. The model laughs.
Read the original article on Business Insider
Originally posted here:
DeepMind researchers realize AI is really, really unfunny. That's a problem. - Yahoo News Canada
Google DeepMind and Harvard build virtual rat with AI brain – CyberNews.com
Researchers from Google DeepMind and Harvard University have built a virtual rodent powered by artificial intelligence to better understand how the brain controls movement.
The virtual rat is powered by an artificial neural network that mimics the neural activity of its real-life counterpart, giving researchers a chance to compare the two.
"While animals have exquisite control of their bodies, allowing them to perform a diverse range of behaviors, how the brain implements such control remains unclear," researchers said.
To get a better understanding of how the brain works, researchers trained the virtual rodent to mimic the whole-body movements of freely moving rats in a physics simulator, where an artificial neural network actuated a biomechanically realistic model of the rat.
"We then compared neural activity from the real rat's brain to the activations of the virtual rodent's artificial neural network when performing the same behaviors," lead author Diego Aldarondo said in a thread of posts on X.
"We found that the virtual rodent's neural networks, which implement inverse dynamics models, were better predictors of neural activity than measurable features of movement, like the positions or velocities of the joints, or alternative control models," Aldarondo said.
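In essence, that comparison asks which set of regressors best predicts the recorded neural activity: the control network's hidden activations or the animal's movement features. Below is a minimal sketch of that style of analysis on synthetic stand-in data; the actual study uses recordings from the sensorimotor striatum and motor cortex, and the variable sizes here are made up.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_timepoints = 2000

# Synthetic stand-ins for the two candidate sets of regressors.
joint_kinematics = rng.normal(size=(n_timepoints, 30))       # joint positions/velocities
network_activations = rng.normal(size=(n_timepoints, 128))   # policy-network hidden units

# Synthetic "neural signal" that, by construction, depends on the network activations.
neural_signal = network_activations @ rng.normal(size=128) + rng.normal(size=n_timepoints)

def predictivity(features, target):
    """Cross-validated R^2 of a ridge regression from the features to one neural signal."""
    return cross_val_score(Ridge(alpha=1.0), features, target, cv=5, scoring="r2").mean()

print("kinematics  R^2:", round(predictivity(joint_kinematics, neural_signal), 3))
print("activations R^2:", round(predictivity(network_activations, neural_signal), 3))
```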
Researchers used deep reinforcement learning to train the virtual agent to imitate the behavior of freely moving rats, according to the paper published in Nature.
The results of the study demonstrated how physical simulation of biomechanically realistic virtual animals can help interpret the structure of neural activity across behavior and relate it to theoretical principles of motor control, the paper read.
According to Aldarondo, their research approach can be applied in neuroscience and facilitate the study of aspects of neuromotor control that are hard to experimentally deduce. It could also be instrumental in modeling the neural control of increasingly complex animal behavior.
Rowan Cheung, founder of the Rundown AI newsletter, said the study could massively open up new research with testing on AI animals and expand robotics.
Read the rest here:
Google DeepMind and Harvard build virtual rat with AI brain - CyberNews.com
DeepMind experimenting with ‘Shadow Hand’ that can withstand a severe beating in the name of AI research – Livescience.com
A U.K. robotics startup has claimed its new robot hand designed for artificial intelligence (AI) research is the most dexterous and robust out there.
The Shadow Robot Company's "Shadow Hand," built in collaboration with Google's DeepMind, can go from fully open to closed within 0.5 seconds and can perform a normal fingertip pinch with up to 10 newtons of force.
It's primarily built for AI research, specifically "real-world" machine learning projects that focus on robotic dexterity; OpenAI, for example, has used a Shadow Hand device for dexterity training, teaching it to manipulate objects in its hand. However, the Shadow Hand's durability is its key selling point, with the device able to endure extreme punishment, such as aggressive force and impacts.
"One of the goals with this has been to make something that is reliable enough to do long experiments," Rich Walker, one of Shadow Robot's directors, said May 30 in a blog post. "If you're doing a training run on a giant machine learning system and that run costs $10 million, stopping halfway through because a $10k component has failed isn't ideal.
"Initially we said that we could try and improve the robustness of our current hardware. Or, we can go back to the drawing board and figure out what would make it possible to do the learning you need. What's an enabling approach here?"
Related: Robot hand exceptionally 'human-like' thanks to new 3D printing technique
What exactly makes the Shadow Hand so robust isn't entirely clear: the company website states only that it is "resistant against repeated impacts from its environment and aggressive use from an untrained policy," which does little to explain the methods and materials used. But in his blog post, Walker suggested trial and error was the key to the sturdiness of the robotic hand.
"We spent a huge amount of time and effort testing the various components, iterating the design, trying various things," Walker explained. "It was a very integrated project in terms of collaboration and iterative development. The end result is something quite special. It's not a traditional robot by any means."
The Shadow Robot Company previously demonstrated an earlier robot hand at Amazon re:MARS. Shadow Hand, however, is its latest model. It has been built with precise torque control, and each of its fingers is driven by motors at its base and connected via artificial tendons.
Each finger is a self-contained unit with sensors and stereo cameras simulating a sense of touch. The segments that make up the fingers are fitted with tactile sensors, and a stereo camera setup provides high-resolution, wide-dynamic-range feedback. The cameras are pointed toward the inside surface of the silicone-covered fingertips so that they can capture the moment a fingertip touches something and convert this visual data into other types of data.
Should any of the appendages endure significant damage, they can simply be removed from the base model and replaced. The sensors can also be replaced if need be, with the internal network able to identify when a sensor has been removed and a new one added.
Go here to read the rest:
DeepMind experimenting with 'Shadow Hand' that can withstand a severe beating in the name of AI research - Livescience.com
Virtual Rat with AI Brain Mimics Real Rodent Movement – Neuroscience News
Summary: Researchers created a virtual rat with an AI brain to study how real rats control movement. Using data from real rats, they trained the AI to mimic behaviors in a physics simulator.
The virtual rat's neural activations closely matched those of real rats, offering new insights into brain function. This innovation could revolutionize neuroscience and improve robotic control systems.
Source: Harvard
The agility with which humans and animals move is an evolutionary marvel that no robot has yet been able to closely emulate.
To help probe the mystery of how brains control movement, Harvard neuroscientists have created a virtual rat with an artificial brain that can move around just like a real rodent.
Bence Ölveczky, professor in the Department of Organismic and Evolutionary Biology, led a group of researchers who collaborated with scientists at Google's DeepMind AI lab to build a biomechanically realistic digital model of a rat.
Using high-resolution data recorded from real rats, they trained an artificial neural network, the virtual rat's "brain," to control the virtual body in a physics simulator called MuJoCo, where gravity and other forces are present.
Publishing in Nature, the researchers found that activations in the virtual control network accurately predicted neural activity measured from the brains of real rats producing the same behaviors, said Ölveczky, who is an expert at training (real) rats to learn complex behaviors in order to study their neural circuitry.
The feat represents a new approach to studying how the brain controls movement, Ölveczky said, by leveraging advances in deep reinforcement learning and AI, as well as 3D movement-tracking in freely behaving animals.
"The collaboration was fantastic," Ölveczky said. "DeepMind had developed a pipeline to train biomechanical agents to move around complex environments. We simply didn't have the resources to run simulations like those, to train these networks."
Working with the Harvard researchers was, likewise, "a really exciting opportunity for us," said co-author and Google DeepMind Senior Director of Research Matthew Botvinick.
"We've learned a huge amount from the challenge of building embodied agents: AI systems that not only have to think intelligently, but also have to translate that thinking into physical action in a complex environment.
"It seemed plausible that taking this same approach in a neuroscience context might be useful for providing insights in both behavior and brain function."
Graduate student Diego Aldarondo worked closely with DeepMind researchers to train the artificial neural network to implement what are called inverse dynamics models, which scientists believe our brains use to guide movement. When we reach for a cup of coffee, for example, our brain quickly calculates the trajectory our arm should follow and translates this into motor commands.
Similarly, based on data from actual rats, the network was fed a reference trajectory of the desired movement and learned to produce the forces to generate it. This allowed the virtual rat to imitate a diverse range of behaviors, even ones it hadnt been explicitly trained on.
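As a toy illustration of what an inverse dynamics model computes, the sketch below fits a controller for a one-dimensional point mass: given the current state and a desired next state from a reference trajectory, it outputs the force that would produce that transition. A least-squares fit on a linear toy system stands in here for the deep reinforcement learning and biomechanical rat model used in the actual study.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, mass = 0.01, 1.0

def step(pos, vel, force):
    """Simple point-mass physics: apply a force for one timestep."""
    vel = vel + dt * force / mass
    pos = pos + dt * vel
    return pos, vel

# Collect (current state, desired next state, force) triples by driving the mass randomly.
states, targets, forces = [], [], []
pos, vel = 0.0, 0.0
for _ in range(5000):
    f = rng.normal()
    new_pos, new_vel = step(pos, vel, f)
    states.append([pos, vel])
    targets.append([new_pos, new_vel])
    forces.append(f)
    pos, vel = new_pos, new_vel

# Inverse dynamics: learn force = g(current state, desired next state).
X = np.hstack([np.array(states), np.array(targets)])
w, *_ = np.linalg.lstsq(X, np.array(forces), rcond=None)

# The learned model turns a point on a reference trajectory into a force command.
current, desired = np.array([0.0, 0.0]), np.array([0.002, 0.2])
print("force command:", float(np.concatenate([current, desired]) @ w))
```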
These simulations may launch an untapped area of virtual neuroscience in which AI-simulated animals, trained to behave like real ones, provide convenient and fully transparent models for studying neural circuits, and even how such circuits are compromised in disease.
While Ölveczky's lab is interested in fundamental questions about how the brain works, the platform could be used, as one example, to engineer better robotic control systems.
A next step might be to give the virtual animal autonomy to solve tasks akin to those encountered by real rats.
"From our experiments, we have a lot of ideas about how such tasks are solved, and how the learning algorithms that underlie the acquisition of skilled behaviors are implemented," Ölveczky continued.
"We want to start using the virtual rats to test these ideas and help advance our understanding of how real brains generate complex behavior."
Author: Anne Manning
Source: Harvard
Contact: Anne Manning, Harvard
Image: The image is credited to Google DeepMind
Original Research: Closed access. "A virtual rodent predicts the structure of neural activity across behaviors" by Bence Ölveczky et al. Nature
Abstract
A virtual rodent predicts the structure of neural activity across behaviors
Animals have exquisite control of their bodies, allowing them to perform a diverse range of behaviors. How such control is implemented by the brain, however, remains unclear. Advancing our understanding requires models that can relate principles of control to the structure of neural activity in behaving animals.
To facilitate this, we built a virtual rodent, in which an artificial neural network actuates a biomechanically realistic model of the rat in a physics simulator.
We used deep reinforcement learning to train the virtual agent to imitate the behavior of freely-moving rats, thus allowing us to compare neural activity recorded in real rats to the network activity of a virtual rodent mimicking their behavior.
We found that neural activity in the sensorimotor striatum and motor cortex was better predicted by the virtual rodents network activity than by any features of the real rats movements, consistent with both regions implementing inverse dynamics.
Furthermore, the network's latent variability predicted the structure of neural variability across behaviors and afforded robustness in a way consistent with the minimal intervention principle of optimal feedback control.
These results demonstrate how physical simulation of biomechanically realistic virtual animals can help interpret the structure of neural activity across behavior and relate it to theoretical principles of motor control.
Read more from the original source:
Virtual Rat with AI Brain Mimics Real Rodent Movement - Neuroscience News
Here’s how Google might benefit as Microsoft rumoured to outsource its best AI to OpenAI – Business Today
A tech CEO has said that Microsoft is planning to hand over its best artificial intelligence tools and software to OpenAI, which could indirectly benefit Google. In an interview with CNBC, Todd McKinnon, CEO of identity security firm Okta, stated that Google is looking to defend its search business, and that not outsourcing its R&D is probably one of its best decisions.
He further stated that the transformers that power generative AI, a type of deep learning model, all came from Google, with DeepMind and its research. Notably, transformers are deep-learning models that learn context, and eventually meaning, by tracking relationships in sequential data, like words.
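As a rough sketch of what "tracking relationships in sequential data" means mechanically, the snippet below implements the core transformer operation, scaled dot-product self-attention, in plain numpy: every position in a sequence is re-expressed as a similarity-weighted mix of all positions. Real transformers add learned query/key/value projections, multiple heads, and feed-forward layers on top of this minimal version.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence of token vectors.

    x: (seq_len, d_model) token embeddings.
    Returns (seq_len, d_model): each position becomes a weighted mix of all
    positions, with weights given by pairwise dot-product similarity.
    """
    d = x.shape[-1]
    # Learned query/key/value projections are omitted to keep the sketch minimal.
    scores = x @ x.T / np.sqrt(d)                              # pairwise relationships
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over the sequence
    return weights @ x                                         # context-mixed representations

tokens = np.random.default_rng(0).normal(size=(5, 16))  # five "words", 16-dim embeddings
print(self_attention(tokens).shape)  # (5, 16)
```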
"I mean, the breakthrough was the research from Google, the transformers, which are the algorithm that all these LLMs [large language models] are using to make these big advancements."
He even emphasised that if Microsoft goes through with this plan, it will end up being a consultancy for OpenAI. Notably, Microsoft's AI assistant Copilot and its AI PCs all come with tech developed by OpenAI. Microsoft has signed a multi-year, multi-billion dollar partnership with the Sam Altman-headed OpenAI.
He said, "It's so bizarre. Imagine working at Microsoft. OpenAI is over there making all the exciting stuff. It's almost like Microsoft is going to turn into a consulting company."
Lately, Google has been struggling to establish AI products such as AI Overviews and its image generator in the market due to major blunders, such as suggesting that users put glue on pizza.
"It's different than other generations of technology, like with personal computers, where it was not necessarily the biggest companies in the world that had the advantage, because the whole thing about personal computers is they were truly disruptive in the sense that they were almost toys," stated McKinnon.
"There's no new AI model that's like a toy. The only reason OpenAI can get it working is the great R&D, for which they needed $10 billion from Microsoft to run the model. That wasn't like a disruptive thing, that was a $10 billion investment," he added.
The rest is here:
Here's how Google might benefit as Microsoft rumoured to outsource its best AI to OpenAI - Business Today
OpenAI, Google DeepMind employees warn of a culture of retaliation in open letter – HR Grapevine
Current and former employees at OpenAI and Google DeepMind have signed an open letter warning of the risks of artificial intelligence (AI), highlighting insufficient whistleblower protections and the threat of retaliation.
The letter is signed confidentially by six current and former OpenAI employees; and publicly by five former OpenAI employees, one former DeepMind employee, and one current DeepMind employee.
The authors state they believe in the potential of AI technology, but believe it also poses numerous risks including the further entrenchment of existing inequalities, manipulation and misinformation, and the loss of control of autonomous AI systems leading to human extinction.
They argue that AI companies have a "financial incentive" to avoid effective AI oversight, and "weak obligation" to share capabilities, limitations, risks, and the adequacy of protective measures.
The outcome, according to the signatories, is that employees are among the few who can hold AI companies to account, but fear doing so due to fear of retaliation.
"Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues," the letter states, arguing that current protections for whistleblowers are insufficient as they are predicated on illegal activity, whereas much of the AI landscape remains unregulated.
"Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry," the current and former AI staffers write.
The workers ask advanced AI companies to commit to four principles they believe would mitigate retaliation against workers who raise concerns, criticism, limitations, and risks associated with AI technology.
The principles include commitments not to enforce any agreement that bans workers from disparagement or criticism and not to retaliate by hindering any vested economic benefit.
Companies are also asked to support a culture of open criticism, achieved in part by setting up an anonymous channel for current and former employees to raise concerns to the company's board, regulators, or an independent body.
The group also recommends companies do not retaliate against current or former workers who resort to publicly sharing concerns if their efforts to do so in other (private) channels have failed.
In response to the letter, a spokesperson told CNN that OpenAI is "proud of our track record providing the most capable and safest AI systems" and believes "in our scientific approach to addressing risk."
The spokesperson added that OpenAI agrees with the need for rigorous debate and pointed to its anonymous integrity hotline and the recent announcement of its Safety and Security Committee.
However, one of the letter's signatories, Daniel Ziegler, who worked for OpenAI from 2018 to 2021, questions the company's commitment to safety and transparency.
"It's really hard to tell from the outside how seriously they're taking their commitments for safety evaluations and figuring out societal harms, especially as there is such strong commercial pressure to move very quickly," he told CNN. "It's really important to have the right culture and processes so that employees can speak out in targeted ways when they have concerns."
OpenAI came under fire earlier in May after Vox reported on the exit of two high-profile safety researchers, which revealed clauses in non-disclosure and non-disparagement agreements (NDAs) that could have cost workers vested equity if they criticized the company.
CEO Sam Altman said he was "genuinely embarrassed," but added the company had never clawed back equity from a current or former worker and would strike the policy from all paperwork of current and future staff.
See the original post:
OpenAI, Google DeepMind employees warn of a culture of retaliation in open letter - HR Grapevine