OpenAI, Google DeepMind employees warn of a culture of retaliation in open letter

Current and former employees at OpenAI and Google DeepMind have signed an open letter warning of the risks of artificial intelligence (AI), highlighting insufficient whistleblower protections and the threat of retaliation.

The letter is signed anonymously by six current and former OpenAI employees, and publicly by five former OpenAI employees, one former DeepMind employee, and one current DeepMind employee.

The authors state that they believe in the potential of AI technology, but warn that it also poses numerous risks, including the further entrenchment of existing inequalities, manipulation and misinformation, and the loss of control of autonomous AI systems, potentially leading to human extinction.

They argue that AI companies have a "financial incentive" to avoid effective AI oversight and only a "weak obligation" to share information about capabilities, limitations, risks, and the adequacy of protective measures.

The result, according to the signatories, is that employees are among the few who can hold AI companies to account, yet many are reluctant to do so for fear of retaliation.

"Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues," the letter states, arguing that current whistleblower protections are insufficient because they are predicated on illegal activity, whereas much of the AI landscape remains unregulated.

"Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry," the current and former AI staffers write.

The workers ask advanced AI companies to commit to four principles they believe would mitigate retaliation against workers who raise concerns and criticism about the risks and limitations of AI technology.

The principles include commitments not to enforce any agreement that bans workers from disparaging or criticizing the company, and not to retaliate by hindering any vested economic benefit.

Companies are also asked to support a culture of open criticism, achieved in part by setting up an anonymous channel for current and former employees to raise concerns to the company's board, regulators, or an independent body.

The group also recommends that companies do not retaliate against current or former workers who share concerns publicly after their efforts to raise them through other (private) channels have failed.

In response to the letter, a spokesperson told CNN that OpenAI is "proud of our track record providing the most capable and safest AI systems" and believes in its "scientific approach to addressing risk."

The spokesperson added that OpenAI agrees with the need for rigorous debate, and pointed to its anonymous integrity hotline and the recent announcement of its Safety and Security Committee.

However, one of the letter's signatories, Daniel Ziegler, who worked at OpenAI from 2018 to 2021, questions the company's commitment to safety and transparency.


"It's really hard to tell from the outside how seriously they're taking their commitments for safety evaluations and figuring out societal harms, especially as there are such strong commercial pressures to move very quickly," he told CNN. "It's really important to have the right culture and processes so that employees can speak out in targeted ways when they have concerns."

OpenAI came under fire earlier in May after Vox reported on the exit of two high-profile safety researchers and revealed clauses in non-disclosure and non-disparagement agreements (NDAs) that could have cost departing workers vested equity if they criticized the company.

CEO Sam Altman said he was "genuinely embarrassed," but added that the company had never clawed back equity from a current or former worker and would strike the policy from all paperwork for current and future staff.

