Forget about the AI apocalypse. The real dangers are already here

Two weeks after members of Congress questioned OpenAI CEO Sam Altman about the potential for artificial intelligence tools to spread misinformation, disrupt elections and displace jobs, he and others in the industry went public with a much more frightening possibility: an AI apocalypse.

Altman, whose company is behind the viral chatbot tool ChatGPT, joined Google DeepMind CEO Demis Hassabis, Microsoft's CTO Kevin Scott and dozens of other AI researchers and business leaders in signing a one-sentence letter last month stating: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

The stark warning was widely covered in the press, with some suggesting it showed the need to take such apocalyptic scenarios more seriously. But it also highlights an important dynamic in Silicon Valley right now: Top executives at some of the biggest tech companies are simultaneously telling the public that AI has the potential to bring about human extinction while also racing to invest in and deploy this technology into products that reach billions of people.

The dynamic has played out elsewhere recently, too. Tesla CEO Elon Musk, for example, said in a TV interview in April that AI could lead to "civilization destruction." But he remains deeply involved in the technology through investments across his sprawling business empire and has said he wants to create a rival to the AI offerings from Microsoft and Google.

Left to right: Microsoft's CTO Kevin Scott, OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis. - Joy Malone/David Ryder/Bloomberg/Joel Saget/AFP/Getty Images

Some AI industry experts say that focusing attention on far-off scenarios may distract from the more immediate harms that a new generation of powerful AI tools can cause to people and communities, including spreading misinformation, perpetuating biases and enabling discrimination in various services.

"Motives seemed to be mixed," Gary Marcus, an AI researcher and New York University professor emeritus who testified before lawmakers alongside Altman last month, told CNN. Some of the execs are likely "genuinely worried about what they have unleashed," he said, but others may be trying to "focus attention on abstract possibilities to detract from the more immediate possibilities."

Representatives for Google and OpenAI did not immediately respond to a request for comment. In a statement, a Microsoft spokesperson said: "We are optimistic about the future of AI, and we think AI advances will solve many more challenges than they present, but we have also been consistent in our belief that when you create technologies that can change the world, you must also ensure that the technology is used responsibly."

For Marcus, a self-described critic of AI hype, the biggest immediate threat from AI is "the threat to democracy from the wholesale production of compelling misinformation."

Generative AI tools like OpenAI's ChatGPT and Dall-E are trained on vast troves of data online to create compelling written work and images in response to user prompts. With these tools, for example, one could quickly mimic the style or likeness of public figures in an attempt to create disinformation campaigns.
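To make the mechanics concrete: "responding to user prompts" in practice usually means a short API call. Below is a minimal sketch using OpenAI's official Python client (v1+); the model name and prompt are illustrative placeholders, and an API key is assumed to be set in the OPENAI_API_KEY environment variable.

```python
# Minimal sketch of prompting a generative text model.
# Assumes the official "openai" Python package (v1+) and an
# OPENAI_API_KEY set in the environment. Model and prompt are
# illustrative, not specific to the reporting above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; any chat-capable model works
    messages=[
        {"role": "user", "content": "Write two sentences about AI policy."}
    ],
)

# The model returns fluent text conditioned on the prompt above.
print(response.choices[0].message.content)
```

The point is simply how low the barrier is: a few lines of code turn any prompt into fluent, human-sounding text at scale.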

In his testimony before Congress, Altman also said that the potential for AI to be used to manipulate voters and target disinformation was among "my areas of greatest concern."

Even in more ordinary use cases, however, there are concerns. The same tools have been called out for offering wrong answers to user prompts, outright hallucinating responses and potentially perpetuating racial and gender biases.

Gary Marcus, professor emeritus at New York University, right, listens to Sam Altman, chief executive officer and co-founder of OpenAI, speak during a Senate Judiciary Subcommittee hearing in Washington, DC, US, on Tuesday, May 16, 2023. Congress is debating the potential and pitfalls of artificial intelligence as products like ChatGPT raise questions about the future of creative industries and the ability to tell fact from fiction. - Eric Lee/Bloomberg/Getty Images

Emily Bender, a professor at the University of Washington and director of its Computational Linguistics Laboratory, told CNN that some companies may want to divert attention from the bias baked into their data, and also from concerning claims about how their systems are trained.

Bender cited intellectual property concerns with some of the data these systems are trained on, as well as allegations that companies outsource the work of sifting through some of the worst parts of the training data to low-paid workers abroad.

"If the public and the regulators can be focused on these imaginary science fiction scenarios, then maybe these companies can get away with the data theft and exploitative practices for longer," Bender told CNN.

Regulators may be the real intended audience for the tech industry's doomsday messaging.

As Bender puts it, execs are essentially saying: "This stuff is very, very dangerous, and we're the only ones who understand how to rein it in."

Judging from Altman's appearance before Congress, this strategy might work. Altman appeared to win over Washington by echoing lawmakers' concerns about AI, a technology that many in Congress are still trying to understand, and by offering suggestions for how to address it.

This approach to regulation would be "hugely problematic," Bender said. It could give the industry influence over the regulators tasked with holding it accountable and also leave out the voices and input of other people and communities experiencing negative impacts of this technology.

"If the regulators kind of orient towards the people who are building and selling the technology as the only ones who could possibly understand this, and therefore can possibly inform how regulation should work, we're really going to miss out," Bender said.

Bender said she tries, at every opportunity, to tell people that "these things seem much smarter than they are." As she put it, this is because "we are as smart as we are," and the way we make sense of language, including responses from AI, "is actually by imagining a mind behind it."

Ultimately, Bender put forward a simple question for the tech industry on AI: If they honestly believe that this could be bringing about human extinction, then why not just stop?
