AI expert warns Elon Musk-signed letter doesn’t go far enough, says ‘literally everyone on Earth will die’ – Fox News

An artificial intelligence expert with more than two decades of experience studying AI safety said an open letter calling for a six-month moratorium on developing powerful AI systems does not go far enough.

Eliezer Yudkowsky, a decision theorist at the Machine Intelligence Research Institute, wrote in a recent op-ed that the six-month "pause" on developing "AI systems more powerful than GPT-4" called for by Tesla CEO Elon Musk and hundreds of other innovators and experts understates the "seriousness of the situation." He would go further, implementing a moratorium on new large AI learning models that is "indefinite and worldwide."

The letter, issued by the Future of Life Institute and signed by more than 1,000 people, including Musk and Apple co-founder Steve Wozniak, argued that safety protocols need to be developed by independent overseers to guide the future of AI systems.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. Yudkowsky believes this is insufficient.

ELON MUSK, APPLE CO-FOUNDER, OTHER TECH EXPERTS CALL FOR PAUSE ON 'GIANT AI EXPERIMENTS': 'DANGEROUS RACE'

OpenAI ChatGPT seen on mobile with AI Brain seen on screen in Brussels on Jan. 22, 2023. (Jonathan Raa/NurPhoto via Getty Images)

"The key issue is not 'human-competitive' intelligence (as the open letter puts it); its what happens after AI gets to smarter-than-human intelligence," Yudkowskywrote for Time.

"Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die," he asserts. "Not as in 'maybe possibly some remote chance,' but as in 'that is the obvious thing that would happen.'"

ARTIFICIAL INTELLIGENCE GODFATHER ON AI POSSIBLY WIPING OUT HUMANITY: 'IT'S NOT INCONCEIVABLE'

OpenAI CEO Sam Altman speaks during a keynote address announcing ChatGPT integration for Bing at Microsoft in Redmond, Washington, on Feb. 7, 2023. OpenAI's new GPT-4 learning model is the most advanced AI system yet developed, capable of generating, editing and iterating with users on creative and technical writing tasks. (JASON REDMOND/AFP via Getty Images)

For Yudkowsky, the problem is that an AI more intelligent than human beings might disobey its creators and would not care about human life. Do not think "Terminator." Instead, "Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers in a world of creatures that are, from its perspective, very stupid and very slow," he writes.

Yudkowsky warns that there is no proposed plan for dealing with a superintelligence that decides the most optimal solution to whatever problem it is tasked with solving is annihilating all life on Earth. He also raises concerns that AI researchers do not actually know if learning models have become "self-aware," and whether it is ethical to own them if they are.

DEMOCRATS AND REPUBLICANS COALESCE AROUND CALLS TO REGULATE AI DEVELOPMENT: 'CONGRESS HAS TO ENGAGE'

Tesla, SpaceX and Twitter CEO Elon Musk and more than 1,000 tech leaders and artificial intelligence experts are calling for a temporary pause on the development of AI systems more powerful than OpenAI's GPT-4, warning of risks to society and civilization. (AP Photo/Susan Walsh, File)

Six months is not enough time to come up with a plan, he argues: "It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today's capabilities. Solving safety of superhuman intelligence (not perfect safety, safety in the sense of 'not killing literally everyone') could very reasonably take at least half that long."

Instead, Yudkowsky proposes international cooperation, even between rivals like the U.S. and China, to shut down development of powerful AI systems. He says this is more important than "preventing a full nuclear exchange," and that countries should even consider using nuclear weapons "if that's what it takes to reduce the risk of large AI training runs."


"Shut it all down," Yudkowskywrites. "Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries."

Yudkowsky's drastic warning comes as artificial intelligence software continues to grow in popularity. OpenAI's ChatGPT is a recently released artificial intelligence chatbot that has shocked users by being able to compose songs, create content and even write code.

"We've got to be careful here," OpenAI CEO Sam Altman said about his company's creation earlier this month. "I think people should be happy that we are a little bit scared of this."

Fox News' Andrea Vacchiano contributed to this report.
