The AGI Lawsuit: Elon Musk vs. OpenAI and the Quest for Artificial General Intelligence that Benefits Humanity

By Dennis Crouch

Elon Musk was instrumental in the initial creation of OpenAI as a nonprofit with the vision of responsibly developing artificial intelligence (AI) to benefit humanity and to prevent monopolistic control over the technology. After ChatGPT went viral in late 2022, the company began focusing more on revenue and profits. It added a major for-profit subsidiary and completed a $13+ billion deal with Microsoft entitling the industry giant to a large share of OpenAI's future profits and a seat on the Board.

In a new lawsuit, Elon Musk alleges that OpenAI and its CEO Sam Altman have breached the organization's founding vision. [Musk vs OpenAI].

Musk contributed over $44 million to OpenAI between 2015 and 2020. He alleges OpenAI induced these large donations through repeated promises in its founding documents and communications that it would remain a public-spirited non-profit developing artificial general intelligence (AGI) cautiously and for the broad benefit of humanity. Musk claims he relied on these assurances that OpenAI would not become controlled by a single corporation when deciding to provide essential seed funding. With OpenAI now increasingly aligned with Microsoft's commercial interests, Musk argues the results of his financial contributions did not achieve their promised altruistic purpose.

Perhaps the most interesting portion of the debate involves allegations that OpenAI's latest language model, GPT-4, already constitutes AGI, meaning it has human-level intelligence across a range of tasks. Musk further claims OpenAI has secretly developed an even more powerful AGI system, known as Q*, that shows the ability to chain logical reasoning beyond human capability, arguably reaching artificial superintelligence (ASI) or at least strong AGI.

The complaint discusses some of the potential risks of AGI:

Mr. Musk has long recognized that AGI poses a grave threat to humanity, perhaps the greatest existential threat we face today. His concerns mirrored those raised before him by luminaries like Stephen Hawking and Sun Microsystems founder Bill Joy. Our entire economy is based around the fact that humans work together and come up with the best solutions to a hard task. If a machine can solve nearly any task better than we can, that machine becomes more economically useful than we are. As Mr. Joy warned, with strong AGI, "the future doesn't need us." Mr. Musk publicly called for a variety of measures to address the dangers of AGI, from voluntary moratoria to regulation, but his calls largely fell on deaf ears.

Complaint at paragraph 18. In other words, Musk argues advanced AI threatens to replace and surpass humans across occupations if its intelligence becomes more generally capable. This could render many jobs and human skills obsolete, destabilizing economies and society by making people less essential than automated systems.

One note here for readers is to recognize the important and fundamental differences between AGI and consciousness. AGI refers to the ability of an AI system to perform any intellectual task that a human can do, focusing on problem-solving, memory utilization, creative tasks, and decision-making capabilities. Consciousness, on the other hand, involves self-awareness, subjective experiences, and emotional understanding that are not solely linked to intelligence levels. AGI, the focus of the lawsuit here, poses important risks to our human societal structure. But it is relatively small potatoes compared to consciousness, which raises serious ethical considerations as the AI moves well beyond a human tool.

The complaint makes it clear Musk believes OpenAI has already achieved AGI with GPT-4, but AGI is a tricky thing to measure. Fascinatingly, whether Musk wins may hinge on a San Francisco jury deciding whether programs like GPT-4 and Q* legally constitute AGI. So how might jurors go about making this monumental determination? There are a few approaches they could take:

A 2023 article from a group of China-based AI researchers proposes what they call the Tong test for assessing AGI. An important note from the article is that AGI is not a simple yes/no threshold but rather something that should be quantified across a wide range of dimensions. The article proposes five dimensions: vision, language, reasoning, motor skills, and learning. The proposal would also measure the degree to which an AI system exhibits human values in a self-driven manner, as sketched below.
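
To make that multidimensional idea concrete, here is a minimal Python sketch of how a Tong-style scorecard might be structured. The five dimension names come from the article's summary; the example scores and the aggregation rule are purely hypothetical illustrations, not the researchers' actual metric.

from dataclasses import dataclass

# A Tong-style scorecard: the five capability dimensions named in the
# article, plus a value-alignment score. All values are hypothetical
# placeholders on a 0.0-1.0 scale.
@dataclass
class TongStyleScorecard:
    vision: float
    language: float
    reasoning: float
    motor_skills: float
    learning: float
    value_alignment: float  # degree of self-driven, human-aligned behavior

    def aggregate(self) -> float:
        # Illustrative aggregation only: average the capability
        # dimensions, then discount by value alignment. The actual Tong
        # test does not necessarily combine scores this way.
        capability = (self.vision + self.language + self.reasoning
                      + self.motor_skills + self.learning) / 5
        return capability * self.value_alignment

# Hypothetical profile: strong language and reasoning, weak embodiment.
scores = TongStyleScorecard(vision=0.6, language=0.9, reasoning=0.7,
                            motor_skills=0.1, learning=0.5,
                            value_alignment=0.4)
print(f"Aggregate score: {scores.aggregate():.2f}")  # prints 0.22

Framed this way, a jury would see AGI not as a binary verdict but as a profile of scores, which fits the article's point that AGI is not a simple yes/no threshold.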

I can imagine expert testimony in the case, with Musk's lawyers presenting key examples showing the wide applicability of GPT-4 and OpenAI's own lawyers showing the system repeatedly failing. Although this approach is obviously not a true measure of general intelligence or an ideal way to make such an important decision, it does highlight the challenges inherent in trying to pass judgment on both a complex machine system and our measures of human intelligence. At its best, the adversarial litigation process, with its proof and counterproof, reflects a form of scientific method with the benefit of actually arriving at a legally binding answer.

Understanding the Inner Workings: OpenAI's latest language models keep their internal designs largely opaque, similar to the human brain. Because of our thick skulls and complex neural arrangement, the vast majority of human neurologic and intelligence testing is functional, focusing on the skills and abilities of the individual rather than directly assessing the inner workings. It is easy to assume a parallel form of analysis for AI intelligence and capability, especially because human results serve as the standard for measuring AGI. But the functional approach to human understanding is a feature of our unique biology and technology level. AI systems are designed and built by humans and do not have the natural constraints dictated by evolution. And, if transparency and understanding are the goal, they can be directly designed into the system using transparent design principles. The current black-box approach at OpenAI makes evaluating claims of attaining artificial general intelligence difficult. We cannot peer inside to judge whether displayed abilities reflect true comprehension and reasoning or mere pattern recognition. A key benefit of the litigation system for Elon Musk in this case is that it may force OpenAI to come forward with more internal transparency in order to adequately advocate its position.
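
As a rough illustration of what purely functional, black-box evaluation looks like in practice, consider the short Python sketch below, which probes an opaque model only through its input-output behavior. The ask_model function, its canned answers, and the task list are hypothetical stand-ins, not OpenAI's API or any real benchmark.

from typing import Callable

# Hypothetical stand-in for an opaque model: like a human test subject,
# we observe only prompt -> answer behavior, never the internal computation.
def ask_model(prompt: str) -> str:
    canned = {
        "What is 7 * 8?": "56",
        "Summarize Hamlet in one sentence.": "A prince avenges his father.",
    }
    return canned.get(prompt, "I don't know.")

# Each functional task pairs a prompt with a checker for the answer.
tasks: list[tuple[str, Callable[[str], bool]]] = [
    ("What is 7 * 8?", lambda a: "56" in a),
    ("Summarize Hamlet in one sentence.", lambda a: "prince" in a.lower()),
    ("Prove Fermat's Last Theorem.", lambda a: "don't know" not in a),
]

passed = sum(check(ask_model(prompt)) for prompt, check in tasks)
print(f"Functional evaluation: {passed}/{len(tasks)} tasks passed")
# The limitation discussed above: even a perfect score cannot reveal
# whether the system truly reasons or merely pattern-matches.

The design trade-off is the one the paragraph above describes: a functional harness like this can run against any black box, but it can never distinguish comprehension from memorized pattern matching, which is exactly the evidentiary gap the litigation may force OpenAI to fill.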

What do you think: What should be the legal test for artificial general intelligence?

