Why OpenAI’s nonprofit mission to build AGI is under fire again | The AI Beat – VentureBeat

In the new lawsuit filed by Elon Musk last week against OpenAI, its CEO Sam Altman, and its president Greg Brockman, the word "nonprofit" appears 17 times. "Board" comes up a whopping 62 times. "AGI"? 66 times.

The lawsuit's claims, which include breach of contract, breach of fiduciary duty, and unfair competition, all circle around the idea that OpenAI put profits and commercial interests in developing artificial general intelligence (AGI) ahead of the duty of its nonprofit arm (under the leadership of its nonprofit board) to protect the public good.

This is an issue, of course, that exploded after OpenAI's board suddenly fired Sam Altman on November 17, 2023, followed by massive blowback from investors including Microsoft and hundreds of OpenAI employees posting heart emojis indicating they were on Altman's side. Altman was quickly reinstated, while several OpenAI board members got the boot.

Plenty of people have pointed out that Musk, as an OpenAI co-founder who is now competing with the company with his own startup X.ai, is hardly an objective party. But I'm far more interested in one important question: How did nerdy nonprofit governance issues tied to the rise of artificial general intelligence spark a legal firestorm?

Well, it all winds back to the beginning of OpenAI, which Musk's lawsuit lays out in more detail than we have previously seen: In 2015, Musk, Altman and Brockman joined forces to form a nonprofit AI lab that would try to catch up to Google in the race for AGI, developing it for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits.

But in 2023, the lawsuit claims, Altman, Brockman and OpenAI set the Founding Agreement aflame with flagrant breaches, among them breach of the nonprofit board's fiduciary duty and breach of contract, including what transpired during the days after Altman was fired by the nonprofit board on November 17, 2023, and subsequently reinstated.

Much of the controversy winds back to the fact that OpenAI isn't just any old nonprofit. In fact, I reported on OpenAI's unusual and complex nonprofit/capped-profit structure just a few days before Altman's firing.

In that piece, I pointed to the "Our structure" page on OpenAI's website, which says OpenAI's for-profit subsidiary is "fully controlled" by the OpenAI nonprofit. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to the nonprofit's mission.

Elon Musk's lawsuit, however, shed even more light on the confusing alphabet soup of companies that are parties in the case. While OpenAI, Inc. is the nonprofit, OpenAI, LP; OpenAI LLC; OpenAI GP, LLC; OpenAI Opco, LLC; OpenAI Global, LLC; OAI Corporation, LLC and OpenAI Holdings, LLC all appear to be for-profit subsidiaries.

As I wrote in November, according to OpenAI, the members of its nonprofit board of directors will determine when the company has attained AGI, which it defines as "a highly autonomous system that outperforms humans at most economically valuable work." Because the for-profit arm is legally bound to pursue the nonprofit's mission, once the board decides AGI has been reached, such a system will be "excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology."

But as the very definition of AGI is far from agreed upon, what does it mean to have a half-dozen people deciding whether or not AGI has been reached? What does the timing and context of that possible future decision mean for OpenAI's biggest investor, Microsoft, which is now a non-voting member of the nonprofit board? Isn't that a massive conflict of interest?

Musk certainly seems to think so. The lawsuit says: "Mr. Altman and Mr. Brockman, in concert with Microsoft, exploited Microsoft's significant leverage over OpenAI, Inc. and forced the resignation of a majority of OpenAI, Inc.'s Board members, including Chief Scientist Ilya Sutskever. Mr. Altman was reinstated as CEO of OpenAI, Inc. on November 21. On information and belief, the new Board members were hand-picked by Mr. Altman and blessed by Microsoft. The new Board members lack substantial AI expertise and, on information and belief, are ill-equipped by design to make an independent determination of whether and when OpenAI has attained AGI, and hence when it has developed an algorithm that is outside the scope of Microsoft's license."

Musk is not the first to push back on OpenAI's nonprofit status. "I think the story that Musk tells in his complaint validates and deepens the case we're making in California," said Robert Weissman, president of Public Citizen, a nonprofit consumer advocacy organization that wrote a letter on January 9 requesting that the California Attorney General investigate OpenAI's nonprofit status. The letter raised concerns that OpenAI "may have failed to carry out its non-profit purposes and is instead acting under the effective control of its for-profit subsidiary affiliate."

And legal experts I spoke to say that Musk has a strong point in this regard: James Denaro, attorney and chief technologist at the Washington, D.C.-based CipherLaw, told me that Musk "does make a strong policy argument that if a company can launch as a non-profit working for the public benefit, collect pre-tax donations, and then transfer the IP into a for-profit venture, this would be a highly problematic paradigm shift for technology companies."

"Musk's lawsuit is not surprising because of the nonprofit vs. profit structural issues that have plagued OpenAI," added Anat Alon-Beck, associate professor at Case Western Reserve University School of Law, who focuses on corporate law and governance and recently wrote a paper about shadow governance by board observers at tech companies.

According to the paper: "It was not until November 2023 that mainstream media started paying more attention to the concept of board observers, after OpenAI, the corporate entity that brought the world ChatGPT, gave Microsoft a board observer seat following the drama in OpenAI's boardroom. But what the mainstream media did not explore in its coverage of the board observer concept was its seemingly less interesting nature as a non-voting board membership, which was an important element in the complex relationship between OpenAI and Microsoft. This signaled deepening ties between the two companies that also eventually got the attention of the DOJ and FTC, as well as the influential role of CVC [corporate venture capital] in funding and governing the research and development of OpenAI."

"This lawsuit was due because of OpenAI's structure," she said, adding that OpenAI "should be worried."

"You should always be worried, because when you pick such a weird structure like OpenAI did, there's uncertainty," she said. "In law, when we're representing large companies, we want to have efficiency, low transaction costs and predictability. We don't know how courts are gonna look at fiduciary duties. We don't know, because the courts haven't decided on that. I'm sorry, but it's a bad structure. They could have accomplished [what they wanted] using a different type of structure."
