Which Company Will Ensure AI Safety? OpenAI Or Anthropic

[Photo: An address announcing ChatGPT integration for Bing at Microsoft in Redmond, Washington, on February 7, 2023. Jason Redmond/AFP via Getty Images]

Recent changes to OpenAI's board should give us all more cause for concern about the company's commitment to safety. Its competitor, Anthropic, on the other hand, is taking AI safety seriously by incorporating as a public-benefit corporation (PBC) and establishing a Long-Term Benefit Trust.

Artificial intelligence (AI) presents a real and present danger to society. Large language models (LLMs) like ChatGPT can exacerbate global inequities, be weaponized for large-scale cyberattacks, and evolve in ways that no one can predict or control.

When Sam Altman was ousted from OpenAI in November, the organization hinted that the firing was related to his neglect of AI safety. Those questions were largely quieted when Altman was rehired, and he and other executives carefully managed the messaging to keep the company's reputation intact.

Yet the debacle should give pause to those concerned about the potential harms of AI. Not only did Altman's rehiring reveal the soft power he holds over the company, but the new board members' profiles suggest a more singular focus on profits than their predecessors'. The changes may reassure customers and investors of OpenAI's ability to profitably scale ChatGPT, but they should raise doubts about OpenAI's commitment to its stated purpose: to ensure that artificial general intelligence benefits all of humanity.

OpenAI is a capped-profit company owned by a non-profit, a structure Altman has claimed should allay the public's fears. Yet, as I argued in an earlier article, in spite of this ownership structure OpenAI has acted as any for-profit company would.

There is, however, an alternative ownership and governance model that appears more effective for developing AI safely. Anthropic, a significant competitor in generative AI, has baked safety into its organizational structure and activities. What makes the comparison to OpenAI salient is that Anthropic was founded by two executives who left the AI giant over concerns about its commitment to safety.

Siblings Dario and Daniela Amodei left their executive positions at OpenAI to launch Anthropic in 2021. Dario had led the team that developed OpenAI's GPT-2 and GPT-3 models. When asked in 2023 why he left, he could credibly point to the lack of attention OpenAI paid to safety, responsibility, and controllability in developing its chatbots, especially in the wake of Microsoft's $1 billion investment, which gave Microsoft a 49% stake in OpenAI LLC.

Anthropic's approach to large language models and AI safety has attracted significant investment. In December 2023, Anthropic was in talks to raise $750 million in funding at an $18.4 billion valuation.

In establishing Anthropic, the company's founders paid careful attention to its ownership and governance structure, especially after seeing things that were deeply amiss at OpenAI. It's the contrast between the two firms' approaches that makes OpenAI's claims to AI safety feel even more like rhetoric than reality.

OpenAI Inc. is a non-profit organization that owns a capped-profit company (OpenAI LLC), which is the entity most of us think of when we say OpenAI. I describe the details of OpenAI's capped-profit model in a previous Forbes.com article. There are many open questions about how the capped-profit model works, as the company appears to have been intentionally discreet. And the lines become even blurrier as Altman courts investors to buy still more shares of OpenAI LLC.

Recent events have exacerbated these concerns. Before the November turmoil, OpenAI was governed by a six-member board: three insiders (co-founder and CEO Sam Altman, co-founder and President Greg Brockman, and Chief Scientist Ilya Sutskever) and three outsiders (Quora co-founder Adam D'Angelo, RAND Corporation scientist Tasha McCauley, and Helen Toner, director of strategy at Georgetown University's Center for Security and Emerging Technology). Both Toner and McCauley subscribed to effective altruism, a movement that recognizes the risks AI poses to humanity.

Altman's firing and rehiring, along with the departure of five of the six board members, revealed what little power the non-profit board held over Altman and OpenAI's activities. Even though the board had the authority to dismiss Altman, the events showed that OpenAI's staff and the investors in the for-profit company held enormous influence over the actions of its non-profit board.

The new voting board members include former Salesforce co-CEO Bret Taylor (chair) and former U.S. Treasury Secretary and strong deregulation proponent Larry Summers, along with a non-voting member from Microsoft, Dee Templeton. This group suggests a far greater concern for profits than for AI safety. And even though these board members were chosen because they were seen as independent thinkers with the stature to stand up to the CEO, there is no guarantee that this will be the case. Ultimately, the CEO and investors have a significant say over the direction of the company, which is a major reason Dario and Daniela Amodei set up Anthropic under a more potent ownership structure that elevates AI safety.

The Amodeis were quite serious about baking ethics and safety into their business after seeing the warning signs at OpenAI. They named their company Anthropic to signal that humans (anthro-) are at the center of the AI story and should guide its progress. More than that, they registered Anthropic as a public-benefit corporation (PBC) in Delaware, joining a rather small group of roughly 4,000 companies, including Patagonia, Ben & Jerry's, and Kickstarter, that are committed not only to their shareholders and stakeholders but also to the public good.

A public-benefit corporation requires the company's board to balance private and public interests and to report regularly to its owners on how the company has promoted its public benefit. Failure to comply with these requirements can trigger shareholder litigation. Unlike OpenAI's non-profit structure, the public-benefit corporation structure has real teeth.

While most companies consider public-benefit status sufficient to signal their commitment to both profits and society, Anthropic's executives believed otherwise. They wrote in a corporate blog post that PBC status was not enough because it does not make the corporation's directors directly accountable to other stakeholders or align their incentives with the interests of the general public. In a world where technological innovation is rapid, transformative, and potentially hazardous, they felt additional measures were needed.

As a result, the Amodeis paired Anthropic's PBC status with a Long-Term Benefit Trust (LTBT). This purpose trust gave five trustees Class T shares, which offer a modest financial benefit but control over appointing and dismissing board members. Anthropic's trustees select board members based on their willingness and ability to act in accordance with the corporation's stated purpose: the responsible development and maintenance of advanced AI for the long-term benefit of humanity.

This approach stands in direct contrast to the way most for-profit and non-profit organizations staff their boards. Existing board members decide whom to invite onto (or dismiss from) the board, often based on personal relationships. Membership on for-profit boards often carries significant status and compensation, along with the opportunity to network with other wealthy or powerful people. Because incumbent members choose whom to invite, it is not surprising to see tight interlocks form among the members of different boards, creating conflicts of interest and power plays. John Loeber has illustrated a number of these conflicts arising in OpenAI's short eight-year history.

Anthropic's LTBT, on the other hand, ensures that board members remain focused on the company's purpose, not simply profits, and that major investors in Anthropic, like Amazon and Google, can contribute to building the company without steering the ship. "Our corporate governance structure remains unchanged," Anthropic wrote after the Amazon investments, "with the Long Term Benefit Trust continuing to guide Anthropic in accordance with our Responsible Scaling Policy."

Anthropic appears to have created the Long-Term Benefit Trust structure itself, although it may have been modeled on structures established by other companies, such as Patagonia. When Yvon Chouinard, Patagonia's founder and former CEO, set up the Patagonia Purpose Trust, he ensured the trust could control the company to uphold his values, protecting the natural environment in perpetuity.

OpenAI has written much on its website about its commitment to developing safe and beneficial artificial general intelligence. But it says very little about how it translates those statements into policies and practices.

Anthropic, on the other hand, has been transparent about its approach to AI safety. It has, for example, established numerous teams that tackle AI safety concerns, including Alignment, Assurance, Interpretability, Security, Societal Impacts, and Trust & Safety. It also employs a team that ensures its Acceptable Use Policy (AUP) and Terms of Service (ToS) are properly enforced, tracking how customers use its products to ensure they do not violate the AUP.

The company has also developed an in-house framework, AI Safety Levels (ASL), for addressing catastrophic risks. The framework limits the scaling and deployment of new models when their capabilities outstrip the company's ability to comply with its safety procedures. Anthropic also invests heavily in safety research and makes its research, protocols, and artifacts freely available.
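
To make that gating logic concrete, here is a minimal sketch in Python. The level numbers, evaluation fields, and threshold are hypothetical illustrations of the idea, not Anthropic's actual policy, which is a written commitment rather than a code check.

```python
from dataclasses import dataclass

# Hypothetical illustration of the ASL gating idea: a model may only be
# scaled or deployed while its evaluated capability level stays within
# the safety level the lab has certified safeguards for.

CERTIFIED_SAFETY_LEVEL = 2  # assumed: highest level current safeguards cover


@dataclass
class ModelEvaluation:
    name: str
    capability_level: int  # hypothetical result of dangerous-capability evals


def may_scale_or_deploy(evaluation: ModelEvaluation) -> bool:
    """Block further scaling or deployment until safety measures catch up."""
    return evaluation.capability_level <= CERTIFIED_SAFETY_LEVEL


if __name__ == "__main__":
    print(may_scale_or_deploy(ModelEvaluation("model-a", capability_level=2)))  # True
    print(may_scale_or_deploy(ModelEvaluation("model-b", capability_level=3)))  # False: pause
```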

Another key difference between OpenAI and Anthropic is that the latter has baked safety into the design of its LLM. Most LLMs, such as OpenAI's ChatGPT series, rely on Reinforcement Learning from Human Feedback (RLHF), which asks humans to choose between pairs of AI responses based on their helpfulness or harmfulness. But people make mistakes and can consciously or unconsciously inject their biases, and these models are scaling so rapidly that humans can't keep up with these controls.
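
To see what those pairwise human choices actually train, here is a minimal Python sketch of the pairwise (Bradley-Terry) loss commonly used to fit RLHF reward models. The reward values are made up for illustration, and this is one standard formulation, not OpenAI's exact training code.

```python
import math


def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise reward-model loss: small when the human-preferred response
    scores higher than the rejected one, large when the ranking disagrees."""
    # Computes -log(sigmoid(r_chosen - r_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))


# A labeler preferred response A over response B; training nudges the
# reward model until it ranks each pair the way the human did.
print(preference_loss(reward_chosen=1.2, reward_rejected=0.3))  # ~0.34: ranking agrees
print(preference_loss(reward_chosen=0.1, reward_rejected=0.9))  # ~1.17: ranking disagrees
```

Aggregated over millions of comparisons, any mistakes or biases in those human choices propagate directly into the reward model, which is precisely the weakness described above.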

Anthropic took a different approach, which it calls Constitutional AI. It encodes into its LLMs a guiding constitution intended to avoid toxic or discriminatory outputs, avoid helping a human engage in illegal or unethical activities, and broadly create an AI system that is helpful, honest, and harmless. The current constitution draws on a range of sources representing Western and non-Western perspectives, including the UN Declaration of Human Rights and principles proposed by Anthropic's own and other AI research labs.
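
The published Constitutional AI recipe includes a critique-and-revision loop in which the model improves its own draft against each principle. Here is a minimal Python sketch of that loop, assuming `call_llm` as a hypothetical stand-in for any LLM completion call; the single principle shown paraphrases the spirit of the constitution rather than quoting it.

```python
# Sketch of the critique-and-revision loop behind Constitutional AI.
# `call_llm` is a hypothetical placeholder, stubbed so the sketch runs;
# in practice it would be wired to a real model API.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
]


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; returns a canned string for illustration."""
    return f"[model output for: {prompt[:40]}...]"


def revise_with_constitution(draft: str) -> str:
    # For each principle, ask the model to critique its own draft, then
    # rewrite the draft in light of that critique.
    for principle in CONSTITUTION:
        critique = call_llm(
            f"Principle: {principle}\nResponse: {draft}\n"
            "Point out any way the response violates the principle."
        )
        draft = call_llm(
            f"Principle: {principle}\nResponse: {draft}\nCritique: {critique}\n"
            "Rewrite the response so it complies with the principle."
        )
    return draft


print(revise_with_constitution("First-pass answer to a user question."))
```

In the published recipe, the revised answers then become fine-tuning data, and a later phase replaces human preference labels with AI feedback judged against the same principles.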

Perhaps more encouraging than Anthropic's extensive measures to build AI safety into its foundation is the company's acknowledgment that these measures will need to evolve and change. The company recognizes the fallibility of its constitution and expects to involve more voices over time to help overcome its inadequacies.

With the current arms race toward artificial general intelligence (AGI), it is clear that AI's capabilities could quickly outstrip any single company's ability to control them, regardless of the company's governance and ownership structure. Certainly, there is much skepticism that AI can be built safely, including from the many AI-company leaders who have called for a pause on AI development. Even the "godfather of AI," Geoffrey Hinton, left Google to speak more openly about the risks of AI.

But if the horses have indeed left the barn, my bet is on Anthropic to produce AGI safely, because of its ownership and governance structure and because it is baking safety into its practices and policies. Not only does Anthropic provide a blueprint for the safe and human-centered development of AI, but its Long-Term Benefit Trust structure should inspire companies in other industries to organize in ways that bake ethics, safety, and social responsibility into their pursuit of profits.

Tomorrow's business can no longer operate under the same principles as yesterday's. It not only needs to create economic value; it needs to do so by working with society and within planetary boundaries.

I have been researching and teaching business sustainability for 30 years as a professor at the Ivey Business School (Canada). Through my work at the Network for Business Sustainability and Innovation North, I offer insights into what it takes to lead tomorrow's companies.
