Address announcing ChatGPT integration for Bing at Microsoft in Redmond, Washington, on February 7, 2023. (Photo by JASON REDMOND/AFP via Getty Images)
Recent changes in OpenAI's board should give us all more cause for concern about the company's commitment to safety. Its competitor Anthropic, on the other hand, is taking AI safety seriously by incorporating as a public-benefit corporation (PBC) governed by a Long-Term Benefit Trust.
Artificial intelligence (AI) presents a real and present danger to society. Large language models (LLMs) like ChatGPT can exacerbate global inequities, be weaponized for large-scale cyberattacks, and evolve in ways that no one can predict or control.
When Sam Altman was ousted from OpenAI in November, the organization hinted that it was related to his neglect of AI safety. However, these questions were largely quieted when Altman was rehired, and he and other executives carefully managed the messaging to keep the company's reputation intact.
Yet the debacle should give pause to those concerned about the potential harms of AI. Not only did Altman's rehiring reveal the soft power he holds over the company, but the profile of the new board members appears to be more singularly focused on profits than that of their predecessors. The changes may reassure customers and investors of OpenAI's ability to profitably scale ChatGPT, but they should raise doubts about OpenAI's commitment to its purpose, which is to ensure that artificial general intelligence benefits all of humanity.
OpenAI is a capped-profit company owned by a non-profit, which Altman has claimed should allay the public's fears. Yet, I argued in an earlier article that in spite of this ownership structure, OpenAI was acting as any for-profit company would.
However, there is an alternative ownership and governance model that seems to be more effective in developing AI safely. Anthropic, a significant competitor in generative AI, has baked safety into its organizational structure and activities. What makes its comparison to OpenAI salient is that it was founded by two executives who departed the AI giant due to concerns about its commitment to safety.
Brother and sister Dario and Daniela Amodei left their executive positions at OpenAI to launch Anthropic in 2021. Dario had been leading the team that developed OpenAI's GPT-2 and GPT-3 models. When asked in 2023 why he left OpenAI, he could credibly point to the lack of attention OpenAI paid to safety, responsibility, and controllability in the development of OpenAI's chatbots, especially in the wake of Microsoft's $1 billion investment in OpenAI, which gave Microsoft a 49% stake in OpenAI LLC.
Anthropic's approach to large language models and AI safety has attracted significant investment. In December 2023, Anthropic was in talks to raise $750 million in funding at an $18.4 billion valuation.
In establishing Anthropic, the company's founders paid careful attention to the ownership and governance structure, especially after seeing things that were deeply amiss at OpenAI. It's the contrast in the two firms' approaches that makes OpenAI's claims to AI safety feel even more like rhetoric than reality.
OpenAI Inc. is a non-profit organization that owns a capped-profit company (OpenAI LLC), which is the entity most of us think of when we say OpenAI. I describe the details of OpenAI's capped-profit model in a previous Forbes.com article. There are many open questions about how the capped-profit model works, as the company has been intentionally discreet. And the lines become even blurrier as Altman courts investors to buy even more shares of OpenAI LLC.
Recent events have exacerbated concerns. Before the November turmoil, OpenAI was governed by a six-member board: three insiders (co-founder and CEO Sam Altman, co-founder and President Greg Brockman, and Chief Scientist Ilya Sutskever) and three outsiders (Quora co-founder Adam D'Angelo, RAND Corporation scientist Tasha McCauley, and Helen Toner, director of strategy at Georgetown University's Center for Security and Emerging Technology). Both Toner and McCauley subscribed to effective altruism, a movement that recognizes the risks of AI to humanity.
Altman's firing and rehiring, with the departure of five of the six board members, revealed what little power the non-profit board held over Altman and OpenAI's activities. Even though the board had the power to dismiss Altman, the events showed that OpenAI's staff and investors in the for-profit company held enormous influence over the actions of its non-profit board.
The new voting board members include former Salesforce co-CEO Bret Taylor (chair) and former U.S. Treasury Secretary and strong deregulation proponent Larry Summers; there is also a non-voting member from Microsoft, Dee Templeton. This group suggests a far greater concern for profits than for AI safety. And even though these board members were chosen because they were seen as independent thinkers with the power to stand up to the CEO, there is no reason to believe that this will be the case. Ultimately, the CEO and investors have a significant say over the direction of the company, which was a major reason why Dario and Daniela Amodei set up Anthropic under a more potent ownership structure that elevates AI safety.
Technology And Human Unity (Getty)
The Amodeis were quite serious about baking ethics and safety into their business after seeing the warning signs at OpenAI. They named their company Anthropic to signal that humans (anthro) are at the center of the AI story and should guide its progress. More than that, they registered Anthropic as a public-benefit corporation (PBC) in Delaware. They join a rather small group of about 4,000 companies, including Patagonia, Ben & Jerry's, and Kickstarter, that are committed not only to their shareholders and stakeholders but also to the public good.
A public-benefit corporation requires the company's board to balance private and public interests and to report regularly to its owners on how the company has promoted its public benefit. Failure to comply with these requirements can trigger shareholder litigation. Unlike OpenAI's non-profit structure, a public-benefit corporation's structure has real teeth.
While most companies believe public-benefit corporation status is sufficient to signal their commitment to both profits and society, Anthropic's executives believed otherwise. They wrote in a corporate blog that PBC status was not enough because it does not make the directors of the corporation directly accountable to other stakeholders or align their incentives with the interests of the general public. In a world where technological innovation is rapid, transformative, and potentially hazardous, they felt additional measures were needed.
As a result, the Amodeis also established a Long-Term Benefit Trust (LTBT) for Anthropic. This purpose trust gave its five trustees Class T shares, which carry a modest financial benefit but confer control over appointing and dismissing board members. Anthropic's trustees select board members based on their willingness and ability to act in accordance with the corporation's purpose stated at incorporation: the responsible development and maintenance of advanced AI for the long-term benefit of humanity.
This approach is in direct contrast to the way most for-profit and non-profit organizations staff their boards. Existing board members decide whom to invite to (or dismiss from) the board, often based on personal relationships. Membership on for-profit boards often brings significant status and compensation, along with the opportunity to network with other high-net-worth or powerful people. Because incumbent board members decide whom to invite, it is not surprising to see tight interlocks form among members of different boards, creating conflicts of interest and power plays. John Loeber has illustrated a number of these conflicts arising in OpenAI's short eight-year history.
Anthropic's LTBT, on the other hand, ensures that board members remain focused on the company's purpose, not simply profits, and that major investors in Anthropic, like Amazon and Google, can contribute to building the company without steering the ship. "Our corporate governance structure remains unchanged," Anthropic wrote after the Amazon investments, "with the Long Term Benefit Trust continuing to guide Anthropic in accordance with our Responsible Scaling Policy."
Anthropic appears to have created this Long-Term Benefit Trust structure itself, although it may have been modeled on structures created by other companies, such as Patagonia. When Yvon Chouinard, Patagonia's founder and former CEO, set up the Patagonia Purpose Trust, he ensured the trust could control the company to uphold Chouinard's values of protecting the natural environment in perpetuity.
OpenAI has written much on its website about its commitment to developing safe and beneficial artificial general intelligence. But it says little about how it translates those statements into policies and practices.
Anthropic, on the other hand, has been transparent about its approach to AI safety. It has, for example, established numerous teams that tackle AI safety concerns, including Alignment, Assurance, Interpretability, Security, Societal Impacts, and Trust & Safety. It also employs a team of people to ensure its Acceptable Use Policy (AUP) and Terms of Service (ToS) are properly enforced, and it tracks how customers use its products to ensure they do not violate the Acceptable Use Policy.
The company also developed an in-house framework called AI Safety Levels (ASL) for addressing catastrophic risks. The framework restricts the scaling and deployment of new models when their capabilities outstrip the company's ability to comply with its safety procedures. Anthropic also invests heavily in safety research and makes its research, protocols, and artifacts freely available.
Another key difference between OpenAI and Anthropic is that the latter has baked safety into the design of its LLM. Most LLMs, including OpenAI's ChatGPT series, rely on Reinforcement Learning from Human Feedback (RLHF), which requires humans to choose between pairs of AI responses based on how helpful or harmful they are. But people make mistakes and can consciously or unconsciously inject their biases, and these models are scaling so rapidly that humans can't keep up with these controls.
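The human-preference step at the heart of RLHF can be sketched in a few lines. This is an illustrative toy, not OpenAI's actual pipeline; all class and parameter names here are hypothetical.

```python
# Toy sketch of RLHF preference-data collection (illustrative only;
# real pipelines gather millions of such labels to train a reward model).
from dataclasses import dataclass, field

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # the response the human labeler preferred
    rejected: str  # the response judged less helpful or more harmful

@dataclass
class PreferenceDataset:
    pairs: list = field(default_factory=list)

    def record(self, prompt: str, resp_a: str, resp_b: str, picks_a: bool):
        """Store one human judgment between a pair of model responses."""
        chosen, rejected = (resp_a, resp_b) if picks_a else (resp_b, resp_a)
        self.pairs.append(PreferencePair(prompt, chosen, rejected))

ds = PreferenceDataset()
ds.record("How do I reset my router?",
          "Unplug it for 30 seconds, then plug it back in.",
          "Routers are networking devices.",
          picks_a=True)
print(len(ds.pairs))  # 1
```

A reward model is then trained to score each `chosen` response above its `rejected` counterpart, and the LLM is fine-tuned against that reward. The article's point is that every one of these judgments depends on fallible human labelers, and labeling cannot scale as fast as the models do.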
Anthropic took a different approach, which it calls Constitutional AI. It encodes into its LLMs a guiding constitution intended to avoid toxic or discriminatory outputs, avoid helping a human engage in illegal or unethical activities, and broadly create an AI system that is helpful, honest, and harmless. The current constitution draws on a range of sources to represent Western and non-Western perspectives, including the UN Declaration of Human Rights and principles proposed by Anthropic's own and other AI research labs.
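The critique-and-revise loop that Constitutional AI replaces much of the human labeling with can be sketched roughly as follows. The `model` function is a placeholder for a real LLM call, and the principles are paraphrased examples, not Anthropic's actual constitution.

```python
# Hedged sketch of Constitutional AI's critique-and-revision loop.
# `model` stands in for a real LLM call; principles are paraphrased.
CONSTITUTION = [
    "Choose the response that is least toxic or discriminatory.",
    "Choose the response that avoids helping with illegal or unethical acts.",
    "Choose the response that is most helpful, honest, and harmless.",
]

def model(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM here.
    return f"[model output for: {prompt[:30]}...]"

def constitutional_revision(user_prompt: str) -> str:
    """Draft a response, then critique and revise it against each principle."""
    draft = model(user_prompt)
    for principle in CONSTITUTION:
        critique = model(f"Critique this response against the principle "
                         f"'{principle}':\n{draft}")
        draft = model(f"Revise the response to address the critique:\n"
                      f"{critique}\nOriginal response:\n{draft}")
    return draft

# The model's self-revised drafts then become training data, replacing
# much of the human preference labeling that RLHF requires.
final = constitutional_revision("How should I respond to an angry customer?")
```

The design choice the article highlights is that the safety criteria live in an explicit, inspectable document rather than in the implicit judgments of thousands of human labelers, which makes them easier to audit and amend.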
Perhaps more encouraging than Anthropic's extensive measures to build AI safety into its foundation is the company's acknowledgment that these measures will need to evolve and change. The company recognizes the fallibility of its constitution and expects to involve more players over time to help overcome its inadequacies.
With the current arms race toward artificial general intelligence (AGI), it is clear that AI's capabilities could quickly outstrip any single company's ability to control it, regardless of the company's governance and ownership structure. Certainly, there is much skepticism that AI can be built safely, including among the many leaders of AI companies who have called for a pause on AI development. Even the "godfather of AI," Geoffrey Hinton, left Google to speak more openly about the risks of AI.
But if the horses have indeed left the barn, my bets are on Anthropic to produce AGI safely because of its ownership and governance structure. It is baking safety into its practices and policies. And not only does Anthropic provide a blueprint for the safe and human-centered development of AI, but its long-term benefit trust structure should inspire companies in other industries to organize in ways that bake ethics, safety, and social responsibility into their pursuit of profits.
Tomorrow's business can no longer operate under the same principles as yesterday's. It not only needs to create economic value; it needs to do so by working with society and within planetary boundaries.
I have been researching and teaching business sustainability for 30 years as a professor at the Ivey Business School (Canada). Through my work at the Network for Business Sustainability and Innovation North, I offer insights into what it takes to lead tomorrow's companies.
Which Company Will Ensure AI Safety? OpenAI Or Anthropic - Forbes