The Sam Altman saga reveals the need for AI transparency

Opinion

By Alex Tapscott

Published Nov. 25, 2023, 12:00 p.m. ET

Artificial-intelligence titan OpenAI fired and then rehired its charismatic CEO, Sam Altman, this past week. REUTERS

It's been a roller-coaster week for Sam Altman, the past, now present and, hopefully, future CEO of artificial-intelligence giant OpenAI.

Last weekend, the company's board shocked the technology world by firing Altman for no apparent reason.

The move left Microsoft, OpenAI's largest investor, reeling.

After failing to have him reinstated, Microsoft CEO Satya Nadella announced that Altman and his co-founder Greg Brockman were jumping ship to lead Microsoft's new AI research arm.

Next up was a near company-wide revolt, as most of OpenAI's 800 employees made clear they wanted Altman back or were ready to follow him to Microsoft.

And so by midweek Altman had been reinstated at OpenAI, accompanied by a new board of directors, which includes former Harvard President Lawrence Summers.

The entire affair has been stealthy, both in its speed and in its possible subterfuge.

What were the real reasons for dismissing Altman, a hugely capable leader who, among other things, was spearheading a funding round valuing OpenAI at $87 billion?

That's probably a question for ChatGPT.

OpenAI began life as a non-profit tasked with advancing responsible AI research but has more recently morphed into a typical high-growth tech company.

Some on the board, including the company's chief scientist and an AI ethicist, worried that Altman was breaking away from the company's founding principles of altruism.

They feared that Altman's bottom-line focus and new AI products reaching near-sentient status could put humanity at risk.

The Altman-OpenAI saga has left many industry observers with a Silicon Valley-style case of whiplash.

There's also a fair measure of uncertainty around this next-gen OpenAI, both in terms of its ongoing stability and its approach to the future growth of AI as a whole.

Will this week's backroom machinations further entrench existing tech giants, like Microsoft?

Or will fast-moving start-ups like OpenAI remain the stewards of AI's future?

Will governments throttle AIs growth through onerous new rules?

Or will so-called doomer AI skeptics turn the public against AI before it even gets fully going?

The truth is that none of these choices addresses AI's biggest concern: the murkiness over how to train, build, and ship new AI products responsibly.

And fixing this begins with doubling down on openness and transparency.

Indeed, Microsoft's Nadella called the naming of a new OpenAI board a key first step toward well-informed and effective governance.

For AI to reach its potential safely at scale, we need transparency improvements at every step.

We need to decentralize AI's existing framework so that it's governed by many rather than a few.

Embracing decentralized decision-making reduces the risk of any single point of failure, such as a disgruntled board, a charismatic CEO or an authoritarian regime.

As Walter Isaacson wrote, "Innovation occurs when ripe seeds fall on fertile ground."

In other words, the AI technology stack is fertile; to cultivate it, we must plant new and more inclusive ideas.

Let's start at the bottom of that stack, with hardware.

Today, three companies (Amazon Web Services, Microsoft, and Google) control three-fourths of the cloud-computing market storing all that AI data.

One company, NVIDIA, manufactures most of the chips.

Decentralization would allow smaller, user-owned networks to offset this hegemony, while adding much-needed capacity to the industry.

Altman was in the Middle East raising money for a new hardware venture that would rival NVIDIA when he was fired.

To dislodge the big players entirely, he should embrace a decentralized model instead.

Next up are so-called foundation models, the AI brains that generate language, make art and write code (and lame jokes).

Companies guard these models with little oversight or transparency.

OpenAI's models, for instance, are closed tight to public scrutiny. User-owned networks with multi-stakeholder input would be better than Microsoft or OpenAI having complete foundational control, which is where we are headed.

Equally important is actual data.

To train an AI foundation model, we need lots of data.

Companies like Microsoft and Amazon have grown rich and powerful amassing mountains of user data; that's one reason OpenAI partnered with Microsoft to begin with.

Yet users don't know how these AI firms are exploiting their personal data to train their models.

Decentralized data marketplaces such as Ocean Protocol allow individuals and organizations to securely (and accurately) share their data with AI developers.

The data silos of tech giants become less important.

Finally, at the top of the stack are applications.

Imagine a chatbot for K-12 students that acts as their personal tutor, fitness instructor and guidance counselor.

We want transparency from AI products that talk to our children and everyone else.

We also want some say in what these apps collect and store about us, how they use and monetize this information, and when they destroy it.

OpenAI currently offers little of either.

AI could alter humanity's fate profoundly. But so far, just a select few, Altman and Nadella among them, are determining its future behind closed doors.

They claim to represent the interests of all of humanity, but no one really knows.

Neither do we know why OpenAI initially sent Altman packing last week.

But a lack of consistent candor (aka transparency) was cited by his detractors.

Back where it all began, Altman will likely emerge stronger than ever.

Now he must use that strength to advance the core openness OpenAI has always claimed to hold dear.

Alex Tapscott is the author of "Web3: Charting the Internet's Next Economic and Cultural Frontier" (HarperCollins, out now) and a portfolio manager at Ninepoint Partners.
