The plan for AI to eat the world

OpenAI CEO Sam Altman. | JOEL SAGET/AFP via Getty Images

If artificial general intelligence ever arrives (an AI that surpasses human intelligence and capability), what will it actually do to society, and how can we prepare ourselves for it?

That's the big, long-term question looming over the effort to regulate this new technological force.

Tech executives have tried to reassure Washington that their new AI products are tools for harmonious progress and not scary techno-revolution. But if you read between the lines of a new, exhaustive profile of OpenAI published yesterday in Wired, the implications of the company's takeover of the global tech conversation become stark, and go a long way toward answering those big existential questions.

Veteran tech journalist Steven Levy spent months with the company's leaders, employees and former engineers, and came away convinced that Sam Altman and his team believe not only that artificial general intelligence, or AGI, is inevitable, but that it's likely to transform the world entirely.

That makes their mission a political one, even if it doesn't track easily along our current partisan boundaries, and they're taking halting, but deliberate, steps toward achieving it behind closed doors in San Francisco. They expect AGI to change society so much that the company's bylaws contain written provisions for an upended, hypothetical version of the future where our current contracts and currencies have no value.

"Somewhere in the restructuring documents is a clause to the effect that, if the company does manage to create AGI, all financial arrangements will be reconsidered," Levy notes. "After all, it will be a new world from that point on."

Sandhini Agarwal, an OpenAI policy researcher, put a finer point on how she sees the company's mission at this point in time: "Look back at the industrial revolution. Everyone agrees it was great for the world, but the first 50 years were really painful. We're trying to think how we can make the period before adaptation of AGI as painless as possible."

There's an immediately obvious laundry list of questions that OpenAI's race to AGI raises, most of them still unanswered: Who will be spared the pain of this "period before adaptation of AGI," for example? Or how might it transform civic and economic life? And just who decided that Altman and his team get to be the ones to set its parameters, anyway?

The biggest players in the AI world see the achievement of OpenAI's mission as a sort of biblical Jubilee, erasing all debts and winding back the clock to a fresh start for our social and political structures.

So if that's really the case, how is it possible that the government isn't kicking down the doors of OpenAI's San Francisco headquarters like the faceless space-suited agents in E.T.?

In a society based on principles of free enterprise, of course, Altman and his employees are as legally entitled to do what they please in this scenario as they would be if they were building a dating app or Uber competitor. They've also made a serious effort to demonstrate their agreement with the White House's own stated principles for AI development. Levy reported on how democratic caution was a major concern in releasing progressively more powerful GPT models, with chief technology officer Mira Murati telling him they "did a lot of work with misinformation experts and did some red-teaming" and that there was "a lot of discussion internally" on how much to release around the 2019 release of GPT-2.

Those nods toward social responsibility are a key part of OpenAI's business model and media stance, but not everyone is satisfied with them. That includes some of the company's former top executives, who split off to found Anthropic in 2021. That company's CEO, Dario Amodei, told the New York Times this summer that his company's goal isn't necessarily to make money or usher in AGI, but to set safety standards with which other top competitors will feel compelled to comply.

The big questions about AI changing the world might all seem theoretical. But those within the AI community, and a growing number of watchdogs and politicians, are already taking them deadly seriously (despite a steadfast chorus of computer scientists who remain skeptical that AGI is possible at all).

Just take a recent jeremiad from Foundation for American Innovation senior economist Samuel Hammond, who in a series of blog posts has tackled the political implications of AGI boosters' claims, if taken at face value, and of a potential response from government:

"The moment governments realize that AI is a threat to their sovereignty, they will be tempted to clamp down in a totalitarian fashion," Hammond writes. "It's up to liberal democracies to demonstrate institutional co-evolution as a third-way between degenerate anarchy and an AI Leviathan."

For now, that's a far-fetched future scenario. But as Levy's profile of OpenAI reveals, it's one that the people with the most money, computing power and public sway in the AI world hold as gospel truth. Should the AGI revolution put politicians across the globe on their back foot, or out of power entirely, they won't be able to say they didn't have a warning.

On today's POLITICO Tech podcast, an AI leader recommends some very specific tools for the government to put in its toolbox when it comes to making AI safe globally.

Mustafa Suleyman, CEO of Inflection AI and co-founder of Google DeepMind, told POLITICO's Steven Overly that Washington needs to put limits on the sale of AI hardware and appoint a cabinet-level regulator for the tech.

"It is a travesty that we don't have senior technical contributors in cabinet and in every government department given how critical digitization is to every aspect of our world," Suleyman told Steven, and he writes in his new book that the next five or so years are "absolutely critical," a tight window when certain pressure points can still slow technology down.

To hear the full interview with Suleyman and other tech leaders, subscribe to POLITICO Tech on Apple, Spotify, Google or wherever you get your podcasts.

California Gov. Gavin Newsom. | Josh Edelson/AFP/Getty Images

The top official on the AI revolution's home turf is laying down some rules for the state's use of the technology.

California Gov. Gavin Newsom issued an executive order today directing the state's agencies to research the potential risks that AI poses, devise new policies and put rules in place to ensure its ethical and legal use.

"This is a potentially transformative technology, comparable to the advent of the internet, and we're only scratching the surface of understanding what GenAI is capable of," Newsom said in a press release. "We recognize both the potential benefits and risks these tools enable."

That makes California just the latest state to tackle AI in its own idiosyncratic manner, as Newsom took care in his remarks to note the role the state's tech industry plays in the technology's development. POLITICO's Mohar Chatterjee reported for DFD in June on AI legislative efforts in Colorado, and Massachusetts saw similar efforts with a novel twist this year as well.

Stay in touch with the whole team: Ben Schreckinger, Derek Robertson, Mohar Chatterjee and Steve Heuser. Follow us @DigitalFuture on Twitter.

If you've had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

