Can we govern AI before it's too late?

That's the question I set out to answer in my latest Foreign Affairs deep dive, penned with one of the world's top minds on artificial intelligence, Inflection AI CEO and Co-Founder Mustafa Suleyman.

Just a year ago, there wasn't a single world leader I'd meet who would bring up AI. Today, there isn't a single world leader who doesn't. In this short time, the explosive debut of generative AI systems like ChatGPT and Midjourney signaled the beginning of a new technological revolution that will remake politics, economies, and societies. For better and for worse.

As governments are starting to recognize, realizing AI's astonishing upside while containing its disruptive and destructive potential may be the greatest governance challenge humanity has ever faced. If governments don't get it right soon, it's possible they never will.

Why AI needs to be governed

First, a disclaimer: I'm an AI enthusiast. I believe AI will drive nothing less than a new globalization that will give billions of people access to world-leading intelligence, facilitate impossible-to-imagine scientific advances, and unleash extraordinary innovation, opportunity, and growth. Importantly, we're heading in this direction without policy intervention: The fundamental technologies are proven, the money is available, and the incentives are aligned for full-steam-ahead progress.

At the same time, artificial intelligence has the potential to cause unprecedented social, economic, political, and geopolitical disruption that upends our lives in lasting and irreversible ways.

In the nearest term, AI will be used to generate and spread toxic misinformation, eroding social trust and democracy; to surveil, manipulate, and subdue citizens, undermining individual and collective freedom; and to create powerful digital or physical weapons that threaten human lives. In the longer run, AI could also destroy millions of jobs, worsening existing inequalities and creating new ones; entrench discriminatory patterns and distort decision-making by amplifying bad information feedback loops; or spark unintended and uncontrollable military escalations that lead to war. Farther out on the horizon lurks the promise of artificial general intelligence (AGI), the still uncertain point where AI exceeds human performance at any given task, and the existential (albeit speculative) peril that an AGI could become self-directed, self-replicating, and self-improving beyond human control.

Experts disagree on which of these risks are most important or urgent. Some lie awake at night fearing the prospect of a superpowerful AGI turning humans into slaves. To me, the real catastrophic threat is humans using ever more powerful and available AI tools for malicious or unintended purposes. But it doesn't really matter: Given how little we know about what AI might be able to do in the future (what kinds of threats it could pose, how severe and irreversible its damage could be), we should prepare for the worst while hoping for, and working toward, the best.

What makes AI so hard to govern

AI can't be governed like any previous technology because it's unlike any previous technology. It doesn't just pose policy challenges; its unique features also make solving those challenges progressively harder. That is the AI power paradox.

For starters, the pace of AI progress is hyper-evolutionary. Take Moore's Law, which has successfully predicted the doubling of computing power every two years. The new wave of AI makes that rate of progress seem quaint. The amount of computation used to train the most powerful AI models has increased by a factor of 10 every year for the last 10 years. Processing that once took weeks now happens in seconds. Yesterday's cutting-edge capabilities are running on smaller, cheaper, and more accessible systems today.

As their enormous benefits become self-evident, AI systems will only grow bigger, cheaper, and more ubiquitous. And with each new order of magnitude, unexpected capabilities will emerge. Few predicted that training on raw text would enable large language models to produce coherent, novel, and even creative sentences. Fewer still expected language models to be able to compose music or solve scientific problems, as some now can. Soon, AI developers will likely succeed in creating systems capable of quasi-autonomy (i.e., able to achieve concrete goals with minimal human oversight) and self-improvement, a critical juncture that should give everyone pause.

Then there's the ease of AI proliferation. As with any software, AI algorithms are much easier and cheaper to copy and share (or steal) than physical assets. Although the most powerful models still require sophisticated hardware to work, midrange versions can run on computers that can be rented for a few dollars an hour. Soon, such models will run on smartphones. No technology this powerful has ever become so widely accessible so quickly. All this plays out on a global field: Once released, AI models can and will be everywhere. All it takes is one malign or breakout model to wreak worldwide havoc.

AI also differs from older technologies in that almost all of it can be characterized as general purpose and dual use (i.e., having both military and civilian applications). An AI application built to diagnose diseases might be able to create and weaponize a new one. The boundaries between the safely civilian and the militarily destructive are inherently blurred. This makes AI more than just software development as usual; it is an entirely new means of projecting power.

As such, its advancement is being propelled by irresistible incentives. Whether for its repressive capabilities, economic potential, or military advantage, AI supremacy is a strategic objective of every government and company with the resources to compete. At the end of the Cold War, powerful countries might have cooperated to arrest a potentially destabilizing technological arms race. But today's tense geopolitical environment makes such cooperation much harder. From the vantage point of the world's two superpowers, the United States and China, the risk that the other side will gain an edge in AI is greater than any theoretical risk the technology might pose to society or to their own domestic political authority. This zero-sum dynamic means that Beijing and Washington are focused on accelerating AI development, rather than slowing it down.

But even if the world's powers were inclined to contain AI, there's no guarantee they'd be able to, because, like most of the digital world, every aspect of AI is presently controlled by the private sector. I call this arrangement "technopolar," with technology companies effectively exerting sovereignty over the rules that apply to their digital fiefdoms at the expense of governments. The handful of large tech firms that currently control AI may retain their advantage for the foreseeable future, or they may be eclipsed by a raft of smaller players as low barriers to entry, open-source development, and near-zero marginal costs lead to uncontrolled proliferation of AI. Either way, AI's trajectory will be largely determined not by governments but by private businesses and individual technologists who have little incentive to self-regulate.

Any one of these features would strain traditional governance models; all of them together render these models inadequate and make the challenge of governing AI unlike anything governments have faced before.

The technoprudential" imperative

For AI governance to work, it must be tailored to the specific nature of the technology and the unique challenges it poses. But because the evolution, uses, and risks of AI are inherently unpredictable, AI governance can't be fully specified at the outset. Instead, it must be as innovative, adaptive, and evolutionary as the technology it seeks to govern.

Our proposal? Technoprudentialism. That's a big word, but essentially it's about governing AI in much the same way that we govern global finance. The idea is that we need a system to identify and mitigate risks to global stability posed by AI before they occur, without choking off innovation and the opportunities that flow from it, and without getting bogged down in everyday politics and geopolitics. In practice, technoprudentialism requires the creation of multiple complementary governance regimes, each with different mandates, levers, and participants, to address the various aspects of AI that could threaten geopolitical stability, guided by common principles that reflect AI's unique features.

Mustafa and I argue that AI governance needs to be precautionary, agile, inclusive, impermeable, and targeted. Built atop these principles should be a minimum of three AI governance regimes: an Intergovernmental Panel on Artificial Intelligence for establishing facts and advising governments on the risks posed by AI, an arms control-style mechanism for preventing an all-out arms race between them, and a Geotechnology Stability Board for managing the disruptive forces of a technology unlike anything the world has seen.

The 21st century will throw up few challenges as daunting or opportunities as promising as those presented by AI. Whether our future is defined by the former or the latter depends on what policymakers do next.
