
Why Silicon Valley AI prophecies just feel like repackaged religion – Vox.com

Suppose I told you that in 10 years, the world as you know it will be over. You will live in a sort of paradise. You won't get sick, or age, or die. Eternal life will be yours! Even better, your mind will be blissfully free of uncertainty: you'll have access to perfect knowledge. Oh, and you'll no longer be stuck on Earth. Instead, you can live up in the heavens.

If I told you all this, would you assume that I was a religious preacher or an AI researcher?

Either one would be a pretty solid guess.

The more you listen to Silicon Valley's discourse around AI, the more you hear echoes of religion. That's because a lot of the excitement about building a superintelligent machine comes down to recycled religious ideas. Most secular technologists who are building AI just don't recognize that.

These technologists propose cheating death by uploading our minds to the cloud, where we can live digitally for all eternity. They talk about AI as a decision-making agent that can judge with mathematical certainty what's optimal and what's not. And they envision artificial general intelligence (AGI), a hypothetical system that can match human problem-solving abilities across many domains, as an endeavor that guarantees human salvation if it goes well, even as it spells doom if it goes badly.

These visions are almost identical to the visions of Christian eschatology, the branch of theology that deals with the end times or the final destiny of humanity.

Christian eschatology tells us that we're all headed toward the four last things: death, judgment, and heaven or hell. Although everyone who's ever lived so far has died, we'll be resurrected after the second coming of Christ to find out where we'll live for all eternity. Our souls will face a final judgment, care of God, the perfect decision-maker. That will guarantee us heaven if it goes well, but hell if it goes badly.

Five years ago, when I began attending conferences in Silicon Valley and first started to notice parallels like these between religion talk and AI talk, I figured there was a simple psychological explanation. Both were a response to core human anxieties: our mortality; the difficulty of judging whether we're doing right or wrong; the unknowability of our life's meaning and ultimate place in this universe or the next one. Religious thinkers and AI thinkers had simply stumbled upon similar answers to the questions that plague us all.

So I was surprised to learn that the connection goes much deeper.

"The intertwining of religion and technology is centuries old, despite the people who'll tell you that science is value-neutral and divorced from things like religion," said Robert Geraci, a professor of religious studies at Manhattan College and author of Apocalyptic AI. "That's simply not true. It never has been true."

In fact, historians tracing the influence of religious ideas contend that we can draw a straight line from Christian theologians in the Middle Ages to the father of empiricism in the Renaissance to the futurist Ray Kurzweil to the tech heavyweights hes influenced in Silicon Valley.

Occasionally, someone there still dimly senses the parallels. "Sometimes I think a lot of the breathless enthusiasm for AGI is misplaced religious impulses from people brought up in a secular culture," Jack Clark, co-founder of the AI safety company Anthropic, mused on Twitter in March.

Mostly, though, the figures spouting a vision of AGI as a kind of techno-eschatology, from Sam Altman, the CEO of ChatGPT-maker OpenAI, to Elon Musk, who wants to link your brain to computers, express their ideas in secular language. They're either unaware or unwilling to admit that the vision they're selling derives much of its power from the fact that it's plugging into age-old religious ideas.

But it's important to know where these ideas come from. Not because "religious" is somehow pejorative; just because ideas are religious doesn't mean there's something wrong with them (the opposite is often true). Instead, we should understand the history of these ideas (virtual afterlife as a mode of salvation, say, or moral progress understood as technological progress) so we see that they're not immutable or inevitable; certain people came up with them at certain times to serve certain purposes, but there are other ideas out there if we want them. We don't have to fall prey to the danger of the single story.

"We have to be careful with what narratives we buy into," said Elke Schwarz, a political theorist at Queen Mary University of London who studies the ethics of military AI. "Whenever we talk about something religious, there's something sacred at play. Having something that's sacred can enable harm, because if something is sacred it's worth doing the worst things for."

In the Abrahamic religions that shaped the West, it all goes back to shame.

Remember what happens in the book of Genesis? When Adam and Eve eat from the tree of knowledge, God expels them from the garden of Eden and condemns them to all the indignities of flesh-and-blood creatures: toil and pain, birth and death. Humankind is never the same after that fall from grace. Before the sin, we were perfect creatures made in the image of God; now we're miserable meat sacks.

But in the Middle Ages, Christian thinkers developed a radical idea, as the historian David Noble explains in his book The Religion of Technology. What if tech could help us restore humanity to the perfection of Adam before the fall?

The influential ninth-century philosopher John Scotus Eriugena, for example, insisted that part of what it meant for Adam to be formed in God's image was that he was a creator, a maker. So if we wanted to restore humanity to the God-like perfection of Adam prior to his fall, we'd have to lean into that aspect of ourselves. Eriugena wrote that the mechanical arts (a.k.a. technology) were "man's links with the Divine, their cultivation a means to salvation."

This idea took off in medieval monasteries, where the motto ora et labora (prayer and work) began to circulate. Even in the midst of the so-called Dark Ages, some of these monasteries became hotbeds of engineering, producing inventions like the first known tidal-powered water wheel and impact-drilled well. Catholics became known as innovators; to this day, engineers have four patron saints in the religion. There's a reason why some say the Catholic Church was the Silicon Valley of the Middle Ages: It was responsible for everything from metallurgy, mills, and musical notation to the wide-scale adoption of clocks and the printing press, as I noted in a 2018 Atlantic article.

This wasn't tech for tech's sake, or for profit's sake. Instead, tech progress was synonymous with moral progress. By recovering humanity's original perfection, we could usher in the kingdom of God. As Noble writes, "Technology had come to be identified with transcendence, implicated as never before in the Christian idea of redemption."

The medieval identification of tech progress with moral progress shaped successive generations of Christian thinkers all the way into modernity. A pair of Bacons illustrates how the same core belief that tech would accomplish redemption influenced both religious traditionalists and those who adopted a scientific worldview.

In the 13th century, the alchemist Roger Bacon, taking a cue from biblical prophecies, sought to create an elixir of life that could achieve something like the Resurrection as the apostle Paul described it. The elixir, Bacon hoped, would give humans not just immortality, but also magical abilities like traveling at the speed of thought. Then in the 16th century, Francis Bacon (no relation) came along. Superficially he seemed very different from his predecessor (he critiqued alchemy, considering it unscientific), yet he prophesied that we'd one day use tech to overcome our mortality "for the glory of the Creator and the relief of man's estate."

By the Renaissance, Europeans dared to dream that we could remake ourselves in the image of God not only by inching toward immortality, but also by creating consciousness out of inanimate matter.

"The possibility to make new life is, other than defeating death, the ultimate power," Schwarz pointed out.

Christian engineers created automata: wooden robots that could move around and mouth prayers. Muslims were rumored to create mechanical heads that could talk like oracles. And Jewish folktales spread about rabbis who brought to life clay figures, called golems, by permuting language in magical ways. In the stories, the golem sometimes offers salvation by saving the Jewish community from persecution. But other times, the golem goes rogue, killing people and using its powers for evil.

If all of this is sounding distinctly familiar, well, it should. The golem idea has been cited in works on AI risk, like the 1964 book God & Golem, Inc. by mathematician and philosopher Norbert Wiener. You hear the same anxieties today in the slew of open letters released by technologists, warning that AGI will bring upon us either salvation or doom.

Reading these statements, you might well ask: why would we even want to create AGI, if AGI threatens doom as much as it promises salvation? Why not just limit ourselves to creating narrower AI, which could already work wonders in applications like curing diseases, and stick with that for a while?

For an answer to that, come with me on one more romp through history, and we'll start to see how the recent rise of three intertwined movements has molded Silicon Valley's visions for AI.

A lot of people assume that when Charles Darwin published his theory of evolution in 1859, all religious thinkers instantly saw it as a horrifying, heretical threat, one that dethroned humans as God's most godly creations. But some Christian thinkers embraced it as gorgeous new garb for the old spiritual prophecies. After all, religious ideas never really die; they just put on new clothes.

A prime example was Pierre Teilhard de Chardin, a French Jesuit priest who also studied paleontology in the early 1900s. He believed that human evolution, nudged along with tech, was actually the vehicle for bringing about the kingdom of God, and that the melding of humans and machines would lead to an explosion of intelligence, which he dubbed the omega point. Our consciousness would become a state of super-consciousness where we merge with the divine and become a new species.

Teilhard influenced his pal Julian Huxley, an evolutionary biologist who was president of both the British Humanist Association and the British Eugenics Society, as author Meghan O'Gieblyn documents in her 2021 book God, Human, Animal, Machine. It was Huxley who popularized Teilhard's idea that we should use tech to evolve our species, calling it transhumanism.

That, in turn, influenced the futurist Ray Kurzweil, who made basically the same prediction as Teilhard: We're approaching a time when human intelligence merges with machine intelligence, becoming unbelievably powerful. Only instead of calling it the omega point, Kurzweil rebranded it as the singularity.

"The human species, along with the computational technology it created, will be able to solve age-old problems and will be in a position to change the nature of mortality in a postbiological future," wrote Kurzweil in his 1999 national bestseller The Age of Spiritual Machines. (Strong New Testament vibes there. Per the book of Revelation: "Death shall be no more, neither shall there be mourning nor crying nor pain any more, for the former things have passed away.")

Kurzweil has copped to the spiritual parallels, and so have those who've formed explicitly religious movements around worshiping AI or using AI to move humanity toward godliness, from Martine Rothblatt's Terasem movement to the Mormon Transhumanist Association to Anthony Levandowski's short-lived Way of the Future church. But many others, such as Oxford philosopher Nick Bostrom, insist that unlike religion, transhumanism relies on critical reason and our best available scientific evidence.

These days, transhumanism has a sibling, another movement that was born in Oxford and caught fire in Silicon Valley: effective altruism (EA), which aims to figure out how to do the most good possible for the most people. Effective altruists also say their approach is rooted in secular reason and evidence.

Yet EA actually mirrors religion in many ways: functionally (it brings together a community built around a shared vision of moral life), structurally (it's got a hierarchy of prophet-leaders, canonical texts, holidays, and rituals), and aesthetically (it promotes tithing and favors asceticism). Most importantly for our purposes, it offers an eschatology.

EA's eschatology comes in the form of its most controversial idea, longtermism, which Musk has described as "a close match for my philosophy." It argues that the best way to help the most people is to focus on ensuring that humanity will survive far into the future (as in, millions of years from now), since many more billions of people could exist in the future than in the present, assuming our species doesn't go extinct first.

And here's where we start to get the answer to our question about why technologists are set on building AGI.

To effective altruists and longtermists, just sticking with narrow AI is not an option. Take Will MacAskill, the Oxford philosopher known as the reluctant prophet of effective altruism and longtermism. In his 2022 book What We Owe the Future, he explains why he thinks a plateauing of technological advancement is unacceptable. A period of stagnation, he writes, "could increase the risks of extinction and permanent collapse."

He cites his colleague Toby Ord, who estimates that the probability of human extinction through risks like rogue AI and engineered pandemics over the next century is one in six: Russian roulette. Another fellow traveler in EA, Holden Karnofsky, likewise argues that we're living at the "hinge of history" or the "most important century," a singular time in the story of humanity when we could either flourish like never before or bring about our own extinction. MacAskill, like Musk, suggests in his book that a good way to avoid extinction is to settle on other planets so we aren't keeping all our eggs in one basket.

But that's only half of MacAskill's moral case for space settlement. The other half is that we should be trying to make future human civilization as big and utopian as possible. As MacAskill's Oxford colleague Bostrom has argued, the colonization of the universe would give us the area and resources with which to run gargantuan numbers of digital simulations of humans living happy lives. The more space, the more happy (digital) humans! This is where the vast majority of moral value lies: not in the present on Earth, but in the future, in heaven. Sorry, I meant in the virtual afterlife.

When we put all these ideas together and boil them down, we get this basic proposition:

Any student of religion will immediately recognize this for what it is: apocalyptic logic.

Transhumanists, effective altruists, and longtermists have inherited the view that the end times are nigh and that technological progress is our best shot at moral progress. For people operating within this logic, it seems natural to pursue AGI. Even though they view AGI as a top existential risk, they believe we can't afford not to build it, given its potential to catapult humanity out of its precarious earthbound adolescence (which will surely end any minute!) and into a flourishing interstellar adulthood (so many happy people, so much moral value!). Of course we ought to march forward technologically, because that means marching forward morally!

But is this rooted in reason and evidence? Or is it rooted in dogma?

The hidden premise here is technological determinism, with a side dash of geopolitics. Even if you and I don't create terrifyingly powerful AGI, the thinking goes, somebody else or some other country will, so why stop ourselves from getting in on the action? OpenAI's Altman exemplifies the belief that tech will inevitably march forward. He wrote on his blog in 2017 that unless we destroy ourselves first, superhuman AI is going to happen. Why? "As we have learned, scientific advancement eventually happens if the laws of physics do not prevent it."

Have we learned that? I see no evidence to suggest that anything that can be invented necessarily will be invented. (As AI Impacts lead researcher Katja Grace memorably wrote, "Consider a machine that sprays shit in your eyes. We can technologically do that, but probably nobody has ever built that machine.") It seems more likely that people tend to pursue innovations when there are very powerful economic, social, or ideological pressures pushing them to.

In the case of the AGI fever that's gripped Silicon Valley, recycled religious ideas in the garb of transhumanism, effective altruism, and longtermism have supplied the social and ideological pressures. As for the economic, profit-making pressure, well, that's always operative in Silicon Valley.

Now, 61 percent of Americans believe AI may threaten human civilization, and that belief is especially strong among evangelical Christians, according to a Reuters/Ipsos poll in May. To Geraci, the religious studies scholar, that doesn't come as a surprise. Apocalyptic logic, he noted, is "very, very, very powerful in American Protestant Christianity," to the point that 4 in 10 US adults currently believe that humanity is living in the end times.

Unfortunately, apocalyptic logic tends to breed dangerous fanaticism. In the Middle Ages, when false messiahs arose, people gave up their worldly possessions to follow their prophet. Today, with talk of AGI doom suffusing the media, true believers drop out of college to go work on AI safety. The doom-or-salvation, heaven-or-hell logic pushes people to take big risks, to ante up and go all in.

In an interview with me last year, MacAskill disavowed extreme gambles. He told me he imagines that a certain type of Silicon Valley tech bro, thinking there's a 5 percent chance of dying from some AGI catastrophe and a 10 percent chance AGI ushers in a blissful utopia, would be willing to take those odds and rush ahead with building AGI.

"That's not the sort of person I want building AGI, because they are not responsive to the moral issues," MacAskill told me. "Maybe that means we have to delay the singularity in order to make it safer. Maybe that means it doesn't come in my lifetime. That would be an enormous sacrifice."

When MacAskill told me this, I pictured a Moses figure, looking out over the promised land but knowing he would not reach it. The longtermist vision seemed to require of him a brutal faith: You personally will not be saved, but your spiritual descendants will.

There's nothing inherently wrong with believing that tech can radically improve humanity's lot. In many ways, it obviously already has.

"Technology is not the problem," Ilia Delio, a Franciscan sister who holds two PhDs and a chair in theology at Villanova University, told me. In fact, Delio is comfortable with the idea that we're already in a new stage of evolution, shifting from Homo sapiens to techno sapiens. She thinks we should be open-minded about proactively evolving our species with tech's help.

But she's also clear that we need to be explicit about which values are shaping our tech "so that we can develop the technology with purpose and with ethical boundaries," she said. "Otherwise, technology is blind and potentially dangerous."

Geraci agrees. "If a ton of people in Silicon Valley are going, 'Hey, I'm in for this technology because it's going to make me immortal,' that's a little bit terrifying," he told me. "But if somebody says, 'I'm in for this technology because I think we're going to be able to use it to solve world hunger,' those are two very different motives. It would impact the types of products you try to design, the population for which you are designing, and the way you try to deploy it in the world around you."

Part of making deliberate decisions about which values animate tech is also being keenly aware of who gets the power to decide. According to Schwarz, the architects of artificial intelligence have sold us on a vision of necessary tech progress with AI and set themselves up as the only experts on it, which makes them enormously powerful, arguably more powerful than our democratically elected officials.

"The idea that developing AGI is a kind of natural law becomes an ordering principle, and that ordering principle is political. It gives political power to some and a lot less to most others," Schwarz said. "It's so strange to me to say, 'We have to be really careful with AGI,' rather than saying, 'We don't need AGI, this is not on the table.' But we're already at a point when power is consolidated in a way that doesn't even give us the option to collectively suggest that AGI should not be pursued."

We got to this point in large part because, for the past thousand years, the West has fallen prey to the danger of the single story: the story equating tech progress with moral progress that we inherited from medieval religious thinkers.

"It's the one narrative we have," Delio said. That narrative has made us inclined to defer to technologists (who, in the past, were also spiritual authorities) on the values and assumptions being baked into their products.

What are alternatives? "If another narrative were to say, 'Just the dynamism of being alive is itself the goal,' then we might have totally different aspirations for technology," Delio added. "But we don't have that narrative! Our dominant narrative is to create, invent, make, and to have that change us."

We need to decide what kind of salvation we want. If we're generating our enthusiasm for AI through visions of transcending our earthbound limits and our meat-sack mortality, that will create one kind of societal outcome. But if we commit to using tech to improve the well-being of this world and these bodies, we can have a different outcome. We can, as Noble put it, begin to direct our astonishing capabilities toward more worldly and humane ends.


Read more from the original source:

Why Silicon Valley AI prophecies just feel like repackaged religion - Vox.com


The plan for AI to eat the world – POLITICO

OpenAI CEO Sam Altman. | JOEL SAGET/AFP via Getty Images

If artificial general intelligence ever arrives (an AI that surpasses human intelligence and capability), what will it actually do to society, and how can we prepare ourselves for it?

That's the big, long-term question looming over the effort to regulate this new technological force.

Tech executives have tried to reassure Washington that their new AI products are tools for harmonious progress and not scary techno-revolution. But if you read between the lines of a new, exhaustive profile of OpenAI published yesterday in Wired, the implications of the company's takeover of the global tech conversation become stark, and go a long way toward answering those big existential questions.

Veteran tech journalist Steven Levy spent months with the company's leaders, employees and former engineers, and came away convinced that Sam Altman and his team don't only believe that artificial general intelligence, or AGI, is inevitable, but that it's likely to transform the world entirely.

That makes their mission a political one, even if it doesn't track easily along our current partisan boundaries, and they're taking halting, but deliberate, steps toward achieving it behind closed doors in San Francisco. They expect AGI to change society so much that the company's bylaws contain written provisions for an upended, hypothetical version of the future where our current contracts and currencies have no value.

"Somewhere in the restructuring documents is a clause to the effect that, if the company does manage to create AGI, all financial arrangements will be reconsidered," Levy notes. "After all, it will be a new world from that point on."

Sandhini Agarwal, an OpenAI policy researcher, put a finer point on how she sees the company's mission at this point in time: "Look back at the industrial revolution; everyone agrees it was great for the world, but the first 50 years were really painful. We're trying to think how we can make the period before adaptation of AGI as painless as possible."

There's an immediately obvious laundry list of questions that OpenAI's race to AGI raises, most of them still unanswered: Who will be spared the pain of this period before adaptation of AGI, for example? Or how might it transform civic and economic life? And just who decided that Altman and his team get to be the ones to set its parameters, anyway?

The biggest players in the AI world see the achievement of OpenAI's mission as a sort of biblical Jubilee, erasing all debts and winding back the clock to a fresh start for our social and political structures.

So if that's really the case, how is it possible that the government isn't kicking down the doors of OpenAI's San Francisco headquarters like the faceless space-suited agents in E.T.?

In a society based on principles of free enterprise, of course, Altman and his employees are as legally entitled to do what they please in this scenario as they would be if they were building a dating app or Uber competitor. They've also made a serious effort to demonstrate their agreement with the White House's own stated principles for AI development. Levy reported on how democratic caution was a major concern in releasing progressively more powerful GPT models, with chief technology officer Mira Murati telling him they did a lot of work with misinformation experts and did some red-teaming, and that there was a lot of discussion internally on how much to release around the 2019 release of GPT-2.

Those nods toward social responsibility are a key part of OpenAI's business model and media stance, but not everyone is satisfied with them. That includes some of the company's top executives, who split to found Anthropic in 2021. That company's CEO, Dario Amodei, told the New York Times this summer that his company's entire goal isn't to make money or usher in AGI necessarily, but to set safety standards with which other top competitors will feel compelled to comply.

The big questions about AI changing the world all might seem theoretical. But those within the AI community, and increasing numbers of watchdogs and politicians, are already taking them deadly seriously (despite a steadfast chorus of computer scientists still entirely skeptical about the possibility of AGI at all).

Just take a recent jeremiad from Foundation for American Innovation senior economist Samuel Hammond, who in a series of blog posts has tackled the political implications of AGI boosters' claims if taken at face value, and the implications of a potential response from government:

"The moment governments realize that AI is a threat to their sovereignty, they will be tempted to clamp down in a totalitarian fashion," Hammond writes. "It's up to liberal democracies to demonstrate institutional co-evolution as a third-way between degenerate anarchy and an AI Leviathan."

For now, that's a far-fetched future scenario. But as Levy's profile of OpenAI reveals, it's one that the people with the most money, computing power and public sway in the AI world hold as gospel truth. Should the AGI revolution put politicians across the globe on their back foot, or out of power entirely, they won't be able to say they didn't have a warning.

On today's POLITICO Tech podcast, an AI leader recommends some very specific tools for the government to put in its toolbox when it comes to making AI safe globally.

Mustafa Suleyman, CEO of Inflection AI and co-founder of Google DeepMind, told POLITICO's Steven Overly that Washington needs to put limits on the sale of AI hardware and appoint a cabinet-level regulator for the tech.

"It is a travesty that we don't have senior technical contributors in cabinet and in every government department given how critical digitization is to every aspect of our world," Suleyman told Steven, and he writes in his new book that the next five or so years are absolutely critical, a tight window when certain pressure points can still slow technology down.


California Gov. Gavin Newsom. | Josh Edelson/AFP/Getty Images

The top official on the AI revolution's home turf is laying down some rules for the state's use of the technology.

California Gov. Gavin Newsom issued an executive order today ordering the state's agencies to research potential risks that AI poses, devise new policies and put rules in place to ensure its ethical and legal use.

"This is a potentially transformative technology, comparable to the advent of the internet, and we're only scratching the surface of understanding what GenAI is capable of," Newsom said in a press release. "We recognize both the potential benefits and risks these tools enable."

That makes California just the latest state to tackle AI in its own idiosyncratic manner, as Newsom took care in his remarks to note the role its tech industry plays in the technology's development. POLITICO's Mohar Chatterjee reported for DFD in June on AI legislative efforts in Colorado, and Massachusetts saw similar efforts with a novel twist this year as well.


Read the rest here:

The plan for AI to eat the world - POLITICO


Ziad Obermeyer Named one of TIME’s top 100 Leaders in Artificial … – UC Berkeley School of Public Health

Illustration by TIME

Dr. Ziad Obermeyer, an associate professor at UC Berkeley School of Public Health, has been named to the 2023 TIME100 AI list, the media company's designation of the 100 most influential people working in artificial intelligence.

"I'm so honored to be on this list, and honestly a bit surprised too," said Obermeyer, Blue Cross of California Distinguished Associate Professor of Health Policy and Management. "I think of myself as a doctor who works on problems in medicine, more than an AI person.

"But I think that's the exciting part about AI: It's very applied. And by working to solve real health problems, we can learn a lot about how AI works in general; what it's great at, where it goes wrong, and how to do better. Looking at this list, there are so many people building for a future where AI helps everyone, and that's really exciting."

In its September 7 announcement, TIME highlighted Dr. Obermeyer's innovative research on how racial bias infiltrates the health care system, and noted a study in which he found that a widely used algorithm recommends less health care for Black patients, despite greater health needs, potentially jeopardizing the well-being of millions.

TIME also noted Dandelion Health, an AI-innovation platform Obermeyer co-founded in 2020. Dandelion makes health care data, like electrocardiogram waveforms, sleep monitoring data, and digital pathology, available to algorithm developers for free. TIME also mentioned Nightingale Open Science, Obermeyer's nonprofit company, which builds out datasets in partnership with health systems around the world. Its goal is to bring an open science mentality to health care data, to answer questions like: why do some cancers spread, while others don't?

Obermeyer told TIME that he is cautiously optimistic about AI's impact on health care, although it's crucial to ensure the new technology doesn't cause harm.

"The biggest and most exciting things that are going to happen in the field are nowhere even close to happening yet," he told TIME. "We don't have the imagination to think about what this is going to do in 20 years, 100 years, and how it's going to totally transform health care."

See more here:

Ziad Obermeyer Named one of TIME's top 100 Leaders in Artificial ... - UC Berkeley School of Public Health


Trying To See: Part Three: A Locomotive Coming Fast – Tillamook Headlight-Herald

This column is the third part of a three-part series on the origins, present state, and future consequences of artificial intelligence (AI). The following column deals with the future benefits and risks of AI's development.

Theoretical physicist Stephen Hawking died in 2018 from ALS (Lou Gehrig's) disease. In 2015, three years before he died, Hawking warned that development of full artificial intelligence (today called artificial general intelligence, or AGI) "could spell the end of the human race."

We all have heard dire warnings that increasingly powerful computers will, sooner than later, take off on their own and become capable of redesigning and replicating themselves, outstripping humans' abilities to control them. Such a moment in the planet's history has been called a singularity, the beginning of humanity's end, and the next step in evolution of higher intelligence on Earth--an age in which super-intelligent machines (AGI) will rule the planet.

Such hand-wringing sounds like dire science fiction, but there also are numerous optimists who predict much happier outcomes. These include discovery of new and sustainable technologies, a more equitable distribution of wealth and hopes for various, long-term societal benefits. We do already benefit greatly from AI in medicine, science research, transportation, finance, education and other fields of human endeavor.

AGI optimists point out that increasingly powerful algorithms and computer learning can incorporate human values and ethics, possessing qualities like compassion and empathy. They believe AGI will put itself in humanity's shoes, then act on those values to bring benefit to human societies. It certainly is better to live with hope and aspiration, but who gets to define those values and ethics?

Algorithms are written by many kinds of humans: creative, idealistic, ambitious, generous, competitive--some ethically indifferent or even cruel. Whether consciously or unconsciously, computer coders program algorithms with their own conscious desires and unconscious biases. Various human cultures and smaller groups have different interpretations of what is good and bad. Today, in our politics we even disagree on what constitutes truth and reality. Powerful corporations or banks or criminal groups will try to make AGI take actions that create more wealth and power for them; it is in their nature and their mission to do so. What is to stop them from wreaking havoc on the rest of us?

With their unrelenting desire for more, and our ever-more powerful and seductive technology, they can only be slowed down by more responsible humans. But have world governments been able to stem the spread of nuclear weapons, or respond effectively to human-caused climate change? Have governments been able to control the growth of criminal gangs, drug syndicates, and world-wide weapons sales? Has the US government halted the surging rise of our national debt, or moderated the public's addiction to social media platforms that tear us apart?

What if rapidly more sophisticated AGI outstrips our capacity to control it? What would AGI decide to do regarding human over-population and its degradation of Earth's resources, our increasingly destructive weather, sea-level rise, or other consequences of climate change, including our inadequate supply and distribution of water? Would AGI continue the relentlessly increasing concentration of wealth and power in smaller and smaller groups of people and corporations? Or would AGI see those power centers as a threat to its own desires? How would AGI deal with the threat of nuclear war, humans' fears of people who look different than them or the exploding number of refugees in the world, or the increasing complexities of modern societies that struggle to repair and replace crumbling infrastructure?

How would AGI deal with the world's violent political and religious factions that have been inflamed, then self-organized, through use of social media? What would AGI do about the collapse of nation states (Soviet Union, Haiti, Somalia, Yemen, and others yet to come)? What would AGI do about whole regions of humanity that already have returned to a state of nature where coercion, violence, and terror prevail?

How would AGI networks, learning of all the human-created problems described above, deal with them? Would AGI require us to reduce our current demand for evermore pleasures and products, thus reducing our current levels of excess consumption? Would AGI think democracy and continued freedoms are still important enough to pander to long-complacent people who pay hardly any attention to voting in elections, or who are indifferent to strangers' needs or the needs of their larger society?

Or would powerful AGI machines, driven by their own logic, decide to solve these seemingly intractable problems by dealing forcefully with those who persist in being acquisitive, rebellious, or violent? Would AGI redesign the human genome to create more compliant humans, who by their natures would be subservient to AGI's authority?

No one really can foresee the consequences of AGI, although we already yield to some of its elements, whether beneficial, entertaining or intrusive. Many of us also have become more screen-dependent, passive, less empathetic and less sociable--like Zoom users who resist face-to-face meetings and contacts, claiming them to be inconvenient. Given these human tendencies, plus the increasing power of AGI tools, AGI is on its way to changing the course of human history.

See original here:

Trying To See: Part Three: A Locomotive Coming Fast - Tillamook Headlight-Herald


Humans and Machines: How Artificial Intelligence Risks Conflicts of … – Association of Equipment Manufacturers (AEM)

By Johnathan Josephs, MSL, AEM Regulatory Affairs Manager

Humans understand risk as an organization's legal, financial and criminal exposure if it does not follow industry laws and regulations.

One of these risk areas is conflicts of interest. In general, it's human nature to be poor judges of our own conflicts and not motivated to disclose them. However, it's not our fault! Nuanced motivators such as private gain, hidden ownership, procurement fraud, bid-rigging, service on a board of directors, accepting gifts, or even family and romantic relationships are all pathways to typical conflicts of interest.

This begs a question, though: Can artificial intelligence (AI) understand and disclose these nuanced motivators, like what we expect of humans?

On July 26, 2023, a press release from the Securities and Exchange Commission (SEC) announced a proposed rule featuring new requirements to address risk to investors from conflicts of interest associated with the use of predictive data analytics by broker-dealers and investment advisors. The goal here is to prohibit predictive data analytics (artificial intelligence) and similar technologies from placing firm interests ahead of investors' interests. The rule would, in effect, cause the identified conflicts of interest to be eliminated or their effect neutralized before harm is done to the investor.

By now, we have probably all heard of risks involved with adopting artificial intelligence. The most popular of these is the Skynet network-based conscious group mind and artificial general superintelligence system from the popular Terminator movie franchise. However, short of mitigating compliance concerns like Arnold Schwarzenegger, the heavy equipment off-road industry is left wondering if broader AI regulations will start to affect the way we do business. Manufacturers are seasoned and trained enough to discern a conflict of interest and mitigate appropriately. However, being the new kid on the block, AI does not have the same level of training and ethics as its human counterparts.

Adopting AI is a priority because staying competitive is paramount. First, the wheel revolutionized agriculture. Then the screw held together increasingly complex construction projects. Next, the assembly lines of today use robotic machines which have made life as we know it possible. Logic would dictate we give these machines enough intelligence to operate in our stead. However, innovation often forgets ethics, and in response, industry leaders should consider all inherent risks with AI:

The Colonial Pipeline is the largest pipeline system for refined oil products in the U.S. The pipeline, consisting of three tubes, is 5,500 miles (8,850 km) long and can carry 3 million barrels of fuel per day between Texas and New York. On May 7, 2021, Colonial Pipeline proactively took its systems offline in response to a ransomware attack. On May 13, 2021, Colonial Pipeline announced the company restarted their entire pipeline system and product delivery commenced to all markets, after paying a $4.4m (£3.1m) ransom to the cyber-criminal gang responsible for taking the U.S. fuel pipeline offline.

This tipping point for U.S. cybersecurity regulations kicked off the current trends and regulatory regimes we see today. Manufacturers of heavy duty off-road equipment could learn from the Colonial Pipeline case study in terms of cybersecurity, third-party risk management and ransomware attacks. Shoring up enterprise compliance and securing data networks may be a way to protect your business, protect the general public and comply with the new SEC regulations.


Read the original here:

Humans and Machines: How Artificial Intelligence Risks Conflicts of ... - Association of Equipment Manufacturers (AEM)


Embracing AI means we must mitigate risk to firms, industries, consumers and society – Yahoo Finance

According to a recent poll undertaken by the Certified Financial Planner Board of Standards, nearly 1 in 3 or 31% of investors would use artificial intelligence (AI) as their financial advisor.

For those unfamiliar, AI is commonly referred to as a catch-all for the set of technologies and designs that make AI possible. In its broadest sense, AI applies to any technique that enables computer software to mimic human intelligence.

The "artificial narrow intelligence" (ANI) system, which presently exists in commercial applications, is software that is based on computational statistics used to create models that can help make decisions by human beings or other machines at ever-increasing speeds.


Yet this application of ANI still lacks the cognitive abilities and other "human-like" attributes required to be classified as "artificial general intelligence", i.e., software that has cognitive abilities similar to humans, a "sense" of consciousness, and is typically equated to human-like ability in terms of learning, thinking and processing information for decision-making.

This ANI software is today embedded in a range of industries, firms and products and services, including online search engines, digital cameras, customer service interfaces, and recently, ChatGPT, an example of a large language model and generative AI making recent media headlines.


Our forthcoming research study, published by the Center for Growth and Opportunity at Utah State University, is focused on answering one key question: What governance approach will offer American society the most efficacious institutional framework to ensure that the potential negative technological risks associated with AI will be regulated and minimized, while simultaneously encouraging the development and implementation of those AI technological benefits for American society?


Given the present state of AI, our research leads us to conclude that, because American society has an important stake in the ongoing development and implementation of ANI across industries, and because of the limitations of public regulation and the "pacing problem" (the quickening pace of technological developments and the inability of governments to keep up with the dynamic state of new knowledge emerging about the capabilities of this technology), the answer is to embrace a flexible and adaptable meta-regulation.

This meta-regulation involves those activities occurring in a wider regulatory space under the auspices of a variety of institutions, including the state, the private sector, and public interest groups.

Meta-regulation addresses the tension of "social control", i.e., public regulation, of a still emerging technology, while encouraging the potential commercial benefits accruing from future AI technological innovation through private governance initiatives.

Furthermore, Congress, in its ongoing efforts to regulate AI, should recognize that private governance is a major component of addressing basic issues related to the future of U.S. AI development and implementation across industries.

Each industry will have unique issues related to commercializing ANI, and therefore will be in the ideal position to know the "best practices" to be instantiated in their standard-setting processes.

For example, in the digital media industry, major companies including Alphabet (Google AI Principles), Meta (Five Pillars of Responsible AI) and Microsoft (Microsoft Responsible AI Standard (v2)) have in recent years issued explicit private governance policies and/or principles of AI to be used in their business operations.

What is crucial for American consumers is that there are effective, operational company policies delineating ANI operating practices and performance, and clear, consumer accessible information disclosure on how well the firm is abiding by these industry ANI best practices.

In many cases, private governance, market-driven mechanisms will significantly assist in the meta-regulation of firm-level ANI, including company AI insurance liability, reputational effects generated by real-time social media coverage of firm behavior, and relevant stakeholder inquiries into firm behavior. Also, this results in positive or negative general media impacts on company financial performance.

One research approach to further effectively embracing private governance is "polycentric governance," a theoretical construct developed by Nobel Prize in Economic Science winner Elinor Ostrom.

Since governance in a democratic society requires a variety of tasks to be accomplished, each of which requires actions of different types of "decision centers," both public and non-public, the process of governance as a whole involves many different decision centers acting in interdependent ways. As it involves meta-regulation, the polycentric governance approach could be a valuable tool for evaluating the efficacy of the role of non-governmental institutions in AI governance models.

Private governance is the critical component of mitigating AI technological risk to firms, industries, consumers and American society. The challenge remains, however, whether there is a corpus of American technological leadership that recognizes, invests and continues to maintain (and improve) an effective regulatory equilibrium.

This regulatory equilibrium, between government-mandated social control and, for the immediate future, a responsible-innovation approach to ANI based in industry self-regulation, market forces and firm adherence to best practices, will provide for the still-to-come potential benefits accruing to American society from this revolutionary technology.

Thomas A. Hemphill is the David M. French Distinguished professor of strategy, innovation and public policy in the School of Management, University of Michigan-Flint.

Phil Longstreet is an associate professor of Management Information System in the School of Management, University of Michigan-Flint.


More here:

Embracing AI means we must mitigate risk to firms, industries, consumers and society - Yahoo Finance


Human-machine teams driven by AI are about to reshape warfare – Reuters

"Ghost", 24, a soldier with the 58th Independent Motorized Infantry Brigade of the Ukrainian Army, catches a drone while testing it so it can be used nearby, as Russia's invasion of Ukraine continues, near Bakhmut, Ukraine, November 25, 2022. REUTERS/Leah Millis /File Photo Acquire Licensing Rights

SYDNEY, Sept 8 (Reuters) - Some technology experts believe innovative commercial software developers now entering the arms market are challenging the dominance of the traditional defense industry, which produces big-ticket weapons, sometimes at glacial speed.

It is too early to say if big, human-crewed weapons like submarines or reconnaissance helicopters will go the way of the battleship, which was rendered obsolete with the rise of air power. But aerial, land and underwater robots, teamed with humans, are poised to play a major role in warfare.

Evidence of such change is already emerging from the war in Ukraine. There, even rudimentary teams of humans and machines operating without significant artificial-intelligence powered autonomy are reshaping the battlefield. Simple, remotely piloted drones have greatly improved the lethality of artillery, rockets and missiles in Ukraine, according to military analysts who study the conflict.

Kathleen Hicks, the U.S. deputy secretary of defense, said in an Aug. 28 speech at a conference on military technology in Washington that traditional military capabilities remain essential. But she noted that the Ukraine conflict has shown that emerging technology developed by commercial and non-traditional companies could be decisive in defending against modern military aggression.

A Reuters special report published today explores how automation powered by artificial intelligence is poised to revolutionize weapons, warfare and military power.

Both Russian and Ukrainian forces are integrating traditional weapons with AI, satellite imaging and communications, as well as smart and loitering munitions, according to a May report from the Special Competitive Studies Project, a non-partisan U.S. panel of experts. The battlefield is now a patchwork of deep trenches and bunkers where troops have been forced to go underground or huddle in cellars to survive, the report said.

Some military strategists have noted that in this conflict, attack and transport helicopters have become so vulnerable that they have been almost forced from the skies, their roles now increasingly handed over to drones.

"Uncrewed aerial systems have already taken crewed reconnaissance helicopters out of a lot of their missions," said Mick Ryan, a former Australian army major general who publishes regular commentaries on the conflict. "We are starting to see ground-based artillery observers replaced by drones. So, we are already starting to see some replacement."

Reporting by David Lague. Edited by Peter Hirschberg.

Our Standards: The Thomson Reuters Trust Principles.

Go here to see the original:

Human-machine teams driven by AI are about to reshape warfare - Reuters


ChatGPT Glossary: 41 AI Terms that Everyone Should Know – CNET

ChatGPT, the AI-chatbot from OpenAI, which has an uncanny ability to answer any question, was likely your first introduction to AI. From writing poems, resumes and fusion recipes, the power of ChatGPT has been compared to autocomplete on steroids.

But AI chatbots are only one part of the AI landscape. Sure, having ChatGPT help do your homework or having Midjourney create fascinating images of mechs based on country of origin is cool, but its potential could completely reshape economies. That potential could be worth $4.4 trillion to the global economy annually, according to McKinsey Global Institute, which is why you should expect to hear more and more about artificial intelligence.

As people become more accustomed to a world intertwined with AI, new terms are popping up everywhere. So whether you're trying to sound smart over drinks or impress in a job interview, here are some important AI terms you should know.

This glossary will continuously be updated.

Artificial general intelligence, or AGI: A concept that suggests a more advanced version of AI than we know today, one that can perform tasks much better than humans while also teaching and advancing its own capabilities.

AI ethics: Principles aimed at preventing AI from harming humans, achieved through means like determining how AI systems should collect data or deal with bias.

AI safety: An interdisciplinary field that's concerned with the long-term impacts of AI and how it could progress suddenly to a super intelligence that could be hostile to humans.

Algorithm: A series of instructions that allows a computer program to learn and analyze data in a particular way, such as recognizing patterns, to then learn from it and accomplish tasks on its own.

Alignment: Tweaking an AI to better produce the desired outcome. This can refer to anything from moderating content to maintaining positive interactions toward humans.

Anthropomorphism: When humans tend to give nonhuman objects humanlike characteristics. In AI, this can include believing a chatbot is more humanlike and aware than it actually is, like believing it's happy, sad or even sentient altogether.

Artificial intelligence, or AI: The use of technology to simulate human intelligence, either in computer programs or robotics. A field in computer science that aims to build systems that can perform human tasks.

Bias: In regards to large language models, errors resulting from the training data. This can result in falsely attributing certain characteristics to certain races or groups based on stereotypes.

Chatbot: A program that communicates with humans through text that simulates human language.

ChatGPT: An AI chatbot developed by OpenAI that uses large language model technology.

Cognitive computing: Another term for artificial intelligence.

Data augmentation: Remixing existing data or adding a more diverse set of data to train an AI.

Deep learning: A method of AI, and a subfield of machine learning, that uses multiple parameters to recognize complex patterns in pictures, sound and text. The process is inspired by the human brain and uses artificial neural networks to create patterns.

Diffusion: A method of machine learning that takes an existing piece of data, like a photo, and adds random noise. Diffusion models train their networks to re-engineer or recover that photo.
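
For a concrete sense of the "adds random noise" step, here is a minimal, illustrative Python sketch (not from the CNET glossary; the image shape and blend weights are placeholder assumptions). It shows only the forward, noise-adding half of diffusion; a diffusion model would then be trained to reverse it.

import numpy as np

def add_noise(image, noise_level, rng=np.random.default_rng(0)):
    # Forward diffusion step: blend the clean image with Gaussian noise.
    # noise_level near 0 keeps the image intact; near 1 it is mostly noise.
    noise = rng.normal(size=image.shape)
    return np.sqrt(1.0 - noise_level) * image + np.sqrt(noise_level) * noise

image = np.zeros((8, 8))                      # stand-in for a tiny grayscale image
noisy = add_noise(image, noise_level=0.3)
# A denoising network would be trained to recover `image` (or predict `noise`) from `noisy`.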

Emergent behavior: When an AI model exhibits unintended abilities.

End-to-end learning, or E2E: A deep learning process in which a model is instructed to perform a task from start to finish. It's not trained to accomplish a task sequentially but instead learns from the inputs and solves it all at once.

Ethical considerations: An awareness of the ethical implications of AI and issues related to privacy, data usage, fairness, misuse and other safety issues.

Foom: Also known as fast takeoff or hard takeoff. The concept that if someone builds an AGI, it might already be too late to save humanity.

Generative adversarial networks, or GANs: A generative AI model composed of two neural networks to generate new data: a generator and a discriminator. The generator creates new content, and the discriminator checks to see if it's authentic.

Generative AI: A content-generating technology that uses AI to create text, video, computer code or images. The AI is fed large amounts of training data, finds patterns to generate its own novel responses, which can sometimes be similar to the source material.

Google Bard: An AI chatbot by Google that functions similarly to ChatGPT but pulls information from the current web, whereas ChatGPT is limited to data until 2021 and isn't connected to the internet.

Guardrails: Policies and restrictions placed on AI models to ensure data is handled responsibly and that the model doesn't create disturbing content.

Hallucination: An incorrect response from an AI, often stated with confidence as if it were correct. The reasons for this aren't entirely known. For example, when asked "When did Leonardo da Vinci paint the Mona Lisa?" an AI chatbot may respond with the incorrect statement "Leonardo da Vinci painted the Mona Lisa in 1815," roughly 300 years after it was actually painted.

Large language model, or LLM: An AI model trained on mass amounts of text data to understand language and generate novel content in human-like language.

Machine learning, or ML: A component in AI that allows computers to learn and make better predictive outcomes without explicit programming. Can be coupled with training sets to generate new content.

Microsoft Bing: A search engine by Microsoft that can now use the technology powering ChatGPT to give AI-powered search results. It's similar to Google Bard in being connected to the internet.

Multimodal AI: A type of AI that can process multiple types of inputs, including text, images, videos and speech.

Natural language processing: A branch of AI that uses machine learning and deep learning to give computers the ability to understand human language, often using learning algorithms, statistical models and linguistic rules.

Neural network: A computational model that resembles the human brain's structure and is meant to recognize patterns in data. Consists of interconnected nodes, or neurons, that can recognize patterns and learn over time.
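
A minimal sketch in Python of what those interconnected nodes compute, assuming a tiny made-up two-layer network with random, untrained weights; real networks adjust these weights during training so the output becomes a useful prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: 4 input features -> 8 hidden "neurons" -> 1 output.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    hidden = np.maximum(0, x @ W1 + b1)            # ReLU activation in the hidden layer
    return 1 / (1 + np.exp(-(hidden @ W2 + b2)))   # sigmoid squashes the output into (0, 1)

print(forward(np.array([0.2, -1.0, 0.5, 0.3])))    # an (untrained) "prediction"
```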

Overfitting: An error in machine learning in which a model fits its training data too closely, so it can identify the specific examples in that data but fails to generalize to new data.
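
A small illustration of the idea in Python with NumPy: a very flexible model (here a high-degree polynomial, chosen purely for illustration) can match its ten training points almost perfectly yet do far worse on new points drawn from the same simple trend.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ten noisy training points drawn from a simple underlying trend (y = x).
x_train = np.linspace(0, 1, 10)
y_train = x_train + rng.normal(0, 0.1, size=10)

# A degree-9 polynomial is flexible enough to pass through every training point,
# but it wiggles wildly between them.
coeffs = np.polyfit(x_train, y_train, deg=9)

x_new = np.linspace(0, 1, 100)  # unseen points from the same trend
train_error = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
new_error = np.mean((np.polyval(coeffs, x_new) - x_new) ** 2)
print(train_error, new_error)   # training error is near zero; error on new data is typically much larger
```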

Parameters: Numerical values, learned during training, that give an LLM its structure and behavior and enable it to make predictions.

Prompt chaining: Feeding information from previous prompts and responses into later ones, so an AI's earlier interactions color its future responses.
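
One way to picture this is the sketch below, where ask_model is a hypothetical stand-in for whatever chatbot API is being used: each new prompt folds in the earlier exchange so the model can take previous turns into account.

```python
# Hypothetical stand-in for a call to some chatbot API.
def ask_model(prompt: str) -> str:
    return "..."  # the model's reply would go here

history = []  # running transcript of the conversation

def chained_ask(user_message: str) -> str:
    # Fold earlier turns into the new prompt so the model can "remember" them.
    context = "\n".join(f"{speaker}: {text}" for speaker, text in history)
    prompt = f"{context}\nUser: {user_message}\nAssistant:"
    reply = ask_model(prompt)
    history.append(("User", user_message))
    history.append(("Assistant", reply))
    return reply
```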

Stochastic parrot: An analogy of LLMs that illustrates that the software doesn't have a larger understanding of meaning behind language or the world around it, regardless of how convincing the output sounds. The phrase refers to how a parrot can mimic human words without understanding the meaning behind them.

Style transfer: The ability to adapt the visual style of one image to the content of another, allowing an AI to interpret the visual attributes of one image and apply them to a second. For example, re-creating a Rembrandt self-portrait in the style of Picasso.

Temperature: A parameter that controls how random a language model's output is. A higher temperature means the model takes more risks in its word choices.
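
A minimal sketch in Python with NumPy of how temperature changes the sampling of the next word, assuming a handful of made-up candidate words and scores: dividing the scores by a small temperature sharpens the distribution toward the top choice, while a large temperature flattens it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_word(words, scores, temperature):
    """Low temperature favors the top-scoring word; high temperature spreads
    probability across riskier choices."""
    scaled = np.array(scores) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(words, p=probs)

words, scores = ["cat", "dog", "quasar"], [2.0, 1.5, 0.1]
print(sample_next_word(words, scores, temperature=0.2))  # almost always "cat"
print(sample_next_word(words, scores, temperature=2.0))  # "quasar" becomes plausible
```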

Text-to-image generation: Creating images based on textual descriptions.

Training data: The datasets used to help AI models learn, including text, images, code and other data.

Transformer model: A neural network architecture and deep learning model that learns context by tracking relationships in data, like in sentences or parts of images. So, instead of analyzing a sentence one word at a time, it can look at the whole sentence and understand the context.
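
Here is a stripped-down sketch in Python with NumPy of the self-attention step at the heart of transformers, using a made-up four-token sequence: every token scores its relationship to every other token at once, then blends their representations according to those scores, rather than reading one word at a time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four tokens, each represented by an 8-dimensional embedding (made-up numbers).
x = rng.normal(size=(4, 8))

# Scaled dot-product self-attention: queries, keys and values are linear
# projections of the same token embeddings.
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

scores = Q @ K.T / np.sqrt(K.shape[-1])           # (4, 4) relationships between tokens
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)    # softmax over each row
attended = weights @ V                            # each token becomes a blend of all tokens
print(attended.shape)                             # (4, 8)
```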

Turing test: Named after famed mathematician and computer scientist Alan Turing, it tests a machine's ability to behave like a human. The machine passes if a human can't distinguish the machine's response from another human.

Weak AI, aka narrow AI: AI that's focused on a particular task and can't learn beyond its skill set. Most of today's AI is weak AI.

Zero-shot learning: A test in which a model must complete a task without being given the requisite training data. An example would be recognizing a lion while only being trained on tigers.

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.

More:

ChatGPT Glossary: 41 AI Terms that Everyone Should Know - CNET

Read More..

How to get the best out of Claude Pro as Anthropic increases access to 100k token model – CryptoSlate

Artificial intelligence startup Anthropic has launched its much-anticipated paid subscription service, Claude Pro, to offer more usage of its 100k-token conversational AI assistant, Claude.

The San Francisco-based company introduced the $20 per month service on Sept. 7 as a way for power users to get more productivity out of Claude's large language model capabilities through increased usage limits.

With the paid Claude Pro plan, subscribers can get five times more usage than Anthropic's free tier. This enables users to have more extended conversations with Claude by sending many more messages over an 8-hour period.

Founded by former OpenAI employees in 2021 after raising $124 million, Anthropic is focused on building safe artificial general intelligence (AGI). The founders were reportedly motivated by concerns about the safety and ethics of advanced AI systems.

Claude's UI, like ChatGPT's, is a simple chat interface. However, each Claude conversation can handle a context of over 25,000 words, while ChatGPT (on the GPT-4 model) is limited to around 2,500 words. The longer context window, officially 100,000 tokens, allows users to feed large batches of written information into Claude for it to analyze and give feedback on.
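
As a rough guide to what those numbers mean in practice, a common rule of thumb (an assumption, not Anthropic's tokenizer) is that an English word averages roughly 1.3 tokens, so a quick estimate can tell you whether a document is likely to fit in the context window.

```python
# Rough rule of thumb: about 1.3 tokens per English word on average.
# The true count varies with the tokenizer and the text itself.
TOKENS_PER_WORD = 1.3

def fits_in_context(word_count: int, context_tokens: int = 100_000) -> bool:
    """Estimate whether a document of `word_count` words fits in the context window."""
    estimated_tokens = int(word_count * TOKENS_PER_WORD)
    return estimated_tokens <= context_tokens

print(fits_in_context(8_000))   # a long research paper: easily fits
print(fits_in_context(90_000))  # a full-length book: likely too large
```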

For example, if a user is summarizing a research paper, they could ask Claude detailed questions about it and get paragraph-length summaries of critical sections. The extended usage limits in Claude Pro allow for deeper research tasks like this. Essentially, users can have a conversation with the information being analyzed, with Claude retaining tens of thousands of words of text in each research task.

Exact usage limits will vary based on factors like the length of messages and the size of file attachments. Anthropic says the average conversation length on its platform is around 200 sentences, for which Claude Pro would allow approximately 100 messages every 8 hours.

The subscription also provides priority access to Claude during high-traffic periods and early access to new features.

To maximize usage, Anthropic advises Claude Pro subscribers to:

The critical aspect is that Claude Pro's limitations appear based on token usage rather than message limits. ChatGPT's paid tier limits the number of messages per hour but not the length of messages. Anthropic's approach applies a more granular fair-usage policy, limiting the overall volume sent to its LLM rather than the number of queries.
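
Conceptually, the difference looks something like the sketch below. This is not Anthropic's actual policy or code, and the budget figure is made up; it simply contrasts capping total token volume in a rolling window with capping the number of messages.

```python
import time

WINDOW_SECONDS = 8 * 60 * 60
TOKEN_BUDGET = 200_000  # hypothetical figure, for illustration only

usage = []  # (timestamp, tokens) recorded for each message sent

def can_send(tokens_in_message):
    """Token-based limit: many short messages or a few very long ones draw on the same budget."""
    now = time.time()
    tokens_in_window = sum(t for ts, t in usage if now - ts < WINDOW_SECONDS)
    return tokens_in_window + tokens_in_message <= TOKEN_BUDGET

def record(tokens_in_message):
    usage.append((time.time(), tokens_in_message))

def can_send_message_limited(messages_per_hour_cap=50):
    """Message-count limit (ChatGPT-style): the length of each message doesn't matter."""
    now = time.time()
    recent_messages = sum(1 for ts, _ in usage if now - ts < 3600)
    return recent_messages < messages_per_hour_cap
```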

Anthropic says the limits are intended to make Claude freely available for many users to try while still providing increased capacity to paying power users.

The company first launched Claude in July 2022 as an AI assistant focused on harmless, honest, helpful conversations. Claude Pro is now available in the U.S. and U.K.

Disclaimer: Our writers' opinions are solely their own and do not reflect the opinion of CryptoSlate. None of the information you read on CryptoSlate should be taken as investment advice, nor does CryptoSlate endorse any project that may be mentioned or linked to in this article. Buying and trading cryptocurrencies should be considered a high-risk activity. Please do your own due diligence before taking any action related to content within this article. Finally, CryptoSlate takes no responsibility should you lose money trading cryptocurrencies.

See the rest here:

How to get the best out of Claude Pro as Anthropic increases access to 100k token model - CryptoSlate

Read More..

Can we govern AI before it's too late? – GZERO Media

That's the question I set out to answer in my latest Foreign Affairs deep dive, penned with one of the top minds on artificial intelligence in the world, Inflection AI CEO and Co-Founder Mustafa Suleyman.

Just a year ago, there wasn't a single world leader I'd meet who would bring up AI. Today, there isn't a single world leader who doesn't. In this short time, the explosive debut of generative AI systems like ChatGPT and Midjourney signaled the beginning of a new technological revolution that will remake politics, economies, and societies. For better and for worse.

As governments are starting to recognize, realizing AI's astonishing upside while containing its disruptive and destructive potential may be the greatest governance challenge humanity has ever faced. If governments don't get it right soon, it's possible they never will.

Why AI needs to be governed

First, a disclaimer: I'm an AI enthusiast. I believe AI will drive nothing less than a new globalization that will give billions of people access to world-leading intelligence, facilitate impossible-to-imagine scientific advances, and unleash extraordinary innovation, opportunity, and growth. Importantly, we're heading in this direction without policy intervention: The fundamental technologies are proven, the money is available, and the incentives are aligned for full-steam-ahead progress.

At the same time, artificial intelligence has the potential to cause unprecedented social, economic, political, and geopolitical disruption that upends our lives in lasting and irreversible ways.

In the nearest term, AI will be used to generate and spread toxic misinformation, eroding social trust and democracy; to surveil, manipulate, and subdue citizens, undermining individual and collective freedom; and to create powerful digital or physical weapons that threaten human lives. In the longer run, AI could also destroy millions of jobs, worsening existing inequalities and creating new ones; entrench discriminatory patterns and distort decision-making by amplifying bad information feedback loops; or spark unintended and uncontrollable military escalations that lead to war. Farther out on the horizon lurks the promise of artificial general intelligence (AGI), the still uncertain point where AI exceeds human performance at any given task, and the existential (albeit speculative) peril that an AGI could become self-directed, self-replicating, and self-improving beyond human control.

Experts disagree on which of these risks are more important or urgent. Some lie awake at night fearing the prospect of a superpowerful AGI turning humans into slaves. To me, the real catastrophic threat is humans using ever more powerful and available AI tools for malicious or unintended purposes. But it doesn't really matter: Given how little we know about what AI might be able to do in the future (what kinds of threats it could pose, how severe and irreversible its damages could be), we should prepare for the worst while hoping for (and working toward) the best.

What makes AI so hard to govern

AI can't be governed like any previous technology because it's unlike any previous technology. It doesn't just pose policy challenges; its unique features also make solving those challenges progressively harder. That is the AI power paradox.

For starters, the pace of AI progress is hyper-evolutionary. Take Moore's Law, which has successfully predicted the doubling of computing power every two years. The new wave of AI makes that rate of progress seem quaint. The amount of computation used to train the most powerful AI models has increased by a factor of 10 every year for the last 10 years. Processing that once took weeks now happens in seconds. Yesterday's cutting-edge capabilities are running on smaller, cheaper, and more accessible systems today.
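
To see how stark that difference is, here is a quick back-of-the-envelope comparison using the growth rates stated above.

```python
years = 10

moores_law_growth = 2 ** (years / 2)   # doubling every two years: ~32x over a decade
training_compute_growth = 10 ** years  # 10x per year: 10,000,000,000x over a decade

print(f"Moore's Law over {years} years: ~{moores_law_growth:.0f}x")
print(f"Frontier training compute over {years} years: ~{training_compute_growth:,}x")
print(f"Ratio: ~{training_compute_growth / moores_law_growth:,.0f}x faster")
```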

As their enormous benefits become self-evident, AI systems will only grow bigger, cheaper, and more ubiquitous. And with each new order of magnitude, unexpected capabilities will emerge. Few predicted that training on raw text would enable large language models to produce coherent, novel, and even creative sentences. Fewer still expected language models to be able to compose music or solve scientific problems, as some now can. Soon, AI developers will likely succeed in creating systems capable of quasi-autonomy (i.e., able to achieve concrete goals with minimal human oversight) and self-improvement, a critical juncture that should give everyone pause.

Then there's the ease of AI proliferation. As with any software, AI algorithms are much easier and cheaper to copy and share (or steal) than physical assets. Although the most powerful models still require sophisticated hardware to work, midrange versions can run on computers that can be rented for a few dollars an hour. Soon, such models will run on smartphones. No technology this powerful has become so accessible, so widely, so quickly. All this plays out on a global field: Once released, AI models can and will be everywhere. All it takes is one malign or breakout model to wreak worldwide havoc.

AI also differs from older technologies in that almost all of it can be characterized as general purpose and dual use (i.e., having both military and civilian applications). An AI application built to diagnose diseases might be able to create and weaponize a new one. The boundaries between the safely civilian and the militarily destructive are inherently blurred. This makes AI more than just software development as usual; it is an entirely new means of projecting power.

As such, its advancement is being propelled by irresistible incentives. Whether for its repressive capabilities, economic potential, or military advantage, AI supremacy is a strategic objective of every government and company with the resources to compete. At the end of the Cold War, powerful countries might have cooperated to arrest a potentially destabilizing technological arms race. But today's tense geopolitical environment makes such cooperation much harder. From the vantage point of the world's two superpowers, the United States and China, the risk that the other side will gain an edge in AI is greater than any theoretical risk the technology might pose to society or to their own domestic political authority. This zero-sum dynamic means that Beijing and Washington are focused on accelerating AI development, rather than slowing it down.

But even if the world's powers were inclined to contain AI, there's no guarantee they'd be able to, because, like most of the digital world, every aspect of AI is presently controlled by the private sector. I call this arrangement "technopolar," with technology companies effectively exerting sovereignty over the rules that apply to their digital fiefdoms at the expense of governments. The handful of large tech firms that currently control AI may retain their advantage for the foreseeable future, or they may be eclipsed by a raft of smaller players as low barriers to entry, open-source development, and near-zero marginal costs lead to uncontrolled proliferation of AI. Either way, AI's trajectory will be largely determined not by governments but by private businesses and individual technologists who have little incentive to self-regulate.

Any one of these features would strain traditional governance models; all of them together render these models inadequate and make the challenge of governing AI unlike anything governments have faced before.

The technoprudential" imperative

For AI governance to work, it must be tailored to the specific nature of the technology and the unique challenges it poses. But because the evolution, uses, and risks of AI are inherently unpredictable, AI governance can't be fully specified at the outset. Instead, it must be as innovative, adaptive, and evolutionary as the technology it seeks to govern.

Our proposal? Technoprudentialism. That's a big word, but essentially it's about governing AI much in the same way that we govern global finance. The idea is that we need a system to identify and mitigate risks to global stability posed by AI before they occur, without choking off innovation and the opportunities that flow from it, and without getting bogged down by everyday politics and geopolitics. In practice, technoprudentialism requires the creation of multiple complementary governance regimes (each with different mandates, levers, and participants) to address the various aspects of AI that could threaten geopolitical stability, guided by common principles that reflect AI's unique features.

Mustafa and I argue that AI governance needs to be precautionary, agile, inclusive, impermeable, and targeted. Built atop these principles should be a minimum of three AI governance regimes: an Intergovernmental Panel on Artificial Intelligence for establishing facts and advising governments on the risks posed by AI, an arms control-style mechanism for preventing an all-out arms race between them, and a Geotechnology Stability Board for managing the disruptive forces of a technology unlike anything the world has seen.

The 21st century will throw up few challenges as daunting or opportunities as promising as those presented by AI. Whether our future is defined by the former or the latter depends on what policymakers do next.

See the original post here:

Can we govern AI before it's too late? - GZERO Media

Read More..