Suppose I told you that in 10 years, the world as you know it will be over. You will live in a sort of paradise. You won't get sick, or age, or die. Eternal life will be yours! Even better, your mind will be blissfully free of uncertainty: you'll have access to perfect knowledge. Oh, and you'll no longer be stuck on Earth. Instead, you can live up in the heavens.
If I told you all this, would you assume that I was a religious preacher or an AI researcher?
Either one would be a pretty solid guess.
The more you listen to Silicon Valley's discourse around AI, the more you hear echoes of religion. That's because a lot of the excitement about building a superintelligent machine comes down to recycled religious ideas. Most secular technologists who are building AI just don't recognize that.
These technologists propose cheating death by uploading our minds to the cloud, where we can live digitally for all eternity. They talk about AI as a decision-making agent that can judge with mathematical certainty what's optimal and what's not. And they envision artificial general intelligence (AGI), a hypothetical system that can match human problem-solving abilities across many domains, as an endeavor that guarantees human salvation if it goes well, even as it spells doom if it goes badly.
These visions are almost identical to the visions of Christian eschatology, the branch of theology that deals with the end times or the final destiny of humanity.
Christian eschatology tells us that we're all headed toward the four last things: death, judgment, and heaven or hell. Although everyone who's ever lived so far has died, we'll be resurrected after the second coming of Christ to find out where we'll live for all eternity. Our souls will face a final judgment, care of God, the perfect decision-maker. That will guarantee us heaven if it goes well, but hell if it goes badly.
Five years ago, when I began attending conferences in Silicon Valley and first started to notice parallels like these between religion talk and AI talk, I figured there was a simple psychological explanation. Both were a response to core human anxieties: our mortality; the difficulty of judging whether we're doing right or wrong; the unknowability of our life's meaning and ultimate place in this universe or the next one. Religious thinkers and AI thinkers had simply stumbled upon similar answers to the questions that plague us all.
So I was surprised to learn that the connection goes much deeper.
"The intertwining of religion and technology is centuries old, despite the people who'll tell you that science is value-neutral and divorced from things like religion," said Robert Geraci, a professor of religious studies at Manhattan College and author of Apocalyptic AI. "That's simply not true. It never has been true."
In fact, historians tracing the influence of religious ideas contend that we can draw a straight line from Christian theologians in the Middle Ages to the father of empiricism in the Renaissance to the futurist Ray Kurzweil to the tech heavyweights hes influenced in Silicon Valley.
Occasionally, someone there still dimly senses the parallels. "Sometimes I think a lot of the breathless enthusiasm for AGI is misplaced religious impulses from people brought up in a secular culture," Jack Clark, co-founder of the AI safety company Anthropic, mused on Twitter in March.
Mostly, though, the figures spouting a vision of AGI as a kind of techno-eschatology (from Sam Altman, the CEO of ChatGPT-maker OpenAI, to Elon Musk, who wants to link your brain to computers) express their ideas in secular language. They're either unaware or unwilling to admit that the vision they're selling derives much of its power from the fact that it's plugging into age-old religious ideas.
But it's important to know where these ideas come from. Not because "religious" is somehow pejorative; just because ideas are religious doesn't mean there's something wrong with them (the opposite is often true). Instead, we should understand the history of these ideas (of virtual afterlife as a mode of salvation, say, or moral progress understood as technological progress) so we see that they're not immutable or inevitable; certain people came up with them at certain times to serve certain purposes, but there are other ideas out there if we want them. We don't have to fall prey to the danger of the single story.
"We have to be careful with what narratives we buy into," said Elke Schwarz, a political theorist at Queen Mary University of London who studies the ethics of military AI. "Whenever we talk about something religious, there's something sacred at play. Having something that's sacred can enable harm, because if something is sacred it's worth doing the worst things for."
In the Abrahamic religions that shaped the West, it all goes back to shame.
Remember what happens in the book of Genesis? When Adam and Eve eat from the tree of knowledge, God expels them from the garden of Eden and condemns them to all the indignities of flesh-and-blood creatures: toil and pain, birth and death. Humankind is never the same after that fall from grace. Before the sin, we were perfect creatures made in the image of God; now we're miserable meat sacks.
But in the Middle Ages, Christian thinkers developed a radical idea, as the historian David Noble explains in his book The Religion of Technology. What if tech could help us restore humanity to the perfection of Adam before the fall?
The influential ninth-century philosopher John Scotus Eriugena, for example, insisted that part of what it meant for Adam to be formed in God's image was that he was a creator, a maker. So if we wanted to restore humanity to the God-like perfection of Adam prior to his fall, we'd have to lean into that aspect of ourselves. Eriugena wrote that the mechanical arts (a.k.a. technology) were "man's links with the Divine, their cultivation a means to salvation."
This idea took off in medieval monasteries, where the motto ora et labora (prayer and work) began to circulate. Even in the midst of the so-called Dark Ages, some of these monasteries became hotbeds of engineering, producing inventions like the first known tidal-powered water wheel and impact-drilled well. Catholics became known as innovators; to this day, engineers have four patron saints in the religion. There's a reason why some say the Catholic Church was the Silicon Valley of the Middle Ages: It was responsible for everything from metallurgy, mills, and musical notation to the wide-scale adoption of clocks and the printing press, as I noted in a 2018 Atlantic article.
This wasn't tech for tech's sake, or for profit's sake. Instead, tech progress was synonymous with moral progress. By recovering humanity's original perfection, we could usher in the kingdom of God. As Noble writes, "Technology had come to be identified with transcendence, implicated as never before in the Christian idea of redemption."
The medieval identification of tech progress with moral progress shaped successive generations of Christian thinkers all the way into modernity. A pair of Bacons illustrates how the same core belief that tech would accomplish redemption influenced both religious traditionalists and those who adopted a scientific worldview.
In the 13th century, the alchemist Roger Bacon, taking a cue from biblical prophecies, sought to create an elixir of life that could achieve something like the Resurrection as the apostle Paul described it. The elixir, Bacon hoped, would give humans not just immortality, but also magical abilities like traveling at the speed of thought. Then in the 16th century, Francis Bacon (no relation) came along. Superficially he seemed very different from his predecessor (he critiqued alchemy, considering it unscientific), yet he prophesied that we'd one day use tech to overcome our mortality "for the glory of the Creator and the relief of man's estate."
By the Renaissance, Europeans dared to dream that we could remake ourselves in the image of God not only by inching toward immortality, but also by creating consciousness out of inanimate matter.
"The possibility to make new life is, other than defeating death, the ultimate power," Schwarz pointed out.
Christian engineers created automata: wooden robots that could move around and mouth prayers. Muslims were rumored to create mechanical heads that could talk like oracles. And Jewish folktales spread about rabbis who brought to life clay figures, called golems, by permuting language in magical ways. In the stories, the golem sometimes offers salvation by saving the Jewish community from persecution. But other times, the golem goes rogue, killing people and using its powers for evil.
If all of this is sounding distinctly familiar, well, it should. The golem idea has been cited in works on AI risk, like the 1964 book God & Golem, Inc. by mathematician and philosopher Norbert Wiener. You hear the same anxieties today in the slew of open letters released by technologists, warning that AGI will bring upon us either salvation or doom.
Reading these statements, you might well ask: Why would we even want to create AGI, if AGI threatens doom as much as it promises salvation? Why not just limit ourselves to creating narrower AI (which could already work wonders in applications like curing diseases) and stick with that for a while?
For an answer to that, come with me on one more romp through history, and we'll start to see how the recent rise of three intertwined movements has molded Silicon Valley's visions for AI.
A lot of people assume that when Charles Darwin published his theory of evolution in 1859, all religious thinkers instantly saw it as a horrifying, heretical threat, one that dethroned humans as God's most godly creations. But some Christian thinkers embraced it as gorgeous new garb for the old spiritual prophecies. After all, religious ideas never really die; they just put on new clothes.
A prime example was Pierre Teilhard de Chardin, a French Jesuit priest who also studied paleontology in the early 1900s. He believed that human evolution, nudged along with tech, was actually the vehicle for bringing about the kingdom of God, and that the melding of humans and machines would lead to an explosion of intelligence, which he dubbed the omega point. Our consciousness would become a state of super-consciousness where we merge with the divine and become a new species.
Teilhard influenced his pal Julian Huxley, an evolutionary biologist who was president of both the British Humanist Association and the British Eugenics Society, as author Meghan O'Gieblyn documents in her 2021 book God, Human, Animal, Machine. It was Huxley who popularized Teilhard's idea that we should use tech to evolve our species, calling it "transhumanism."
That, in turn, influenced the futurist Ray Kurzweil, who made basically the same prediction as Teilhard: We're approaching a time when human intelligence merges with machine intelligence, becoming unbelievably powerful. Only instead of calling it the omega point, Kurzweil rebranded it as "the singularity."
"The human species, along with the computational technology it created, will be able to solve age-old problems and will be in a position to change the nature of mortality in a postbiological future," wrote Kurzweil in his 1999 national bestseller The Age of Spiritual Machines. (Strong New Testament vibes there. Per the book of Revelation: "Death shall be no more, neither shall there be mourning nor crying nor pain any more, for the former things have passed away.")
Kurzweil has copped to the spiritual parallels, and so have those who've formed explicitly religious movements around worshiping AI or using AI to move humanity toward godliness, from Martine Rothblatt's Terasem movement to the Mormon Transhumanist Association to Anthony Levandowski's short-lived Way of the Future church. But many others, such as Oxford philosopher Nick Bostrom, insist that unlike religion, transhumanism relies on "critical reason and our best available scientific evidence."
These days, transhumanism has a sibling, another movement that was born in Oxford and caught fire in Silicon Valley: effective altruism (EA), which aims to figure out how to do the most good possible for the most people. Effective altruists also say their approach is rooted in secular reason and evidence.
Yet EA actually mirrors religion in many ways: functionally (it brings together a community built around a shared vision of moral life), structurally (it's got a hierarchy of prophet-leaders, canonical texts, holidays, and rituals), and aesthetically (it promotes tithing and favors asceticism). Most importantly for our purposes, it offers an eschatology.
EA's eschatology comes in the form of its most controversial idea, longtermism, which Musk has described as "a close match for my philosophy." It argues that the best way to help the most people is to focus on ensuring that humanity will survive far into the future (as in, millions of years from now), since many more billions of people could exist in the future than in the present, assuming our species doesn't go extinct first.
And here's where we start to get the answer to our question about why technologists are set on building AGI.
To effective altruists and longtermists, just sticking with narrow AI is not an option. Take Will MacAskill, the Oxford philosopher known as the reluctant prophet of effective altruism and longtermism. In his 2022 book What We Owe the Future, he explains why he thinks a plateauing of technological advancement is unacceptable. A period of stagnation, he writes, "could increase the risks of extinction and permanent collapse."
He cites his colleague Toby Ord, who estimates that the probability of human extinction through risks like rogue AI and engineered pandemics over the next century is one in six: Russian roulette. Another fellow traveler in EA, Holden Karnofsky, likewise argues that we're living at the "hinge of history" or the "most important century," a singular time in the story of humanity when we could either flourish like never before or bring about our own extinction. MacAskill, like Musk, suggests in his book that a good way to avoid extinction is to settle on other planets so we aren't keeping all our eggs in one basket.
But that's only half of MacAskill's moral case for space settlement. The other half is that we should be trying to make future human civilization as big and utopian as possible. As MacAskill's Oxford colleague Bostrom has argued, the colonization of the universe would give us the area and resources with which to run gargantuan numbers of digital simulations of humans living happy lives. The more space, the more happy (digital) humans! This is where the vast majority of moral value lies: not in the present on Earth, but in the future, in heaven. Sorry, I meant in the virtual afterlife.
When we put all these ideas together and boil them down, we get this basic proposition: Superintelligent AI is coming; it will bring either our salvation or our doom; so we'd better build it, and build it right, to secure humanity's glorious eternal future.
Any student of religion will immediately recognize this for what it is: apocalyptic logic.
Transhumanists, effective altruists, and longtermists have inherited the view that the end times are nigh and that technological progress is our best shot at moral progress. For people operating within this logic, it seems natural to pursue AGI. Even though they view AGI as a top existential risk, they believe we can't afford not to build it, given its potential to catapult humanity out of its precarious earthbound adolescence (which will surely end any minute!) and into a flourishing interstellar adulthood (so many happy people, so much moral value!). Of course we ought to march forward technologically, because that means marching forward morally!
But is this rooted in reason and evidence? Or is it rooted in dogma?
The hidden premise here is technological determinism, with a side dash of geopolitics. Even if you and I don't create terrifyingly powerful AGI, the thinking goes, somebody else or some other country will, so why stop ourselves from getting in on the action? OpenAI's Altman exemplifies the belief that tech will inevitably march forward. He wrote on his blog in 2017 that "unless we destroy ourselves first, superhuman AI is going to happen." Why? "As we have learned, scientific advancement eventually happens if the laws of physics do not prevent it."
Have we learned that? I see no evidence to suggest that anything that can be invented necessarily will be invented. (As AI Impacts lead researcher Katja Grace memorably wrote, "Consider a machine that sprays shit in your eyes. We can technologically do that, but probably nobody has ever built that machine.") It seems more likely that people tend to pursue innovations when there are very powerful economic, social, or ideological pressures pushing them to.
In the case of the AGI fever that's gripped Silicon Valley, recycled religious ideas in the garb of transhumanism, effective altruism, and longtermism have supplied the social and ideological pressures. As for the economic, profit-making pressure, well, that's always operative in Silicon Valley.
Now, 61 percent of Americans believe AI may threaten human civilization, and that belief is especially strong among evangelical Christians, according to a Reuters/Ipsos poll in May. To Geraci, the religious studies scholar, that doesn't come as a surprise. Apocalyptic logic, he noted, is "very, very, very powerful in American Protestant Christianity," to the point that 4 in 10 US adults currently believe that humanity is living in the end times.
Unfortunately, apocalyptic logic tends to breed dangerous fanaticism. In the Middle Ages, when false messiahs arose, people gave up their worldly possessions to follow their prophet. Today, with talk of AGI doom suffusing the media, true believers drop out of college to go work on AI safety. The doom-or-salvation, heaven-or-hell logic pushes people to take big risks: to ante up and go all in.
In an interview with me last year, MacAskill disavowed extreme gambles. He told me he imagines that a certain type of Silicon Valley tech bro, thinking there's a 5 percent chance of dying from some AGI catastrophe and a 10 percent chance AGI ushers in a blissful utopia, would be willing to take those odds and rush ahead with building AGI.
"That's not the sort of person I want building AGI, because they are not responsive to the moral issues," MacAskill told me. "Maybe that means we have to delay the singularity in order to make it safer. Maybe that means it doesn't come in my lifetime. That would be an enormous sacrifice."
When MacAskill told me this, I pictured a Moses figure, looking out over the promised land but knowing he would not reach it. The longtermist vision seemed to require of him a brutal faith: You personally will not be saved, but your spiritual descendants will.
There's nothing inherently wrong with believing that tech can radically improve humanity's lot. In many ways, it obviously already has.
"Technology is not the problem," Ilia Delio, a Franciscan sister who holds two PhDs and a chair in theology at Villanova University, told me. In fact, Delio is comfortable with the idea that we're already in a new stage of evolution, shifting from Homo sapiens to "techno sapiens." She thinks we should be open-minded about proactively evolving our species with tech's help.
But she's also clear that we need to be explicit about which values are shaping our tech so that we can develop the technology "with purpose and with ethical boundaries," she said. "Otherwise, technology is blind and potentially dangerous."
Geraci agrees. "If a ton of people in Silicon Valley are going, 'Hey, I'm in for this technology because it's going to make me immortal,' that's a little bit terrifying," he told me. "But if somebody says, 'I'm in for this technology because I think we're going to be able to use it to solve world hunger,' those are two very different motives. It would impact the types of products you try to design, the population for which you are designing, and the way you try to deploy it in the world around you."
Part of making deliberate decisions about which values animate tech is also being keenly aware of who gets the power to decide. According to Schwarz, the architects of artificial intelligence have sold us on a vision of necessary tech progress with AI and set themselves up as the only experts on it, which makes them enormously powerful, arguably more powerful than our democratically elected officials.
"The idea that developing AGI is a kind of natural law becomes an ordering principle, and that ordering principle is political. It gives political power to some and a lot less to most others," Schwarz said. "It's so strange to me to say, 'We have to be really careful with AGI,' rather than saying, 'We don't need AGI; this is not on the table.' But we're already at a point when power is consolidated in a way that doesn't even give us the option to collectively suggest that AGI should not be pursued."
We got to this point in large part because, for the past thousand years, the West has fallen prey to the danger of the single story: the story equating tech progress with moral progress that we inherited from medieval religious thinkers.
"It's the one narrative we have," Delio said. That narrative has made us inclined to defer to technologists (who, in the past, were also spiritual authorities) on the values and assumptions being baked into their products.
What are the alternatives? "If another narrative were to say, 'Just the dynamism of being alive is itself the goal,' then we might have totally different aspirations for technology," Delio added. "But we don't have that narrative! Our dominant narrative is to create, invent, make, and to have that change us."
We need to decide what kind of salvation we want. If we're generating our enthusiasm for AI through visions of transcending our earthbound limits and our meat-sack mortality, that will create one kind of societal outcome. But if we commit to using tech to improve the well-being of this world and these bodies, we can have a different outcome. We can, as Noble put it, begin to direct our astonishing capabilities "toward more worldly and humane ends."