“I love this house, but sometimes it’s a sad place,” he said, while we looked at the pictures. “Because she loved being here and isn’t here.”
The sun had almost set, and Hinton turned on a little light over his desk. He closed the computer and pushed his glasses up on his nose. He squared up his shoulders, returning to the present.
“I wanted you to know about Roz and Jackie because they’re an important part of my life,” he said. “But, actually, it’s also quite relevant to artificial intelligence. There are two approaches to A.I. There’s denial, and there’s stoicism. Everybody’s first reaction to A.I. is ‘We’ve got to stop this.’ Just like everybody’s first reaction to cancer is ‘How are we going to cut it out?’ ” But it was important to recognize when cutting it out was just a fantasy.
He sighed. “We can’t be in denial,” he said. “We have to be real. We need to think, ‘How do we make it not as awful for humanity as it might be?’ ”
How useful, or how dangerous, will A.I. turn out to be? No one knows for sure, in part because neural nets are so strange. In the twentieth century, many researchers wanted to build computers that mimicked brains. But, although neural nets like OpenAI’s GPT models are brainlike in that they involve billions of artificial neurons, they’re actually profoundly different from biological brains. Today’s A.I.s are based in the cloud and housed in data centers that use power on an industrial scale. Clueless in some ways and savantlike in others, they reason for millions of users, but only when prompted. They are not alive. They have probably passed the Turing test: the long-heralded standard, established by the computing pioneer Alan Turing, which held that any computer that could persuasively imitate a human in conversation could be said, reasonably, to think. And yet our intuitions may tell us that nothing resident in a browser tab could really be thinking in the way we do. The systems force us to ask if our kind of thinking is the only kind that counts.
During his last few years at Google, Hinton focussed his efforts on creating more traditionally mindlike artificial intelligence using hardware that more closely emulated the brain. In today’s A.I.s, the weights of the connections among the artificial neurons are stored numerically; it’s as though the brain keeps records about itself. In your actual, analog brain, however, the weights are built into the physical connections between neurons. Hinton worked to create an artificial version of this system using specialized computer chips.
“If you could do it, it would be amazing,” he told me. The chips would be able to learn by varying their conductances. Because the weights would be integrated into the hardware, it would be impossible to copy them from one machine to another; each artificial intelligence would have to learn on its own. “They would have to go to school,” he said. “But you would go from using a megawatt to thirty watts.” As he spoke, he leaned forward, his eyes boring into mine; I got a glimpse of Hinton the evangelist. Because the knowledge gained by each A.I. would be lost when it was disassembled, he called the approach “mortal computing.” “We’d give up on immortality,” he said. “In literature, you give up being a god for the woman you love, right? In this case, we’d get something far more important, which is energy efficiency.” Among other things, energy efficiency encourages individuality: because a human brain can run on oatmeal, the world can support billions of brains, all different. And each brain can learn continuously, rather than being trained once, then pushed out into the world.
As a scientific enterprise, mortal A.I. might bring us closer to replicating our own brains. But Hinton has come to think, regretfully, that digital intelligence might be more powerful. In analog intelligence, “if the brain dies, the knowledge dies,” he said. By contrast, in digital intelligence, “if a particular computer dies, those same connection strengths can be used on another computer. And, even if all the digital computers died, if you’d stored the connection strengths somewhere you could then just make another digital computer and run the same weights on that other digital computer.” Ten thousand neural nets can learn ten thousand different things at the same time, then share what they’ve learned. This combination of immortality and replicability, he says, suggests that “we should be concerned about digital intelligence taking over from biological intelligence.”
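The replicability Hinton describes can be made concrete with a toy sketch. This is not his actual system; the one-neuron “network,” its weights, and its input are invented for illustration. The point is only that digital connection strengths are just numbers, so they can be serialized, stored, and restored on a different machine, and the restored copy behaves identically:

```python
import json
import math

def forward(weights, x):
    # A one-neuron "network": weighted sum of inputs, squashed by a sigmoid.
    z = sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))

# Connection strengths "learned" on machine A -- just a list of numbers.
weights_a = [0.5, -1.2, 0.8]

# Because the weights are digital, they survive the machine: serialize them,
# store them anywhere, and restore them bit-for-bit on machine B.
saved = json.dumps(weights_a)
weights_b = json.loads(saved)

x = [1.0, 0.5, 2.0]
print(forward(weights_a, x) == forward(weights_b, x))  # the copy behaves identically
```

An analog, “mortal” intelligence has no analogue of `json.dumps` here: its weights are physical conductances in a particular chip, so the knowledge cannot outlive the device that learned it.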
How should we describe the mental life of a digital intelligence without a mortal body or an individual identity? In recent months, some A.I. researchers have taken to calling GPT a “reasoning engine”: a way, perhaps, of sliding out from under the weight of the word “thinking,” which we struggle to define. “People blame us for using those words: thinking, knowing, understanding, deciding, and so on,” Bengio told me. “But even though we don’t have a complete understanding of the meaning of those words, they’ve been very powerful ways of creating analogies that help us understand what we’re doing. It’s helped us a lot to talk about imagination, attention, planning, intuition as a tool to clarify and explore.” In Bengio’s view, a lot of what we’ve been doing is solving the intuition aspect of the mind. Intuitions might be understood as thoughts that we can’t explain: our minds generate them for us, unconsciously, by making connections between what we’re encountering in the present and our past experiences. We tend to prize reason over intuition, but Hinton believes that we are more intuitive than we acknowledge. “For years, symbolic-A.I. people said our true nature is, we’re reasoning machines,” he told me. “I think that’s just nonsense. Our true nature is, we’re analogy machines, with a little bit of reasoning built on top, to notice when the analogies are giving us the wrong answers, and correct them.”
On the whole, current A.I. technology is talky and cerebral: it stumbles at the borders of the physical. “Any teen-ager can learn to drive a car in twenty hours of practice, with hardly any supervision,” LeCun told me. “Any cat can jump on a series of pieces of furniture and get to the top of some shelf. We don’t have any A.I. systems coming anywhere close to doing these things today, except self-driving cars,” and they are over-engineered, requiring “mapping the whole city, hundreds of engineers, hundreds of thousands of hours of training.” Solving the wriggly problems of physical intuition will be the big challenge of the next decade, LeCun said. Still, the basic idea is simple: if neurons can do it, then so can neural nets.
Source:
Why the Godfather of A.I. Fears What He's Built - The New Yorker