Q&A with Ray Kurzweil about nanobots and AI-human mind-melds – The Boston Globe

You may find a lot of this hard to believe. I know I do. But it is not easy to dismiss Kurzweil, 76, as just another hand-wavy tech hype man. He has been working in AI since 1963, probably longer than anyone else alive today, and has developed several landmark technologies. In 1965, when he was a teenager, he got a computer to compose music, a feat that landed him on national TV and earned him a meeting with President Lyndon Johnson. He went on to invent a text-to-speech reading machine for blind people, an early music synthesizer, and speech-recognition tools. For the past decade, he's been the chief futurist at Google, where today he has the job title of principal researcher and AI visionary.

Every few years, Kurzweil unspools his ideas and defends his predictions in a new book that is rich with footnotes, charts, and carefully honed arguments. His most recent book, The Singularity Is Nearer: When We Merge With AI, is no exception. But it did not persuade me that his AI-maximalist vision is coming close to fruition or that it would be desirable.

My interview with Kurzweil has been edited and condensed.

You say in the book that these are the most exciting and momentous years in all of history. Why is that?

There's a graph that is really behind it. It shows the exponential progress in computation from 1939 to 2023. We've gone from computers performing 0.000007 calculations per second per constant [inflation-adjusted] dollar to 130 billion calculations per second per constant dollar. And then recently Nvidia came out with a chip with half a trillion calculations per second. That represents a 75 quadrillionfold increase in the amount of computation you get for the same amount of money. And that's why we're seeing large language models now. If you look at the progress just in the last two years, it's been amazing, and it's going to continue at that pace.
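As a quick sanity check on those figures, here is a rough back-of-envelope calculation using only the numbers quoted above; reading the Nvidia figure as also being per constant dollar is an assumption, since the interview does not say so explicitly.

```python
# Values as quoted in the interview, not independently verified.
calc_1939 = 0.000007     # calculations per second per constant dollar, 1939
calc_2023 = 130e9        # calculations per second per constant dollar, 2023
calc_nvidia = 0.5e12     # "half a trillion calculations per second," assumed per constant dollar

print(f"1939 to 2023:        {calc_2023 / calc_1939:.1e}x")   # ~1.9e16, about 19 quadrillionfold
print(f"1939 to Nvidia chip: {calc_nvidia / calc_1939:.1e}x") # ~7.1e16, close to the cited 75 quadrillionfold
```

Under that reading, the 75 quadrillionfold increase roughly lines up with the Nvidia figure rather than with the 2023 per-dollar figure.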

Artificial general intelligence will be able to do anything a human can do, at the best level that a human can do. And when it actually goes inside our body and brain, which will happen in the 2030s, we can harness that and make ourselves smarter. One of the implications is that we're going to be able to make fantastic progress in coming up with cures for diseases.

I see how AI could give our civilization greater intelligence to solve big problems like finding new medical cures. But I am less sure that a lot of individual people will want so much more intelligence in their daily lives that they'll implant computers inside their bodies. Do you really think that access to more intelligence is fundamentally what people need most? I would suggest we really need more compassion, more forgiveness, more equanimity.

I think that also comes from intelligence.

And people don't necessarily say they want more intelligence, but when it actually [becomes available], they do want it. Take the fact that everybody has a smartphone: if you had described it to people who came before by saying "Everybody's got to carry this device around" and tried to describe what it does, relatively few people would have voted for that. Yet everybody has a cellphone. So now if you go around and say, "Would you want to put something that goes through your bloodstream and develop something in your brain that would talk to the web automatically?" people would say, "There's no way I'd want to do that." But when it actually happens, and people who do that can cure diseases and can be much smarter in conversation (you'll have a lot more things in your mind that can pop up when a situation calls for it), people definitely will do it, regardless of what they think about it right now.

The intelligence we get from having smartphones at our fingertips has also come with the downsides of distraction, solipsism, and other social trade-offs. Wouldn't those only be magnified with vastly more information at our disposal?

Well, we're definitely going to have disagreements about things, and popular political figures that people don't like, and it's not going to solve all of our problems. But fundamentally, more intelligence is better. That's where the evolution of humans has gone, and that's why we create machines that make us smarter. And yes, there are always problems and things that humans can do that wouldn't otherwise be feasible that might be negative. But ultimately we're much happier and have new opportunities because of making ourselves smarter.

I question your assumption that exponential rates of improvement in computing and related technologies will necessarily continue. I think it's plausible that progress slows. GPT-4 inhaled essentially the entire internet but still has a limited understanding of the world. Where is a larger corpus of text going to come from that has a substantially richer representation of the world? And what about the energy consumption of all this computation?

Well, first of all, large language models are misnamed. They do a fantastic job with language, but that's not all they do. We're also using them, for example, to come up with medical cures, and that's not manipulating language; that's manipulating biochemistry. We're using them to train robots so that robots can walk normally and do the kinds of things that humans can do, very simple things like "clean up this table." So these models are coming that are going to learn really everything that humans can do, not just language. GPT-4 makes certain mistakes: if it doesn't know a certain thing, it'll just make things up. We actually know the solution to that: That's going to require more computation.

I also think AI is actually a very valuable thing for humanity to have in terms of energy. We could meet all of our energy needs today if we converted one part out of 10,000 of the sunlight that falls on the earth, and our ability to actually turn that into energy is growing exponentially. If you follow that curve, we'll meet all of our energy needs from the sun and wind within 10 years.
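For context on the "one part out of 10,000" figure, a rough calculation with standard physical values that are not given in the interview (a solar constant of about 1,361 W/m² and an Earth radius of about 6,371 km) puts it in the right ballpark:

```python
import math

solar_constant = 1361.0    # W/m^2 at the top of the atmosphere (standard value, not from the interview)
earth_radius = 6.371e6     # meters

# Sunlight intercepted by Earth's cross-sectional disk.
intercepted_watts = solar_constant * math.pi * earth_radius**2

print(f"Total intercepted sunlight: {intercepted_watts / 1e12:,.0f} TW")       # ~174,000 TW
print(f"One part in 10,000:         {intercepted_watts / 1e4 / 1e12:.1f} TW")  # ~17 TW
```

World average primary energy use is on the order of 18 to 20 terawatts, so one part in 10,000 of intercepted sunlight is indeed roughly the scale of total demand, before accounting for conversion losses.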

But there are real physical constraints. We're not putting up new electricity transmission lines or putting electricity storage on the grid at a pace that would let us get all our energy from the sun and wind in 10 years.

I put the graphs in the book: The ability to have the sun in particular added to our energy sources is enormous compared to what it was five years ago or 10 years ago. We've got plenty of headroom there. And you can look at applying AI to lots of related areas, manufacturing buildings for example. I don't think the energy needs of these things are going to be a barrier. Also, there are ways of bringing down the energy needs.

Do you fundamentally see technological advancement as inevitable?

Absolutely. And we get much more benefit than we get harm.

I often think we live in a generally pessimistic period. Do you feel out of sync with the times?

Yeah, a lot of people are just pessimistic in general. And quite a substantial number of AI scientists think what's happening is disastrous and it's going to destroy humanity. They imagine somebody using AI for something that's negative, and they say, "How are we going to deal with that?" But the tools we have to deal with it are also growing.

I know there's a lot of AI experts who are very much against what's going on. I'm just waiting until they get a disease which has no cure and then they're saved by some cure that comes from AI. We'll see how they feel about that.

Brian Bergstein is the editor of the Globe Ideas section. He can be reached at brian.bergstein@globe.com.
