The ‘Godfather of AI’ Has a Hopeful Plan for Keeping Future AI Friendly – WIRED

That sounded to me like he was anthropomorphizing those artificial systems, something scientists constantly tell laypeople and journalists not to do. "Scientists do go out of their way not to do that, because anthropomorphizing most things is silly," Hinton concedes. "But they'll have learned those things from us, they'll learn to behave just like us linguistically. So I think anthropomorphizing them is perfectly reasonable." When your powerful AI agent is trained on the sum total of human digital knowledge, including lots of online conversations, it might be more silly not to expect it to act human.

But what about the objection that a chatbot could never really understand what humans do, because those linguistic robots are just impulses on computer chips without direct experience of the world? All they are doing, after all, is predicting the next word needed to string out a response that will statistically satisfy a prompt. Hinton points out that even we don't really encounter the world directly.

"Some people think, hey, there's this ultimate barrier, which is we have subjective experience and [robots] don't, so we truly understand things and they don't," says Hinton. "That's just bullshit. Because in order to predict the next word, you have to understand what the question was. You can't predict the next word without understanding, right? Of course they're trained to predict the next word, but as a result of predicting the next word they understand the world, because that's the only way to do it."

So those things can be sentient? I don't want to believe that Hinton is going all Blake Lemoine on me. And he's not, I think. "Let me continue in my new career as a philosopher," Hinton says, jokingly, as we skip deeper into the weeds. "Let's leave sentience and consciousness out of it. I don't really perceive the world directly. What I think is in the world isn't what's really there. What happens is it comes into my mind, and I really see what's in my mind directly. That's what Descartes thought. And then there's the issue of how is this stuff in my mind connected to the real world? And how do I actually know the real world?" Hinton goes on to argue that since our own experience is subjective, we can't rule out that machines might have equally valid experiences of their own. Under that view, it's "quite reasonable to say that these things may already have subjective experience," he says.

Now consider the combined possibilities: that machines can truly understand the world, that they can learn deceit and other bad habits from humans, and that giant AI systems can process zillions of times more information than brains can possibly deal with. Maybe you, like Hinton, now have a more fraught view of future AI outcomes.

But we're not necessarily on an inevitable journey toward disaster. Hinton suggests a technological approach that might mitigate an AI power play against humans: analog computing, just as you find in biology, and as some engineers think future computers should operate. It was the last project Hinton worked on at Google. "It works for people," he says. Taking an analog approach to AI would be less dangerous because each instance of analog hardware has some uniqueness, Hinton reasons. As with our own wet little minds, analog systems can't so easily merge in a Skynet kind of hive intelligence.

