Is It Sinking In? Chatbots Will *Not* Soon Think Like Humans – Walter Bradley Center for Natural and Artificial Intelligence

Psychologist and tech writer Gary Marcus scoffs at the idea that machines that think like people (artificial general intelligence or AGI) are just around the corner.

The author of *Rebooting AI* (Vintage 2019) says there are no big new developments in the offing:

It was always going to happen; the ludicrously high expectations from the last 18 ChatGPT-drenched months were never going to be met. LLMs are not AGI, and (on their own) never will be; scaling alone was never going to be enough. The only mystery was what would happen when the big players realized that the jig was up, and that scaling was not in fact All You Need.

Even futurist Ray Kurzweil is postponing and revising:

For years, and as recently as April in his TED talk, Ray Kurzweil famously projected that AGI would arrive in 2029. But in an interview just published in WIRED, Kurzweil (who I believe still works at Alphabet, hence knows what is immediately afoot) let his predictions slip back, for the first time, to 2032. (He also seemingly dropped the standard for AGI from general intelligence to writing top-notch poetry.)

Readers may recall that Kurzweil told the 2023 COSM conference that once AI reaches such a general human capability in 2029, it will have already surpassed us in every way. But he isn't worried, because we humans are not going to be left behind. Instead, humans and AI are going to move into the future together.

He'd been saying such things at COSM conferences since 2019, though the COSM panel that evaluated his comments was significantly more skeptical than many tech experts.

This is from his current interview with Wired:

How will we know when AGI is here? That’s a very good question. I mean, I guess in terms of writing, ChatGPT’s poetry is actually not bad, but it’s not up to the best human poets. I’m not sure whether we’ll achieve that by 2029. If it’s not happening by then, it’ll happen by 2032. It may take a few more years, but anything you can define will be achieved because AI keeps getting better and better.

To make chatbots better, the programmers will need to solve a number of problems, including:

The model collapse problem (everything becomes jackrabbits):

Model collapse: AI chatbots are eating their own tails. The problem is fundamental to how they operate. Without new human input, their output starts to decay. Meanwhile, organizations that laid off writers and editors to save money are finding that they can't just program creativity or common sense into machines.

The hallucination problem (the Soviets sent bears into space):

Internet pollution: if you tell a lie long enough, LLMs can generate falsehoods faster than humans can correct them. Later, Copilot and other LLMs will be trained to say no bears have been sent into space, but many thousands of other misstatements will fly under their radar. (Gary Smith)

And the innumeracy problem (I can't count):

Marvin Minsky asks: Can GPT-4 hack itself? Will AI of the future be able to count the number of objects in an image? Creativity and understanding, properly defined, lie beyond the capability of the computers of today and tomorrow. (Robert J. Marks)

These deep problems may be fundamental to what a chatbot is. We shall see.

