‘What do you think of AI?’ People keep asking this question. Here’s five things the experts told me – ABC News

For the last few months, there's one question that I've been asked countless times.

It comes up without fail during idle moments: coffee breaks at work or standing around, out at the dog park.

What do you think about AI?

Usually, the tone is quietly sceptical.

For me, the way it's asked conveys a weary distrust of tech hype, but also a hint of concern. People are asking: Should I be paying attention to this?

Sure, at the start of 2023, many of us were amazed by new generative artificial intelligence (AI) tools like ChatGPT.

But, as the months have passed, these tools have lost their novelty.

The tech industry makes big claims about how AI is going to change everything.

But this is an industry that has made big claims before and been proved wrong. It's happened with virtual reality, cryptocurrency, NFTs and the metaverse. And that's just in the past three years.

So, what do I think of AI?

For the past few months I've been working on a podcast series about AI for the ABC, looking broadly at this topic.

It's been a bit like trekking through a blizzard of press releases and product announcements.

Everything solid dissolves into a white-out of jargon and dollar signs.

There's so much excitement, and so much money invested, that it can be hard to get answers to the big underlying questions.

And, of course, we're talking about the future! That's one topic on which no-one ever agrees, anyway.

But here's what I've learned from speaking to some of the top AI experts.

Forget Terminator. Forget 2001: A Space Odyssey.

Hollywood's long-ago visions of the future are getting in the way of understanding the AI we have today.

If you picture a skeletal robot with red eyes every time someone says "AI", you'll have totally the wrong idea about what AI can do, what it can't, and what risks we should reasonably worry about.

Most of the AI tools we use, from ChatGPT to Google Translate, are built with machine learning (ML).

If AI is the broad concept of machines being able to carry out tasks in ways we would consider "smart", ML is one way of achieving this.

The general idea is that, instead of telling a machine how to do a task, you give it lots of examples of wrong and right ways of doing the task, and let it learn for itself.

So for driverless cars, you give an ML system lots of video and other data of cars being driven correctly, and it learns to do the same.

For translation, you give an ML tool the same sentences in different languages, and it figures out its own method of translating between them.
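To make that idea concrete, here's a toy sketch in Python. It's entirely illustrative (the sentences and function name are made up, and no real translation or driving system works this simply): the program is never given rules about what makes a sentence French or English, it only sees a handful of labelled examples and counts which words go with which label.

```python
# Toy illustration of "learning from examples" rather than being told the rules.
# The program is never told what makes a sentence French or English; it only
# sees labelled examples and counts which words appear under which label.
from collections import Counter

# Labelled examples: (sentence, correct label). A real system would use millions.
examples = [
    ("where is the train station", "english"),
    ("how do I pay for a ticket", "english"),
    ("the coffee is very good", "english"),
    ("ou est la gare", "french"),
    ("je voudrais un cafe", "french"),
    ("combien coute un billet", "french"),
]

# "Training": tally how often each word appears under each label.
word_counts = {"english": Counter(), "french": Counter()}
for sentence, label in examples:
    word_counts[label].update(sentence.lower().split())

def guess_language(sentence):
    """Score a new sentence against each label and pick the better match."""
    words = sentence.lower().split()
    scores = {label: sum(counts[w] for w in words)
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(guess_language("ou est le cafe"))       # -> french
print(guess_language("where is the coffee"))  # -> english
```

A real ML system learns from vastly more data with far more sophisticated maths, but the principle is the same: the behaviour comes from the examples, not from hand-written rules.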

Why does this distinction between telling and learning matter?

Because an ML tool that can navigate a roundabout or help you order coffee in French isn't plotting to take over the world.

The fact it can do these narrow tasks is very impressive, but that's all it's doing.

It doesn't even "know" the world exists, says Rodney Brooks, a world-leading Australian roboticist.

"We confuse what it does with real knowledge," he says.

Rodney Brooks has one of the most impressive resumes in AI. Born, raised and educated in Adelaide, during the 1990s he ran the largest computer science department in the world, at MIT. He's even credited with inventing the robotic vacuum cleaner.

"Because I've built more robots than any other human in the world,I can't quite be ignored,"he told me when I called him at his home in San Francisco, one evening.

Professor Brooks, who's a professor emeritus at MIT, says the abilities of today's AI, though amazing, are wildly overestimated.

He makes a distinction between "performance" and "competence".

Performance is what the AI actually does: translating a sentence, for example. Competence is its underlying knowledge of the world.

With humans, someone who performs well is also generally competent.

Say you walk up to a stranger and ask them for directions. If they answer with confidence, we figure we can also ask them other things about the city: where's the train station? How do you pay for a ticket?

But that doesn't apply to AI. An AI that can give directions doesn't necessarily know anything else.

"We see ChatGPT do things ... and people say 'It's really amazing'. And then they generalise and imagine it can do all kinds of things there's no evidence it can do," Professor Brooks says.

"And then we see the hype cycle we've been in over the last year."

Another way of putting this is we have a tendency to anthropomorphise AI, to see ourselves in the tools we've trained to mimic us.

As a result, we make the wrong assumptions about the scale and type of intelligence beneath the performance.

"I think it's difficult for people, even within AI, to figure out what is deep and what is a technique," Professor Brooks says.

Now, many people in AI say it's not so clear-cut.

Rodney Brooks and others may be completely wrong.

Maybe future, more advanced versions of ChatGPT will have an underlying model of the world. Performance will equate to competence. AI will develop a general intelligence, similar to humans.

Maybe. But that's a big unknown.

For the moment, AI systems are generally very narrow in what they can do.

From the buzz out of Silicon Valley, you could be forgiven for thinking the course of the future is pretty much decided.

Sam Altman, the boss of OpenAI, the company that built ChatGPT, has been telling everyone that AI smarter than any human is right around the corner. He calls this dream Artificial General Intelligence, or AGI.

Perhaps as a result of this, minor advances are often communicated to the public as though they're proof that AI is becoming super-intelligent. The future is coming, get out of the way.

ChatGPT can pass a law exam? This changes everything.

Google has a new chatbot? This changes everything.

Beyond this hype, there are lots of varying, equally valid, expert perspectives on what today's AI is on track to achieve.

The machine learning optimists, people like Sam Altman, are just one particularly vocal group.

They say that not only will we achieve AGI, but it will be used for good, ushering in a new age of plenty.

"We are working to build tools that one day can help us make new discoveries and address some of humanity's biggest challenges, like climate change and curing cancer," Mr Altman told US lawmakers in May.

Then, there are the doomers. They broadly say that, yes, AI will be really smart, but it won't be addressing climate change and curing cancer.

Some believe that AI will become sentient and aggressively pursue its own goals.

Other doomers fear powerful AI tools will fall into the wrong hands and be misused to generate misinformation, hack elections, and generally spread murder and mayhem.

Then there's the AI sceptics. People like Rodney Brooks.

The real danger, they say, isn't that AI will be too smart, but that it will be too dumb, and we won't recognise its limits.

They point to examples of this happening already.

Driverless cars are crashing into pedestrians in San Francisco. Journalists are being replaced by faulty bots. Facial recognition is leading to innocent people being locked up.

"Today's AI is a very powerful trick," Professor Brooks says.

"It's not approaching, or it's not necessarily even on the way, to a human-level intelligence."

And there's a fourth group (these groups overlap in complicated ways), who say that all of the above misses the point.

We should worry less about what AI will become, and talk more about what we want it to be.

Rumman Chowdhury, an expert in the field of responsible AI, says talking about the future as something that will happen to us, rather than something we shape, is a cop-out by tech companies.

AI isn't a sentient being, but just another tech product.

"In anthropomorphising and acting like artificial intelligence is an actor that makes independent decisions, people in tech absolve themselves of the sins of the technology they built," she says.

"In their story, they're a good guytrying to make this thing to help people.

"They've made us believe this AI is alive and making independent decisions and therefore they're not at fault."

Most of the popular discussion about AI and the future focuses on what happens when AI gets too powerful.

This is sometimes called the "alignment problem". It's the idea that, in the end, sentient AI will not do what we want.

Within the AI community, the term "p(doom)" is used to describe the probability of this happening. It's a percentage chance that AI is going to wipe out humanity. "My p(doom) is 20 per cent," and so on.

But the most chilling vision of the future I heard wasn't one where robots stage an uprising.

Instead, it was much more mundane and plausible. A boring dystopia.

It's a future where AI pervades every aspect of our lives, from driving a car to writing an email, and a handful of companies that control this technology get very rich and powerful.

Maybe in this future AI is super-intelligent, or maybe not. But it's at least good enough to displace workers in many industries.

New jobs are created, but they're not as good, because most people aren't as economically useful as they were. The skills these jobs require, skills that were once exclusively human, can be done by AI.

High-paying, creative jobs become low-paying ones, usually interacting with AI.

This is the fear that partly motivated US actors and screenwriters to go on strike this year. It's why some authors are suing AI companies.

It's a vision of the future where big tech's disruptions of certain industries over the past 20 years (Google and Facebook sucking advertising revenue out of media and publishing, for instance) are just the preamble to a much larger, global transfer of wealth.

"The thing I worry about is there are fewer and fewer people holding more and more wealth and power and control," Dr Chowdhury says.

"As these models becomemore expensive to build and make, fewer and fewer people actually hold the keys to what's going to be driving essentially the economy of the entire world."

Michael Wooldridge, a computer scientist at Oxford University and one of the world's leading AI researchers, is also worried about this kind of future.

The future he envisions is less like The Terminator, and more like The Office.

Not only are most people paid less for the same work, but they're micromanaged by AI productivity software.

In this"deeply depressing" scenario,humans are the automata.

"A nagging concern I have is that we end up with AI as our boss," Professor Wooldridge says.

"Imagine in a very near future we've got AI monitoring every single keystroke that you type. It's looking at every email that you send. It's monitoring you continually throughout your working day.

"I think that future, unless something happens, feels like it's almost inevitable."

Sixty years ago, in the glory days of early AI research, some leading experts were convinced that truly intelligent, thinking machines were a decade or two away.

About 20 years later, in the early 1980s, the same thing happened: a few breakthroughs led to a flurry of excitement. This changes everything.

But as we know now, it didn't change everything. The future that was imagined never happened.

The third AI boom started in the 2010s and has accelerated through to 2023.

It's either still going, or tapering off slightly. In recent months, generative AI stocks have fallen in the US.

ChatGPT set the record for the fastest-growing user base ever in early 2023. But it hasn't maintained this momentum. Visits to the site fell from June through to August this year.

To explain what's going on, some analysts have referenced Amara's Law, which states that we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

They've also pointed to something called the Gartner Hype Cycle, which is a graphical representation of the excitement and disappointment often associated with new technologies.
