The problem with artificial intelligence? It's neither artificial nor intelligent

Opinion

Let's retire this hackneyed term: while ChatGPT is good at pattern-matching, the human mind does so much more

Thu 30 Mar 2023 10.55 EDT

Elon Musk and Apple's co-founder Steve Wozniak have recently signed a letter calling for a six-month moratorium on the development of AI systems. The goal is to give society time to adapt to what the signatories describe as an "AI summer", which they believe will ultimately benefit humanity, as long as the right guardrails are put in place. These guardrails include rigorously audited safety protocols.

It is a laudable goal, but there is an even better way to spend these six months: retiring the hackneyed label of artificial intelligence from public debate. The term belongs to the same scrapheap of history that includes iron curtain, domino theory and Sputnik moment. It survived the end of the cold war because of its allure for science fiction enthusiasts and investors. We can afford to hurt their feelings.

In reality, what we call artificial intelligence today is neither artificial nor intelligent. The early AI systems were heavily dominated by rules and programs, so some talk of artificiality was at least justified. But those of today, including everyone's favourite, ChatGPT, draw their strength from the work of real humans: artists, musicians, programmers and writers whose creative and professional output is now appropriated in the name of saving civilisation. At best, this is non-artificial intelligence.

As for the intelligence part, the cold war imperatives that funded much of the early work in AI left a heavy imprint on how we understand it. We are talking about the kind of intelligence that would come in handy in a battle. For example, modern AI's strength lies in pattern-matching. It's hardly surprising given that one of the first military uses of neural networks (the technology behind ChatGPT) was to spot ships in aerial photographs.

However, many critics have pointed out that intelligence is not just about pattern-matching. Equally important is the ability to draw generalisations. Marcel Duchamp's 1917 work of art Fountain is a prime example of this. Before Duchamp's piece, a urinal was just a urinal. But, with a change of perspective, Duchamp turned it into a work of art. At that moment, he was generalising about art.

When we generalise, emotion overrides the entrenched and seemingly rational classifications of ideas and everyday objects. It suspends the usual, nearly machinic operations of pattern-matching. Not the kind of thing you want to do in the middle of a war.

Human intelligence is not one-dimensional. It rests on what the 20th-century Chilean psychoanalyst Ignacio Matte Blanco called bi-logic: a fusion of the static and timeless logic of formal reasoning and the contextual and highly dynamic logic of emotion. The former searches for differences; the latter is quick to erase them. Marcel Duchamp's mind knew that the urinal belonged in a bathroom; his heart didn't. Bi-logic explains how we regroup mundane things in novel and insightful ways. We all do this, not just Duchamp.

AI will never get there because machines cannot have a sense (rather than mere knowledge) of the past, the present and the future; of history, injury or nostalgia. Without that, there's no emotion, depriving bi-logic of one of its components. Thus, machines remain trapped in the singular formal logic. So there goes the intelligence part.

ChatGPT has its uses. It is a prediction engine that can also moonlight as an encyclopedia. When asked what the bottle rack, the snow shovel and the urinal have in common, it correctly answered that they are all everyday objects that Duchamp turned into art.

But when asked which of today's objects Duchamp would turn into art, it suggested: smartphones, electronic scooters and face masks. There is no hint of any genuine intelligence here. It's a well-run but predictable statistical machine.

The danger of continuing to use the term artificial intelligence is that it risks convincing us that the world runs on a singular logic: that of highly cognitive, cold-blooded rationalism. Many in Silicon Valley already believe that, and they are busy rebuilding the world informed by that belief.

But the reason why tools like ChatGPT can do anything even remotely creative is because their training sets were produced by actually existing humans, with their complex emotions, anxieties and all. If we want such creativity to persist, we should also be funding the production of art, fiction and history, not just data centres and machine learning.

That's not at all where things point now. The ultimate risk of not retiring terms such as artificial intelligence is that they will render the creative work of intelligence invisible, while making the world more predictable and dumb.

So, instead of spending six months auditing the algorithms while we wait for the AI summer, we might as well go and reread Shakespeare's A Midsummer Night's Dream. That will do so much more to increase the intelligence in our world.

Evgeny Morozov is the author of several books on technology and politics. His podcast The Santiago Boys, about the tech vision of former Chilean president Salvador Allende, is out this summer

