The danger of blindly embracing the rise of AI – The Guardian

Readers express their hopes and fears about recent developments in artificial intelligence chatbots

Evgeny Morozov's piece is correct insofar as it states that AI is a long way from the general sentient intelligence of human beings (The problem with artificial intelligence? It's neither artificial nor intelligent, 30 March). But that rather misses the point of the thinking behind the open letter to which I and many others are signatories. ChatGPT is only the second AI chatbot to pass the Turing test, proposed by the mathematician Alan Turing in 1950 as a test of whether a machine can mimic a conversation convincingly enough to be judged human by the other participant. To that extent, current chatbots represent a significant milestone.

The issue, as Evgeny points out, is that a chatbot's abilities rest on a probabilistic prediction model and vast sets of training data fed to the model by humans. To that extent, the output of the model can be guided by its human creators to meet whatever ends they desire, and the danger is that its omnipresence (via search engines) and its human-like abilities have the power to create a convincing sense of reality and trust where none does or should exist. As with other significant technologies that have had an impact on human civilisation, their development and deployment often proceed far faster than our ability to understand all their effects, sometimes leading to undesirable and unintended consequences.

We need to explore these consequences before diving in with our eyes shut. The problem with AI is not that it is neither artificial nor intelligent, but that we may blindly trust it in any case.
Alan Lewis
Director, SigmaTech Analysis

The argument that AI will never achieve true intelligence because it cannot possess a genuine sense of history, injury or nostalgia, and is confined to a singular formal logic, overlooks the ever-evolving capabilities of AI. Integrating a large language model into a robot would be trivial and would simulate human experiences. What would separate us then? I recommend Evgeny Morozov watch Ridley Scott's Blade Runner for a reminder that the line between man and machine may become increasingly indistinct.
Daragh Thomas
Mexico City, Mexico

Artificial intelligence sceptics follow a pattern. First, they argue that something can never be done, because it is impossibly hard and quintessentially human. Then, once it has been done, they argue that it isn't very impressive or useful after all, and not really what being human is about. Then, once it becomes ubiquitous and the usefulness is evident, they argue that something else can never be done. As with chess, so with translation. As with translation, so with chatbots. I await with interest the next impossible development.
Edward Hibbert
Chipping, Lancashire

AI's main failings lie in its differences from humans. AI has no morals, ethics or conscience. Moreover, it has no instinct, much less common sense. The dangers of its misuse are all too easy to see.
Michael Clark
San Francisco, US

Thank you, Evgeny Morozov, for your insightful analysis of why we should stop using the term "artificial intelligence". I say we go with "appropriating informatics" instead.
Annick Driessen
Utrecht, the Netherlands


