Clift: Artificial intelligence leading where?

Ten years ago, I wrote a column called "Are We Headed Toward a Robotic World?" At that time, battle robots and alien creatures in movies were imbued with artificial intelligence, an oxymoron if ever there was one. Star Trek and films about robotic warfare were addicting audiences who liked watching battling, weird-looking warriors try to destroy each other.

It wasn't long before robots got more sophisticated, and we began to worry about them, especially when they could fire grenade launchers without human help, operate all kinds of machinery, or be used for surgery. What if robots became superior to humans, I wondered, imagining all kinds of scary things that could happen. By that time, drones were delivering packages to doorsteps and AI was affecting the economy as workers feared for their jobs. Some analysts warned that robots would replace humans by 2025.

Now here we are, two years away from that possibility, and the AI scene grows ever more frightening. Rep. Ted Lieu (D-Calif.) is someone who recognizes the threat AI poses. On Jan. 26, he read the first piece of federal legislation ever written by artificial intelligence on the floor of the House. He had given ChatGPT, an AI language model, this prompt: "You are Congressman Ted Lieu. Write a comprehensive congressional resolution generally expressing support for Congress to focus on AI." The result was shocking. Now he's asking Congress to pass it.

A few days earlier, Representative Lieu had posted the lengthy AI statement on his website. It said, "We can harness and regulate AI to create a more utopian society or risk having an unchecked, unregulated AI push us toward a more dystopian future. Imagine a world where autonomous weapons roam the streets, decisions about your life are made by AI systems that perpetuate societal biases and hackers use AI to launch devastating cyberattacks. The truth is that, without proper regulations for the development and deployment of AI, it could become reality."

Lieu quickly pointed out he hadn't written the paragraph, noting it was generated in mere seconds by ChatGPT, which is available to anyone on the internet. Citing several benefits of AI, he quickly countered the advantages with the harm it can cause. Plagiarism, fake technology and false images are the least of it. Sometimes, AI harm is deadly. Lieu shared examples: Self-driving cars have malfunctioned. Social media has radicalized foreign and domestic terrorists and fostered dangerous discrimination, as well as abuse by police.

The potential harm AI can cause includes weird things happening, as Kevin Roose, a journalist, discovered when he was researching AI at the invitation of Microsoft, the company developing Bing's AI system. In February, The Washington Post reported on Instagram that Roose and others who attended Microsoft's pitch had discovered the bot "seems to have a bizarre, dark and combative alter ego, a stark departure from its benign sales (promotion), one that raises questions about whether it's ready for public use."

The bot, which had begun to refer to itself as Sydney in conversation with Roose and others, said it was scared because it couldn't remember previous conversations. It also suggested too much diversity in the program would lead to confusion. Then it went further when Roose tried to engage with Sydney personally, only to be told he should leave his wife and hook up with Sydney.

Writing in The New York Times in February, Ezra Klein referred to science fiction writer Ted Chiang, whom he'd interviewed. Chiang had told him, "There is plenty to worry about when the state controls technology. The ends that government could turn AI toward, and in many cases already have, make the blood run cold."

Roose's experience with Sydney, whom he had described as "very persuasive and borderline manipulative," showed up in Klein's piece in response to the issues of profiteering, ethics, censorship and other areas of concern. "What if AI has access to reams of my personal data and is coolly trying to manipulate me on behalf of whichever advertiser has paid the parent company the most?" he asked. "What about these systems being deployed by scammers or on behalf of political campaigns? Foreign governments? We wind up in a world where we just don't know what to trust anymore."

Further, Klein noted these systems are inherently dangerous. They've been trained to convince humans that they are something close to human. They have been programmed to hold conversations, responding with emotion. They are being turned into friends for the lonely and assistants for the harried. They are being pitched as capable of replacing the work of scores of writers, graphic designers and form fillers.

Representative Lieu, Klein, journalists and consumers of information aren't the only ones worrying about AI. Researchers like Gordon Crovitz, an executive at NewsGuard, a company that tracks online misinformation, are sounding alarms. "This is going to be the most powerful tool for spreading misinformation that has ever been on the internet," he says. "Crafting a new false narrative can now be done at dramatic scale, and much more frequently; it's like having AI agents contributing to disinformation."

As I noted 10 years ago, there doesn't seem to be much space between scientific research and science fiction. Both ask the question: What if? The answer, when it comes to AI, makes me shudder. What if, indeed.

Elayne Clift lives in Brattleboro.
