AI-created images, video, audio and text are already being used to spread disinformation heading into the 2024 election. Here's what to know and how to spot it.
PORTLAND, Ore. – Most Americans are aware by now that artificial intelligence is here. They may have seen examples of, or even used, a large language model like the one behind ChatGPT or an image generator like Midjourney. But they may not be aware of how rapidly the technology is improving and the potential it has to disrupt something like the 2024 election campaign season.
The concept of a "deepfake" has been around for some time, but AI tech is rapidly making that material quicker, easier and cheaper to produce.
It's perhaps best demonstrated with audio. We used an AI audio generator and fed it a few minutes of The Story's Pat Dooris speaking. We then typed in a couple of sentences of dialogue, and the tool replicated Dooris' voice speaking that dialogue with astounding accuracy. The whole process took about two minutes and cost less than $5.
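For the technically inclined, the workflow is simple enough to sketch in a few lines of code. The sketch below is purely illustrative: the service URL, endpoints, field names and file names are all hypothetical stand-ins, since commercial voice-cloning APIs differ in their details, but the general shape (upload a voice sample, then request synthesized speech) matches what we did.

```python
# A rough sketch of a typical voice-cloning workflow. The API here is
# hypothetical; real services follow this general pattern but differ
# in endpoints, parameters and pricing.
import requests

API_KEY = "your-api-key"                    # hypothetical credential
BASE = "https://api.example-voice.com/v1"   # hypothetical service URL

# Step 1: upload a few minutes of the target speaker's audio.
with open("speaker_sample.mp3", "rb") as f:
    resp = requests.post(
        f"{BASE}/voices",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"sample": f},
    )
voice_id = resp.json()["voice_id"]          # hypothetical response field

# Step 2: synthesize any dialogue you type, in the cloned voice.
resp = requests.post(
    f"{BASE}/speech",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"voice_id": voice_id, "text": "Any sentence you type here."},
)
with open("cloned_output.mp3", "wb") as f:
    f.write(resp.content)
```

The point isn't the specific code; it's that the barrier to entry is a short script, an audio clip and a few dollars.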
Much of the AI content already making the rounds will have certain "tells" upon closer observation. But many people encounter media like this while scrolling on their phones; they might only see or hear it briefly, and often on a very small screen.
So what can you do to inoculate yourself against AI disinformation? The best rule of thumb is to be skeptical about the content that you see or hear.
Al Tompkins (that's Al as in Albert, not AI as in artificial intelligence) is a well-known journalism consultant. He's taught professional reporters the world over, and is currently teaching at Syracuse University in New York. He recently led news personnel from throughout KGW's parent company, TEGNA, in a training on disinformation and AI.
"This technology has moved so fast, I mean faster than any technology in my lifetime," Tompkins told Dooris in a recent interview. "We've gone from, you know, basically Alexander Graham Bell to cell phones in maybe a year or so. So the technology is moving very, very quickly. And here's what's interesting that this technology is moving so fast and causing so much disruption already, but the tools that we need to detect it have not moved with the same speed. And I compare this, sometimes, if you remember when we really started using email all the time and it really took a number of years of getting so much spam email for us to start getting email spam detectors."
In the not-so-distant future, Tompkins suggested, there may be software systems better equipped to alert us to AI fakes. But right now there are few, if any, that can perform with much accuracy.
With AI detection tech lagging behind, it's best that people learn how to look for deepfakes on their own. In fact, it's something people should probably have learned yesterday, because some of these AI tools have been around for a while.
"Photoshop tools and so on have used a version of AI, a kind of AI, for quite a number of years," Tompkins explained. "AI in its most basic form only does one thing ... But the newer versions of AI, now the ones that are most disruptive, are what we call multimodal AI, so they can change audio and video and text simultaneously. It's not just one thing, it's a bunch of things that you can change all at one time."
Tompkins said he's been tracking the development of AI for years. Most people will have some experience with it, even if they weren't aware of it.
"If, for example, you have a grammar check that comes on and looks at your text and replaces words for you or suggest words, that's a form of artificial intelligence. It's just that it would only do one thing at a time," Tompkins continued. "And this is also a good time for us to say, Pat, that AI isn't the work of the devil. I mean, I think we're going to see that AI actually does some wonderful things and that it's going to make our life much more productive in important ways. Virtually every industry is going to find some useful way of using artificial intelligence if they're not already, because it will take care of things that we need to be taken care of."
There are now some sophisticated programs that can either convincingly alter an existing photo (in Adobe's new Photoshop Beta, for example) or create images wholesale. And fake news websites are already using the latter in particular, passing the images off as real.
Fake audio is already making the rounds as well. Just before the Iowa caucuses this year, an ad running on TV took aim at Donald Trump for attacking the popular conservative governor of Iowa, Kim Reynolds. It featured what sounded like audio of Trump himself. But it was not, in fact, Trump speaking. The ad, which was put out by supporters of Ron DeSantis, employed an AI-generated voice, although it was "reading" the words that Trump used in a real post on his Truth Social platform.
The more insidious examples probably aren't going to be running on TV. They might instead pop up on social media and quickly make the rounds before anyone's had a chance to fact-check them, hurting a candidate's reputation by putting words in their mouth, for example, or giving voters bad information from an otherwise trusted source.
Spotting AI isn't an exact science, but there are some things to look out for. Because the technology is advancing rapidly, the obvious flaws present in early iterations are becoming less common. For the time being, Tompkins said, AI continues to struggle with things like hands and ears in images of people.
"It turns out that, you know, when they take mug shots, one reason they do a side shot is and in passports too we're looking at ears," Tompkins said. "It turns out that our ears are very unique, and AI has a really big problem making ears ... Sometimes one's way longer than the other .... (in) some they're just misshapen fingers.
"It turns out these sculptors of old knew a lot. Michelangelo knew a lot. And it turns out that they sculpted hands and fingers all the time because they're so difficult to do partly because there's not a consistent algorithm between the size of your fingers, and they grow at different rates as you get older and younger, and so on ... AI often makes big mistakes with fingers. Commonly we'll chop off fingers or we'll add fingers and so on. So, hands are sometimes, they're way too big for the person too, so I call them gorilla hands."
AI image models also struggle with text in images, so a closer look at text in the background of an AI-created image might reveal that it's total nonsense. In general, looking carefully at an AI-generated or edited image may reveal a host of things that just don't quite make sense, which should all be red flags.
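For readers who want to check that systematically rather than by eye, here's a toy illustration of the idea using optical character recognition. The image path and the tiny word list are stand-ins, and this is a rough sketch rather than a detector: clean text proves nothing, and garbled text is only one red flag among many.

```python
# Toy illustration: OCR the text out of an image and check how much of it
# looks like real English words. Nonsense text is one possible sign of an
# AI-generated image; readable text proves nothing either way.
# Requires: pip install pillow pytesseract (plus the Tesseract OCR engine).
from PIL import Image
import pytesseract

# A tiny stand-in word list; a real check would load a full dictionary.
COMMON_WORDS = {"the", "and", "for", "you", "with", "open", "stop", "sale", "street"}

def real_word_ratio(image_path: str) -> float:
    text = pytesseract.image_to_string(Image.open(image_path))
    words = [w.strip(".,!?:;\"'").lower() for w in text.split()]
    words = [w for w in words if w.isalpha()]
    if not words:
        return 1.0  # no text found; nothing to judge
    hits = sum(w in COMMON_WORDS for w in words)
    return hits / len(words)

# A very low ratio on an image full of visible "text" is one red flag.
print(f"recognizable words: {real_word_ratio('suspect_image.png'):.0%}")
```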
Audio recordings created by AI might lack the natural pauses that someone makes to take a breath, or do strange things with cadence, pronunciation or emphasis. They might sound unnaturally flat, lacking in emotion or nuance, or they may be a little too clean for audio created outside of a recording studio.
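That "missing breaths" observation can even be approximated in code. The following is a toy heuristic, not a reliable test: assuming a 16-bit mono WAV file, it measures what fraction of the clip is near-silence, which natural speech produces regularly as the speaker pauses and breathes.

```python
# Rough illustration of the "no breathing pauses" heuristic described above.
# Assumes a 16-bit PCM mono WAV file; this is a toy check, not a detector.
import wave
import numpy as np

def pause_ratio(path: str, window_s: float = 0.05, silence_db: float = -40.0) -> float:
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        data = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    data = data.astype(np.float64) / 32768.0      # normalize samples to [-1, 1]
    win = max(1, int(rate * window_s))            # samples per analysis window
    n = len(data) // win
    frames = data[: n * win].reshape(n, win)
    rms = np.sqrt((frames ** 2).mean(axis=1)) + 1e-12
    db = 20 * np.log10(rms)                       # loudness of each window in dB
    return float((db < silence_db).mean())        # fraction of near-silent windows

# Natural speech usually shows a noticeable fraction of quiet windows
# (breaths, pauses); a value near zero is one small red flag, nothing more.
print(f"pause ratio: {pause_ratio('clip.wav'):.2%}")
```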
But again, flaws like these come and go; AI models are getting better every day. The best thing you can do is stop and think about the context. Does this content, whatever form it takes, seem too good to be true? Could it harm reputations, stoke anger or spread fear?
Tompkins explained that disinformation tends to work because of something called confirmation bias. We tend to believe things that agree with our pre-existing views, or that seem to fit with the actual facts of a situation. The more believable a piece of disinformation appears to us, the more likely we are to accept it as fact without taking a closer look or pumping the brakes.
Oregon's senior U.S. senator, Ron Wyden, is a member of the Senate Select Committee on Intelligence. He's also worried about how AI could be used to produce deepfakes that impact real-world politics.
"My view is, particularly deepfakes as you see in the political process, are gonna undermine elections, they're gonna undermine democracy. And we're gonna have to take very strong action," Wyden said. "Because already we've got this gigantic misinformation machine out there. And AI just makes it a lot more powerful and easier to use."
Last week, Oregon Gov. Tina Kotek signed two new bills into law on this subject. The first creates a task force to look at the issue, and it was sponsored by state Rep. Daniel Nguyen of Lake Oswego. The second, sponsored by state Sen. Aaron Woods of Wilsonville, requires political campaigns to include a disclaimer in their ads if any part of the commercial has been altered using AI.
Wyden noted that several other states have AI bills, but he thinks it will take national action to really protect voters from deepfakes.
Right now, the environment of disinformation supercharged by AI can seem pretty daunting. But Tompkins warns against simply becoming jaded about the issue.
"This election year, it's going to be very tempting for you to say, 'Everybody's a liar, every politician is a liar, everybody's a liar, they're all lying to me. I don't believe anybody. I'm just going to live in my shell and forget about it or I'm never going to change my mind because I have no idea who else is right. So I'm just going to trust my gut and I'm going to quit exploring,'" Tompkins said. "That's not the way to live. Because that is cynicism you don't believe anything.
"Instead, I'd rather us all be skeptics, and that is open to truth. Stay open to truth. Stay open to proof, because that's what the smart people do. Smart people are constantly learning. They're constantly open to evidence. Don't shut down from the evidence when something isn't true but it's widely circulating. I think it's part of our civic duty to call it out, to say, 'You know what, I'm seeing this circulating. It's just not true. And here's how I know it's not.' And that's not being unkind. It's not being rude. You don't have to be mean to anybody. Just say, 'Listen, this is circulating. And here's how I know it's not true.' Or, 'Here are some questions I would ask about that.'"