The future of malicious artificial intelligence applications is here

The year is 2016. Under close scrutiny by CCTV cameras, 400 contractors are working around the clock in a Kremlin-linked facility in St. Petersburg. Many are experts in American culture, tasked with writing posts and memes on Western social media to influence the upcoming U.S. presidential election. The multimillion-dollar operation would reach 120 million people through Facebook alone.

Six years later, the impact of this Russian info op is still being felt. The techniques it pioneered continue to be used against democracies around the world, as Russia's troll factory, the Internet Research Agency, continues to fuel online radicalization and extremism. Thanks in no small part to its efforts, our world has become hyper-polarized, increasingly divided into parallel realities by cherry-picked facts, falsehoods, and conspiracy theories.

But if making sense of reality seems like a challenge today, it will be all but impossible tomorrow. For the past two years, a quiet revolution has been brewing in AI, and despite some positive consequences, it's also poised to hand authoritarian regimes unprecedented new ways to spread misinformation across the globe at an almost inconceivable scale.

In 2020, AI researchers at OpenAI created a text-generation system called GPT-3. GPT-3 can produce text that's indistinguishable from human writing, including viral articles, tweets, and other social media posts. It was one of the most significant breakthroughs in the history of AI: it offered a simple recipe that researchers could follow to radically accelerate AI progress and build much more capable, humanlike systems.

But it also opened a Pandora's box of malicious AI applications.

Text-generating AIs, or language models, can now be used to massively augment online influence campaigns. They can craft complex and compelling arguments, and they can be leveraged to create automated bot armies and convincing fake news articles.

This isn't a distant-future concern: it's happening already. As early as 2020, Chinese efforts to interfere with Taiwan's national election involved the instant distribution of AI-generated fake news to social media platforms.

But the 2020 AI breakthrough is now being harnessed for more than just text. New image-generation systems, able to create photorealistic pictures from any text prompt, became a reality this year for the first time. As AI-generated content gets better and cheaper, the posts, pictures, and videos we consume in our social media feeds will increasingly reflect the massively amplified interests of tech-savvy actors.

And malicious applications of AI go far beyond social media manipulation. Language models can already write more convincing phishing emails than humans, and their code-writing capabilities rival those of human competitive programmers. AI that can write code can also write malware, and many AI researchers see language models as harbingers of an era of self-mutating, AI-powered malicious software that could blindside the world. Other recent breakthroughs have significant implications for weaponized drone control and even bioweapon design.

Needed: a coherent plan

Policy and governance usually follow crises rather than anticipate them. And that makes sense: the future is uncertain, and most imagined risks fail to materialize. We can't invest resources in solving every hypothetical problem.

But exceptions have always been made for problems which, if left unaddressed, could have catastrophic effects. Nuclear technology, biotechnology, and climate change are all examples. Risk from advanced AI represents another such challenge. Like biological and nuclear risk, it calls for a co-ordinated, whole-of-government response.

Public safety agencies should establish AI observatories that produce unclassified reporting on publicly available information about AI capabilities and risks, and begin studying how to frame AI through a counterproliferation lens.

Given the pivotal role played by semiconductors and advanced processors in the development of what are effectively new AI weapons, we should be tightening export control measures for hardware or resources that feed into the semiconductor supply chains of countries like China and Russia.

Our defence and security agencies could follow the lead of the U.K.'s Ministry of Defence, whose Defence AI Strategy involves tracking and mitigating extreme and catastrophic risks from advanced AI.

AI has entered an era of remarkable, rapidly accelerating capabilities. Navigating the transition to a world with advanced AI will require that we take seriously possibilities that would have seemed like science fiction until very recently. We've got a lot to rethink, and now is the time to get started.

Jérémie Harris is the co-founder of Gladstone AI, an AI safety company.
