Robot playing chess. Credit: Vchalup via Adobe
Experts from around the world are sounding alarm bells about the risks artificial intelligence poses to humanity. Earlier this year, hundreds of tech leaders and AI specialists signed a one-sentence letter released by the Center for AI Safety that read: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." In a 2022 survey, half of researchers indicated they believed there's at least a 10 percent chance human-level AI causes human extinction. In June, at the Yale CEO Summit, 42 percent of surveyed CEOs indicated they believe AI could destroy humanity in the next five to 10 years.
These concerns mainly pertain to artificial general intelligence (AGI), systems that can rival human cognitive skills, and artificial superintelligence (ASI), machines with the capacity to exceed human intelligence. No such systems currently exist. However, policymakers should take these warnings, including the potential for existential harm, seriously.
Because the timeline and form of artificial superintelligence are uncertain, the focus should be on identifying and understanding potential threats and building the systems and infrastructure necessary to monitor, analyze, and govern those risks, both individually and as part of a holistic approach to AI safety and security. Even if artificial superintelligence does not manifest for decades or even centuries, or at all, the magnitude and breadth of potential harm warrant serious policy attention. For if such a system does indeed come to fruition, a head start of hundreds of years might not be enough.
Prioritizing artificial superintelligence risks, however, does not mean ignoring immediate risks like bias in AI, the propagation of mass disinformation, and job loss. An artificial superintelligence unaligned with human values and goals would supercharge those risks, too. One can easily imagine how Islamophobia, antisemitism, and run-of-the-mill racism and bias, often baked into AI training data, could affect a system's calculations on important military or diplomatic advice or action. If not properly controlled, an unaligned artificial superintelligence could directly or indirectly cause genocide, massive job loss by rendering human activity worthless, the creation of novel biological weapons, and even human extinction.
The threat. Traditional existential threats like nuclear or biological warfare can directly harm humanity, but artificial superintelligence could create catastrophic harm in myriad ways. Take, for instance, an artificial superintelligence designed to protect the environment and preserve biodiversity. The goal is arguably a noble one: A 2018 World Wildlife Fund report concluded that global animal populations had declined by an average of 60 percent since 1970, while a 2019 UN-backed report found that a million animal and plant species could go extinct within decades. An artificial superintelligence could plausibly conclude that a drastic reduction in the number of humans on Earth, perhaps even to zero, is, logically, the best response. Without proper controls, such a superintelligence might have the ability to cause those "logical" reductions.
A superintelligence with access to the Internet and all published human material would potentially tap into almost every human thought, including the worst of thought. Exposed to the works of the Unabomber, Ted Kaczynski, it might conclude the industrial system is a form of modern slavery, robbing individuals of important freedoms. It could conceivably be influenced by Sayyid Qutb, who provided the philosophical basis for al-Qaeda, or perhaps by Adolf Hitler's Mein Kampf, now in the public domain.
The good news is an artificial intelligence, even a superintelligence, could not manipulate the world on its own. But it might create harm through its ability to influence the world in indirect ways. It might persuade humans to work on its behalf, perhaps using blackmail. Or it could provide bad recommendations, relying on humans to implement advice without recognizing long-term harms. Alternatively, an artificial superintelligence could be connected to physical systems it can control, like laboratory equipment. Access to the Internet and the ability to create hostile code could allow a superintelligence to carry out cyberattacks against physical systems. Or perhaps a terrorist or other nefarious actor might purposely design a hostile superintelligence and carry out its instructions.
That said, a superintelligence might not be hostile immediately. In fact, it may save humanity before destroying it. Humans face many other existential threats, such as near-Earth objects, supervolcanoes, and nuclear war. Insights from AI might be critical to solving some of those challenges, or to identifying novel scenarios that humans aren't aware of. Perhaps an AI might discover novel treatments for challenging diseases. But since no one really knows how a superintelligence will function, it's not clear what capabilities it needs to generate such benefits.
The immediate emergence of a superintelligence should not be assumed. AI researchers differ drastically on the timeline of artificial general intelligence, much less artificial superintelligence. (Some doubt the possibility altogether.) In a 2022 survey of 738 experts who had published on the subject during the previous year, researchers estimated a 50 percent chance of "high-level machine intelligence" by 2059. In an earlier survey, from 2009, the plurality of respondents believed an AI capable of Nobel Prize winner-level intelligence would be achieved by the 2020s, while the next most common response was that Nobel-level intelligence would not come until after 2100, or ever.
As philosopher Nick Bostrom notes, the "takeoff" could occur anywhere from a few days to a few centuries. The jump from human-level to super-human intelligence may require additional fundamental breakthroughs in artificial intelligence. But a human-level AI might recursively develop and improve its own capabilities, quickly jumping to super-human intelligence.
There is also a healthy dose of skepticism regarding whether artificial superintelligence could emerge at all in the near future, as neuroscientists acknowledge knowing very little about the human brain itself, let alone how to recreate or better it. However, even a small chance of such a system emerging is enough to take it seriously.
Policy response. The central challenge for policymakers in reducing artificial superintelligence-related risk is grappling with the fundamental uncertainty about when and how these systems may emerge, balanced against the broad economic, social, and technological benefits that AI can bring. This uncertainty means that safety and security standards must adapt and evolve: The approaches to securing the large language models of today may be largely irrelevant to securing some future superintelligence-capable model. However, building the policy, governance, normative, and other systems necessary to assess AI risk, and to manage and reduce the risks when superintelligence emerges, can be useful regardless of when and how it emerges. Specifically, global policymakers should attempt to:
Characterize the threat. Because it lacks a body, an artificial superintelligence's harms to humanity are likely to manifest indirectly, through known existential risk scenarios or by discovering novel ones. How such a system interacts with those scenarios needs to be better characterized, along with tailored risk mitigation measures. For example, a novel biological organism identified by an artificial superintelligence should undergo extensive analysis by diverse, independent actors to identify potential adverse effects. Likewise, researchers, analysts, and policymakers need to identify and protect, to the extent that's possible, critical physical facilities and assets, such as biological laboratory equipment, nuclear command and control infrastructure, and planetary defense systems, through which an uncontrolled AI could create the most harm.
Monitor. The United States and other countries should conduct regular, comprehensive surveys and assessments of progress, identify specific known barriers to superintelligence and advances toward resolving them, and assess how particular AI-related developments may affect artificial superintelligence-related development and risk. Policymakers could also establish a mandatory reporting system for when an entity hits various AI-related benchmarks, up to and including artificial superintelligence.
A monitoring system with pre-established benchmarks would allow governments to develop and implement action plans for when those benchmarks are hit. Benchmarks could capture either general progress or progress on specifically dangerous capabilities, such as the capacity to enable a non-expert to design, develop, and deploy novel biological or chemical weapons, or to develop and use novel offensive cyber capabilities. For example, the United States might establish safety laboratories responsible for critically evaluating a claimed artificial general intelligence against various risk benchmarks, producing an independent report to Congress, federal agencies, or other oversight bodies. The United Kingdom's new AI Safety Institute could be a useful model.
Debate. A growing community concerned about artificial superintelligence risks is increasingly calling for decelerating, or even pausing, AI development to better manage the risks. In response, the accelerationist community advocates speeding up research, highlighting the economic, social, and technological benefits AI may unleash while downplaying the risks as extreme hypotheticals. This debate needs to expand beyond techies on social media to global legislatures, governments, and societies. Ideally, that discussion would center on what factors make a specific AI system more, or less, risky. If an AI poses minimal risk, then accelerating research, development, and implementation makes sense. But if numerous factors point to serious safety and security risks, then extreme care, even deceleration, may be justified.
Build global collaboration. Although ad hoc summits like the recent AI Safety Summit are a great start, a standing intergovernmental and international forum would enable longer-term progress as research, funding, and collaboration build over time. Convening and maintaining regular expert forums to develop and assess safety and security standards, and to track how AI risks evolve over time, could provide a foundation for collaboration. The forum could, for example, aim to develop standards akin to those applied to biosafety laboratories, with physical security, cybersecurity, and safety standards that scale with objective risk measures. In addition, the forum could share best practices and lessons learned on national-level regulatory mechanisms, monitor and assess safety and security implementation, and create and manage a funding pool to support these efforts. Over the long term, once the global community coalesces around common safety and security standards and regulatory mechanisms, the United Nations Security Council (UNSC) could obligate UN member states to develop and enforce those mechanisms, as the Security Council did with UNSC Resolution 1540, which mandated various chemical, biological, radiological, and nuclear weapons nonproliferation measures. Finally, the global community should incorporate artificial superintelligence risk reduction as one aspect of a comprehensive, all-hazards approach that addresses common challenges alongside other catastrophic and existential risks. For example, the global community might create a council on human survival aimed at policy coordination, comparative risk assessment, and building funding pools for targeted risk reduction measures.
Establish research, development, and regulation norms within the global community. Just as nuclear, chemical, biological, and other weapons have proliferated, the potential for artificial superintelligence to proliferate to other countries should be taken seriously. Even if one country successfully contains such a system and harnesses its opportunities for social good, others may not. Given the potential risks, violating AI-related norms and developing unaligned superintelligence should justify violence and war. The United States and the global community have historically been willing to support extreme measures to enforce behavior and norms concerning less risky developments. In August 2013, then-President Obama (in)famously drew a red line on Syria's use of chemical weapons, noting that the Assad regime's use of them would lead him to use military force in Syria. Although Obama later demurred, favoring a diplomatic solution, President Trump carried out airstrikes in 2018 in response to additional chemical weapons use. Likewise, in Operation Orchard in 2007, the Israeli Air Force attacked the Syrian Deir ez-Zor site, a suspected nuclear facility aimed at building a nuclear weapons program.
Advanced artificial intelligence poses significant risks to the long-term health and survival of humanity. However, it's unclear when, how, or where those risks will manifest. The Trinity Test of the world's first nuclear bomb took place almost 80 years ago, and humanity has yet to contain the existential risk of nuclear weapons. It would be wise to think of the current progress in AI as our Trinity Test moment. Even if superintelligence takes a century to emerge, 100 years to consider the risks and prepare might still not be enough.
Thanks to Mark Gubrud for providing thoughtful comments on the article.