Everything dies, baby, that's a fact. And if the world cannot manage the current race to superhuman artificial intelligence between great powers, everything may die much sooner than expected.
The past year has witnessed an explosion in the capabilities of artificial intelligence systems. The bulk of these advances has occurred in generative AI: systems that produce novel text, image, audio, or video content from human input. The American company OpenAI took the world by storm with its public release of the ChatGPT large language model (LLM) in November 2022. In March, it released an updated version of ChatGPT powered by the more capable GPT-4 model. Microsoft and Google have followed suit with Bing AI and Bard, respectively.
Beyond the world of text, generative applications such as Midjourney, DALL-E, and Stable Diffusion produce unprecedentedly realistic images and videos. These models have burst into the public consciousness rapidly. Most people have begun to understand that generative AI is an unparalleled innovation, a type of machine that possesses capacities, such as natural language generation and artistic production, long thought to be sacrosanct domains of human ability.
But generative AI is only the beginning. A team of Microsoft AI scientists recently released a paper arguing that GPT-4, arguably the most sophisticated LLM yet, is showing the sparks of artificial general intelligence (AGI): an AI that is as smart as or smarter than humans in every area of intelligence, rather than in a single task. They argue that "[b]eyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting." In these multiple areas of intelligence, GPT-4 is strikingly close to human-level performance. In short, GPT-4 appears to presage a program that can think and reason like a human. Half of surveyed AI experts expect AGI within the next 40 years.
AGI is the holy grail for tech companies involved in AI development, primarily the field's leaders, OpenAI and Google subsidiary DeepMind, because of the unfathomable profits and world-historical glory that would come with being the first to develop human-level machine intelligence.
The private sector, however, is not the only relevant actor.
Because leadership in AI offers advantages in both economic competitiveness and military prowess, great powers, primarily the United States and China, are racing to develop advanced AI systems. Much ink has been spilled on the risks of the military applications of AI, which have the potential to reshape the strategic and tactical domains alike by powering autonomous weapons systems, cyberweapons, nuclear command and control, and intelligence gathering. Many politicians and defense planners in both countries believe the winner of the AI race will secure global dominance.
But the consequences of such a race reach far beyond who wins global hegemony. The perception of an AI arms race is likely to accelerate the already-risky development of AI systems. The pressure to outpace adversaries by rapidly pushing the frontiers of a technology that we still do not fully understand or control, without commensurate efforts to make AI safe for humans, may well pose an existential threat to humanity.
The dangers of arms races are well established by history. Throughout the late 1950s, American policymakers began to fear that the Soviet Union was outpacing the U.S. in the deployment of nuclear-capable missiles. This ostensible "missile gap" pushed the U.S. to scale up its ballistic missile development to catch up to the Soviets.
In the early 1960s, it became clear the missile gap was a myth; the United States, in fact, led the Soviet Union in missile technology. But the mere perception of falling behind an adversary had contributed to a destabilizing buildup of nuclear and ballistic missile capabilities, with all its associated dangers of accidents, miscalculation, and escalation.
Missile gap logic is rearing its ugly head again today, this time with regard to artificial intelligence, a technology that could prove more dangerous than nuclear weapons. China's AI efforts are raising fears among American officials, who are concerned about falling behind. Each new Chinese leap in AI produces a flurry of warnings that China is on its way to dominating the field.
The reality of such a purported AI gap is complicated. Beijing does appear to lead the U.S. in military AI innovation. China also leads the world in AI academic journal citations and commands a formidable talent base. When it comes to the pursuit of AGI, however, China seems to be the laggard: Chinese companies' LLMs are one to three years behind their American counterparts, and OpenAI set the pace for generative models. Furthermore, the Biden administration's 2022 export controls on advanced computer chips cut China off from a key hardware prerequisite for building advanced AI.
Whoever is ahead in the AI race, however, is not the most important question. The mere perception of an arms race may well push companies and governments to cut corners and eschew safety research and regulation. For AI, a technology whose safety relies upon slow, steady, regulated, and collaborative development, an arms race may be catastrophically dangerous.
Despite dramatic successes in AI, humans still cannot reliably predict or control its outputs and actions. While research focused on AI capabilities has produced stunning advancements, the same cannot be said for research in the field of AI alignment, which aims to ensure that AI systems can be controlled by their designers and made to act in ways compatible with humanity's interests.
Anyone who has used ChatGPT understands this lack of human control. It is not difficult to circumvent the program's guardrails, and it is far too easy to coax chatbots into saying offensive things. When it comes to more advanced models, even if designers are brilliant and benevolent, and even if the AI pursues only its human-chosen ultimate goals, there remains a path to catastrophe.
Consider the following thought experiment about how AGI may be deployed. A human-level or superhuman intelligence is programmed by its human creators with a defined, benign goal: say, "develop a cure for Alzheimer's" or "increase my factory's production of paperclips." The AI is given access to a constrained environment of instruments: for instance, a medical lab or a factory.
The problem with such deployment is that, while humans can program an AI to pursue a chosen ultimate end, they cannot feasibly define every instrumental, or intermediate, subgoal the AI will pursue along the way (think acquiring steel before it can make paperclips).
AI works through machine learning: a model trains on vast amounts of data and learns, from that data, how to produce desired outputs from its inputs. However, the process by which the AI connects inputs to outputs, the internal calculations it performs under the hood, is a black box. Humans cannot understand precisely what an AI is learning to do. For example, an AI trained to pick strawberries might instead have learned to pick the nearest red object and, when released into a different environment, pick both strawberries and red peppers. Further examples abound.
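To make the strawberry example concrete, here is a minimal, purely illustrative sketch (the data, features, and classifier are all hypothetical, not drawn from any real system) of how a model can latch onto a spurious shortcut: in its training data, redness always predicts "strawberry," so the learned classifier leans on color and confidently mislabels a red pepper it has never seen.

```python
# Hypothetical illustration: a tiny classifier learns "red = strawberry"
# because color and label always co-occur in its training data, then
# misfires on a red object it has never encountered. All data is made up.
import math
import random

random.seed(0)

# Each example: (redness, strawberry_shape, label). Every training
# strawberry is red, but its shape is sometimes occluded (0.0), so
# color is the more reliable signal available to the learner.
train = []
for _ in range(200):
    if random.random() < 0.5:
        train.append((1.0, random.choice([1.0, 0.0]), 1))  # strawberry
    else:
        train.append((0.0, 0.0, 0))  # leaves, soil, background

# Logistic regression trained by stochastic gradient descent.
w_red, w_shape, bias = 0.0, 0.0, 0.0
lr = 0.1
for _ in range(500):
    for red, shape, label in train:
        z = w_red * red + w_shape * shape + bias
        p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of "strawberry"
        err = p - label
        w_red -= lr * err * red
        w_shape -= lr * err * shape
        bias -= lr * err

def predict(red, shape):
    z = w_red * red + w_shape * shape + bias
    return "strawberry" if z > 0 else "not a strawberry"

print(predict(1.0, 1.0))  # a strawberry -> "strawberry"
print(predict(1.0, 0.0))  # a red pepper -> also "strawberry": the shortcut misfires
```

The model does exactly what its training signal rewarded, yet its learned rule ("pick red things") diverges from the rule its designers intended ("pick strawberries"), and the divergence only surfaces in a new environment.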
In short, an AI might do precisely what it was trained to do and still produce an unwanted outcome. The means to its programmed ends, crafted by an alien, incomprehensible intelligence, could be harmful to humans. The Alzheimer's AI might kidnap billions of people as test subjects. The paperclip AI might turn the entire Earth into metal to make paperclips. Because humans can neither predict every possible means an AI might employ nor teach it to reliably perform a definite action, programming away every dangerous outcome is infeasible.
If sufficiently intelligent, and capable of defeating resistant humans, an AI may well wipe out life on Earth in its single-minded pursuit of its goal. If given control of nuclear command and control, like the Skynet system in Terminator, or access to dangerous chemicals and pathogens, an AI could engineer an existential catastrophe.
How does international competition come into play when discussing the technical issue of alignment? Put simply, the faster AI advances, the less time we will have to learn how to align it. The alignment problem is not yet solved, nor is it likely to be solved in time without slower and more safety-conscious development.
The fear of losing a technological arms race may encourage corporations and governments to accelerate development and cut corners, deploying advanced systems before they are safe. Many top AI scientists and organizations, among them the team at safety lab Anthropic, Open Philanthropy's Ajeya Cotra, DeepMind founder Demis Hassabis, and OpenAI CEO Sam Altman, believe that gradual development is preferable to rapid development because it offers researchers more time to build safety features into new models; it is easier to align a less powerful model than a more powerful one.
Furthermore, fears of China's catching up may imperil efforts to enact AI governance and regulatory measures that could slow down dangerous development and speed up alignment. Altman and former Google CEO Eric Schmidt are on record warning Congress that regulation would slow down American companies to China's benefit. A top Microsoft executive has used the language of the Soviet missile gap. The logic goes: AGI is inevitable, so the United States should get there first. The problem is that, in the words of Paul Scharre, AI technology "poses risks not just to those who lose the race but also to those who win it."
Likewise, the perception of an arms race may preclude the development of a global governance framework on AI. A vicious cycle may emerge where an arms race prevents international agreements, which increases paranoia and accelerates that same arms race.
International conventions on the nonproliferation of nuclear bombs and missiles and the multilateral ban on biological weapons were great Cold War successes that defused arms races. Similar conventions on AI could dissuade countries from rapidly deploying it into riskier domains in an effort to increase national power. Greater global cooperation over AI's deployment would reduce the risk that a misaligned AI is integrated into military or even nuclear applications that would give it a greater capacity to create a catastrophe for humanity.
While it is currently unclear whether government regulation could meaningfully increase the chances of solving AI alignment, regulation, both domestic and multilateral, may at least encourage slower and steadier development.
Fortunately, momentum for private Sino-American cooperation on AI alignment may be building. American AI executives and experts have met with their Chinese counterparts to discuss alignment research and mutual governance. Altman himself recently went on a world tour to discuss AI capabilities and regulation with world leaders. As governments come to understand the risks of AI, the tide may be turning toward a more collaborative world. Such a shift would unquestionably be good news.
However, the outlook is not all rosy: as the political salience of AI continues to increase, questions of speed, regulation, and cooperation may be swept into the larger American partisan debate over China. Regulation may be harder to advance once China hawks begin to associate slowing AI with losing an arms race to China. Recent rhetoric in Congress has emphasized the AI arms race and downplayed the necessity of regulation.
Whether or not it is real, the United States and China appear convinced that the AI arms race is happening, an extremely dangerous proposition for a world that does not otherwise appear to be on the verge of an alignment breakthrough. A detente in this particular technological race, however unlikely it may seem today, may be critical to humanity's long-term flourishing.