With attention-grabbing headlines about the possible end of the world at the hands of an artificial superintelligence, it's easy to get caught up in the AI doomerism hype and imagine a future where AI systems wreak havoc on humankind.
Discourse surrounding any unprecedented moment in history -- the rapid growth of AI included -- is inevitably complex, characterized by competing beliefs and ideologies. Over the past year and a half, concerns have bubbled up regarding both the short- and long-term risks of AI, sparking debate over which issues should be prioritized.
Although considering the risks AI poses and the technology's future trajectory is worthwhile, discussions of AI can also veer into sensationalism. This hype-driven engagement detracts from productive conversation about how to develop and maintain AI responsibly -- because, like it or not, AI seems to be here to stay.
"We're all pursuing the same thing, which is that we want AI to be used for good and we want it to benefit people," said Brian Green, director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University.
As AI has gained prominence, so has the conversation surrounding its risks. Concerns range from immediate ethical and societal harms to long-term, more hypothetical risks, including whether AI could pose an existential threat to humanity. Those focused on the latter, who work in a field known as AI safety, see AI as both an avenue for innovation and a source of potentially devastating risk.
Spencer Kaplan, an anthropologist and doctoral candidate at Yale University, studies the AI community and its discourse around AI development and risk. During his time in the San Francisco Bay Area AI safety scene, he's found that many experts are both excited and worried about the possibilities of AI.
"One of the key points of agreement is that generative AI is both a source of incredible promise and incredible peril," Kaplan said.
One major long-term concern about AI is existential risk, often abbreviated as x-risk, the fear that AI could someday cause the mass destruction of humans. An AI system with unprecedented and superhuman levels of intelligence, often referred to as artificial general intelligence (AGI), is considered a prerequisite for this type of destruction. Some AI safety researchers postulate that AGI with intelligence indistinguishable from or superior to that of humans would have the power to wipe out humankind. Opinions in the AI safety scene on the likelihood of such a hostile takeover event vary widely; some consider it highly probable, while others only acknowledge it as a possibility, Kaplan said.
In some circles, the prevailing belief is that long-term risks are the most concerning, regardless of their likelihood -- a view influenced by tenets of effective altruism (EA), a philosophical and social movement that first gained prominence in Oxford, U.K., and the Bay Area in the late 2000s. Effective altruists' stated aim is to identify the most impactful, cost-effective ways to help others using quantifiable evidence and reasoning.
In the context of AI, advocates of EA and AI safety have coalesced around a shared emphasis on high-impact global issues. In particular, both groups are influenced by longtermism, the belief that focusing on the long-term future is an ethical priority and, consequently, that potential existential risks are most deserving of attention. The prevalence of this perspective, in turn, has meant prioritizing research and strategies that aim to mitigate existential risk from AI.
Fears about extinction-level risk from AI might seem widespread; a group of industry leaders publicly said as much in 2023. A few years prior, in 2021, a subgroup of OpenAI developers split off to form their own safety-focused AI lab, Anthropic, motivated by a belief in the long-term risks of AI and AGI. More recently, Geoffrey Hinton, sometimes referred to as the godfather of AI, left Google, citing fears about the power of AI.
"There is a lot of sincere belief in this," said Jesse McCrosky, a data scientist and principal researcher for open source research and investigations at Mozilla. "There's a lot of true believers among this community."
As conversation around the long-term risks of AI intensifies, the term AI doomerism has emerged to refer to a particularly extreme subset of those concerned about existential risk and AGI -- often dismissively, sometimes as a self-descriptor. Among the most outspoken is Eliezer Yudkowsky, who has publicly expressed his belief in the likelihood of AGI and the downfall of humanity due to a hostile superhuman intelligence.
However, the term is more often used as a pejorative than as a self-label. "I have never heard of anyone in AI safety or in AI safety with longtermist concerns call themselves a doomer," Kaplan said.
Although those in AI safety typically see the most pressing AI problems as future risks, others -- often called AI ethicists -- say the most pressing problems of AI are happening right now.
"Typically, AI ethics is more social justice-oriented and looking at the impact on already marginalized communities, whereas AI safety is more the science fiction scenarios and concerns," McCrosky said.
For years, individuals have raised serious concerns about the immediate implications of AI technology. AI tools and systems have already been linked to racial bias, political manipulation and harmful deepfakes, among other notable problems. Given AI's wide range of applications -- in hiring, facial recognition and policing, to name just a few -- its magnification of biases and opportunity for misuse can have disastrous effects.
"There's already unsafe AI right now," said Chirag Shah, professor in the Information School at the University of Washington and founding co-director of the center for Responsibility in AI Systems and Experiences. "There are some actual important issues to address right now, including issues of bias, fairness, transparency and accountability."
As Emily Bender, a computational linguist and professor at the University of Washington, has argued, conversations that overlook these types of AI risks are both dangerous and privileged, as they fail to account for AI's existing disproportionate effect on marginalized communities. Focusing solely on hypothetical future risk means missing the important issues of the present.
"[AI doomerism] can be a distraction from the harms that we already see," McCrosky said. "It puts a different framing on the risk and maybe makes it easier to sweep other things under the carpet."
Rumman Chowdhury, co-founder of the nonprofit Humane Intelligence, has long focused on tech transparency and ethics, including in AI systems. In a 2023 Rolling Stone article, she commented that the demographics of doomer and x-risk communities skew white, male and wealthy -- and thus tend not to include victims of structural inequality.
"For these individuals, they think that the biggest problems in the world are can AI set off a nuclear weapon?" Chowdhury told Rolling Stone.
McCrosky recently conducted a study on racial bias in multimodal LLMs. When he asked a model to determine whether a person was trustworthy based solely on facial images, he found that racial bias often influenced its decision-making. Such biases are deeply concerning and have serious implications, especially in high-stakes applications such as military and defense.
"We've already seen significant harm from AI," McCrosky said. "These are real harms that we should be caring a whole lot more about."
In addition to fearing that discussions of existential risk overshadow current AI-related harms, many researchers also question the scientific foundation for concerns about superintelligence. If there is little basis for the idea that AGI could be developed in the first place, they worry about the effects such sensational language could have.
"We jump to [the idea of] AI coming to destroy us, but we're not thinking enough about how that happens," Shah said.
McCrosky shared this skepticism regarding the existential threat from AI. The plateau currently reached by generative AI isn't indicative of the AGI that longtermists worry about, he said, and the path toward AGI remains unclear.
The transformer, the neural network architecture underlying today's generative AI, was a revolutionary concept when Google researchers published the seminal paper "Attention Is All You Need" in 2017. Since then, AI labs have used transformer-based architectures to build the LLMs that power generative AI tools, such as OpenAI's chatbot, ChatGPT.
Over time, LLMs have become capable of handling increasingly large context windows, meaning that the AI system can process greater amounts of input at once. But larger context windows come with higher computational costs: in a standard transformer, attention compares every token with every other token, so compute and memory grow quadratically with input length. Technical issues, like hallucinations, have also remained a problem even for highly powerful models. Consequently, scientists are now contending with the possibility that advancing to the next frontier in AI may require a completely new architecture.
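For a concrete sense of why context length is costly, here is a minimal sketch (not from the article) of single-head scaled dot-product attention in NumPy; the context lengths in the loop are illustrative only. The n-by-n score matrix is the quadratic term that dominates as windows grow.

```python
# Minimal sketch of why attention cost grows with context length,
# assuming standard (non-optimized) self-attention.
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """Single-head scaled dot-product attention over x of shape (n, d)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                  # (n, n): every token vs. every other
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over each row
    return weights @ x                             # weighted mix of all tokens

# Doubling the context window roughly quadruples the attention
# compute and memory per head, per layer.
for n in (1_000, 2_000, 4_000, 8_000):
    print(f"context {n:>5} tokens -> {n * n:>12,} pairwise scores per head, per layer")

out = self_attention(np.random.randn(8, 4))        # tiny sanity check
assert out.shape == (8, 4)
```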
"[Researchers] are kind of hitting a wall when it comes to transformer-based architecture," Kaplan said. "What happens if they don't find this new architecture? Then, suddenly, AGI becomes further and further off -- and then what does that do to AI safety?"
Given the uncertainty around whether AGI can be developed in the first place, it's worth asking who stands to benefit from AI doomerism talk. When AI developers advocate for investing more time, money and attention into AI due to possible AGI risks, a self-interested motive may also be at play.
"The narrative comes largely from people that are building these systems and are very excited about these systems," McCrosky said. While he noted that AI safety concerns are typically genuine, he also pointed out that such rhetoric "becomes very self-serving, in that we should put all our philanthropic resources towards making sure we do AI right, which is obviously the thing that they want to do anyway."
Despite the range of beliefs and motivations, one thing is evident: The dangers associated with AI feel incredibly tangible to those who are concerned about them.
A future with extensive integration of AI technologies is increasingly easy to imagine, and it's understandable why some genuinely believe these developments could lead to serious dangers. Moreover, people are already affected by AI every day in unintended ways, from harmless but frustrating outcomes to dangerous and disenfranchising ones.
To foster productive conversation amid this complexity, experts are emphasizing the importance of education and engagement. When public awareness of AI outpaces understanding, a knowledge gap can emerge, said Reggie Townsend, vice president of data ethics at SAS and member of the National AI Advisory Committee.
"Unfortunately, all too often, people fill the gap between awareness and understanding with fear," Townsend said.
One strategy for filling that gap is education, which Shah sees as the best way to build a solid foundation for those entering the AI risk conversation. "The solution really is education," he said. "People need to really understand and learn about this and then make decisions and join the real discourse, as opposed to hype or fear." That way, sensational discourse, like AI doomerism, doesn't eclipse other AI concerns and capabilities.
Technologists have a responsibility to ensure that overall societal understanding of AI improves, Townsend said. The hope is that better AI literacy will lead to more responsible discourse and engagement with AI.
Townsend emphasized the importance of meeting people where they are. "Oftentimes, this conversation gets way too far ahead of where people actually are in terms of their willingness to accept and their ability to understand," he said.
Lastly, polarization impedes progress. Those focused on current concerns and those worried about long-term risk are more connected than they might realize, Green said. Seeing these perspectives as contradictory or in a zero-sum way is counterproductive.
"Both of their projects are looking at really important social impacts of technology," he said. "All that time spent infighting is time that could be spent actually solving the problems that they want to solve."
In the wake of recent and rapid AI advancements, harms are being addressed on multiple fronts. Various groups and individuals are working to train AI more ethically, pushing for better governance to prevent misuse and considering the impact of intelligent systems on people's livelihoods, among other endeavors. Seeing these efforts as inherently contradictory -- or rejecting others' concerns out of hand -- runs counter to a shared goal that everyone can hopefully agree on: If we're going to build and use powerful AI, we need to get it right.
Olivia Wisbey is associate site editor for TechTarget Enterprise AI. She graduated from Colgate University with Bachelor of Arts degrees in English literature and political science, where she served as a peer writing consultant at the university's Writing and Speaking Center.
Lev Craig contributed reporting and research to this story.