Is the further development of artificial intelligence (AI) worth the trouble? On 29 March 2023, in an open letter published on the Future of Life Institute's website, about 1,800 scientists, historians, philosophers and even some billionaires and others (let us call them the Tech Nobility) called for all AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4". If such a pause cannot be enacted quickly, they argued, governments should step in and institute a moratorium.
In a reaction to this letter, decision theorist Eliezer Yudkowsky wrote that the call in the open letter does not go far enough, and insisted that governments should:
"Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs ... Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue data centre by airstrike."
Calls for such extreme measures against AI are based on the fear that AI poses an existential risk to humanity. Following the release of large language models (LLMs) by OpenAI (GPT-4) and Microsoft (Bing), there is growing concern that further versions could move us towards an AI singularity: the point where AI becomes as smart as humans and can self-improve. The result would be runaway intelligence, an intelligence explosion.
There are many ways in which this could spell doom for humanity. All of these are argued by proponents of AI doom to be unavoidable because we do not know how to align AI with human interests (the alignment problem) or how to control how AI is used (the control problem).
A 2020 paper lists 25 ways in which AI poses an existential risk. These can be summarised into four main hypothetical consequences, each of which would be catastrophic.
One is that such a superintelligence causes an accident or does something with the unintended side-effect of curtailing humanity's potential. An example is the thought experiment of the paper-clip maximiser.
A second is that a superintelligent AI may pre-emptively strike against humanity because it sees humanity as its biggest threat.
A third is that a superintelligent AI takes over world government, merges all corporations into one "ascended corporation", and rules forever as a "singleton", locking humanity into a potential North Korean dystopia until the end of time.
A fourth is that a superintelligent AI may wire-head humans (as we wire-head mice), somewhat akin to Aldous Huxley's Brave New World, where humans are kept in a pacified condition, accepting their tech-ruled existence through the use of a drug called Soma.
Issuing highly publicised open letters on AI, like that of 29 March, is nothing new in the tech industry, the main beneficiary of AI. On 28 October 2015 we saw a similar grand public signing by much the same Tech Nobility, also published as an open letter on the Future of Life Institute's website, wherein they did not, however, call for a pause in AI research, but instead stated that "we recommend expanded research" and that the potential benefits are "huge, since everything that civilisation has to offer is a product of human intelligence".
In eight short years the tech industry seems to have moved from hype to hysteria, calling not for further research to advance AI, but instead for airstrikes to destroy rogue data centres.
Why? For at least two reasons. First, the hysteria surrounding AI has steadily risen to exceed the hype. This was to be expected, given humans' cognitive bias towards bad news. After all, the fear that AI will pose an existential threat to humanity is deep-seated. Samuel Butler wrote an essay in 1863, titled Darwin Among The Machines, in which he predicted that intelligent machines would come to dominate:
"The machines are gaining ground upon us; day by day we are becoming more subservient to them ... that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question."
Not much different from Eliezer Yudkowsky writing in 2023. That the hysteria surrounding AI has risen to exceed the hype is, however, not only due to human bias and deep-seated fears of The Machine, but also because public distrust in AI has grown between 2015 and 2023.
None of the benefits touted in the 2015 open letter have materialised. Instead, we saw AI being of little value during the global Covid-19 crisis, we have seen a select few rich corporations getting more monopoly power and richer on the back of harvesting people's private data, and we have seen the rise of the surveillance state.
At the same time, productivity, research efficiency, tech progress and science have all declined in the most advanced economies. People are more likely to believe the worst about AI, and the establishment of several institutes that earn their living from peddling existential risks further feeds the stream of newspaper articles that drive the hysteria.
The second reason for the tech industry's flip from hype to hysteria between 2015 and 2023 is that another AI winter, or at least an AI autumn, may be approaching. The Tech Nobility is freaking out.
Not only are they facing growing public distrust and increasing scrutiny by governments, but the tech industry has taken serious knocks in recent months. These include more than 100,000 industry job cuts, the collapse of Silicon Valley Bank (the second-largest bank failure in US history), declining stock prices and growing fears that the tech bubble is about to burst.
Underlying these cutbacks and declines is a growing realisation that new technologies have failed to meet expectations.
The job cuts, bank failures and tech-bubble fears compound the market's evaluation of an AI industry whose costs increasingly exceed its benefits.
AI is expensive: developing and rolling out LLMs such as GPT-4 and Bing requires investment, with infrastructure costs in the billions of dollars and training costs in the millions. GPT-4 reportedly has on the order of 100 trillion parameters, and the total training compute it needed has been estimated at about 18 billion petaflops; in comparison, the famous AlphaGo, which beat the best human Go player, needed less than a million petaflops of compute.
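To put those figures in perspective, here is a minimal back-of-the-envelope sketch in Python, assuming the two training-compute estimates quoted above (neither is an official disclosure):

```python
# Back-of-the-envelope comparison using the estimates quoted in this article.
# Both numbers are third-party estimates, not official disclosures.
gpt4_training_compute_pflop = 18e9    # ~18 billion petaflops (estimate cited above)
alphago_training_compute_pflop = 1e6  # "less than a million" petaflops

ratio = gpt4_training_compute_pflop / alphago_training_compute_pflop
print(f"GPT-4's estimated training compute is roughly {ratio:,.0f} times AlphaGo's")
# Prints: roughly 18,000 times -- a scale of spending only a handful of firms can afford
```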
The point is that these recent LLMs are pushing against the boundaries of what can be thrown at deep-learning methods, and they put sophisticated AI systems out of reach for most firms and even most governments. Not surprisingly, then, the adoption of AI systems by firms in the US, arguably the country most advanced in terms of AI, has been very low: a US Census Bureau survey of 800,000 firms found that only 2.9% were using machine learning as recently as 2018.
AI's existential risk is at present confined to the philosophical and literary realms. This does not mean that the narrow AI we have cannot cause serious harm: there are many examples of Awful AI, and we should continue to be vigilant.
It also does not mean that some day in the future the existential risk will not be real, but we are still too far from this to know how to do anything sensible about it. The open letter's call to pause AI for six months is more likely a response borne out of desperation in an industry that is running out of steam.
It is a perfect example of a virtue signal and an advertisement for GPT-4 (called a tool of hi-tech plagiarism by Noam Chomsky and a failure by Gary Marcus), all rolled into one grand publicity stunt. DM
Wim Naudé is Visiting Professor in Technology, Innovation, Marketing and Entrepreneurship at RWTH Aachen University, Germany; Distinguished Visiting Professor at the University of Johannesburg; a Fellow of the African Studies Centre, Leiden University, the Netherlands; and an AI Expert at the OECD's AI Policy Observatory, Paris, France.