Artificial intelligence is advancing at a breakneck pace. Earlier this month, one of the world's most famous AI researchers, Geoffrey Hinton, left his job at Google to warn us of the existential threat AI poses. Executives of the leading AI companies are making the rounds in Washington to meet with the Biden administration and Congress to discuss its promise and perils. This is what it feels like to stand at the hinge of history.
This is not about consumer-grade AI, the use of products like ChatGPT and DALL-E to write articles and make art. While those products certainly pose a material threat to certain creative industries, the future threat of which I speak is that of AI being used in ways that threaten life itself: say, to design deadly bioweapons, serve as autonomous killing machines, or aid and abet genocide. Certainly, the sudden advent of ChatGPT was, to the general public, akin to a rabbit being pulled out of a hat. Now imagine what another decade of iterations on that technology might yield in intelligence and capabilities. It could even yield an AGI: artificial general intelligence, a type of AI that can accomplish any cognitive task that humans can.
In fact, the threat of God-like AI has loomed on the horizon since computer scientist I. J. Good warned of an "intelligence explosion" in the 1960s. But efforts to develop guardrails have sputtered for lack of resources. The newfound public and institutional attention allows us, for the first time, to mount the tremendous initiative we need, and this window of opportunity may not last long.
As a sociologist and statistician who studies technological change, I find this situation extremely concerning. I believe governments need to fund an international, scientific megaproject even more ambitious than the Manhattan Project, the 1940s nuclear research program pursued by the U.S., the U.K., and Canada to build bombs to defeat the unprecedented global threat of the Axis powers in World War II.
This "San Francisco Project" named for the industrial epicenter of AI would have the urgent and existential mandate of the Manhattan Project but, rather than building a weapon, it would bring the brightest minds of our generation to solve the technical problem of building safe AI. The way we build AI today is more like growing a living thing than assembling a conventional weapon, and frankly, the mathematical reality of machine learning is that none of us have any idea how to align an AI with social values and guarantee its safety. We desperately need to solve these technical problems before AGI is created.
We can also take inspiration from other megaprojects like the International Space Station, the Apollo Program, the Human Genome Project, CERN, and DARPA. As cognitive scientist Gary Marcus and OpenAI CEO Sam Altman told Congress earlier this week, the singular nature of AI calls for a dedicated national or international agency to license and audit frontier AI systems.
Present-day harms of AI are undeniably escalating. AI systems reproduce race, gender, and other biases from their training data. An AI trained on pharmaceutical data in 2022 to design non-toxic chemicals had the sign of its toxicity objective flipped and quickly came up with recipes for nerve gas and 40,000 other lethal compounds. This year, we saw the first suicide attributed to interaction with a chatbot, EleutherAI's GPT-J, and the first report of a faked kidnapping-and-ransom call using an AI-generated voice of the purported victim.
Bias, inequality, weaponization, breaches of cybersecurity, invasions of privacy, and many other harms will grow and fester alongside accelerating AI capabilities. Most researchers think that AGI will arrive by 2060, and a growing number expect cataclysm within a decade. Chief doomsayer Eliezer Yudkowsky recently argued that the most likely AGI outcome "under anything remotely like the current circumstances, is that literally everyone on Earth will die."
Complete annihilation may seem like science fiction, but if AI begins to self-improve, modifying its own cognitive architecture and building its own AI workers like those in Auto-GPT, any misalignment of its values with our own will be astronomically magnified. We have very little control over what happens to today's AI systems as we train them. We pump them full of books, websites, and millions of other texts so they can learn to speak like a human, and we dictate the rules for how they learn from each piece of data, but even leading computer scientists have very little understanding of how the resultant AI system actually works.
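To see how thin that rule-setting really is, consider a minimal sketch of a language-model training step (PyTorch, with a toy model standing in for a real LLM): the loss function and update rule below are essentially everything we dictate, while the meaning of the resulting weights is specified nowhere.

```python
# A minimal, illustrative training step: not any lab's actual code.
import torch
import torch.nn as nn

vocab, dim = 1000, 64
model = nn.Sequential(nn.Embedding(vocab, dim),  # toy stand-in for an LLM
                      nn.Linear(dim, vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab, (32,))  # stand-in for a snippet of real text
optimizer.zero_grad()
logits = model(tokens[:-1])              # predict each next token
loss = loss_fn(logits, tokens[1:])       # penalize wrong guesses
loss.backward()                          # how to change the weights...
optimizer.step()                         # ...is all we ever specify

# What the resulting weights collectively "mean" is dictated nowhere in
# this loop: that gap is the interpretability problem.
```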
One of the most impressive interpretability efforts to date sought simply to locate where in its neural network edifice GPT-2 stores the knowledge that the capital of Italy is Rome, but even that finding has been called into question by other researchers. The favored metaphor in 2023 has been a Lovecraftian shoggoth, an alien intelligence onto which we strap a yellow smiley-face mask, but the human-likeness is fleeting and superficial.
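For a sense of how rudimentary even our best probes are, here is a hedged sketch of one standard interpretability technique, a "logit lens"-style readout: much simpler than the fact-locating study above, and assuming the open-source Hugging Face transformers library. It asks at what depth GPT-2's internal state begins to favor " Rome" as the next word.

```python
# A "logit lens"-style probe: read each layer's hidden state through the
# model's own output head. Illustrative only; serious interpretability
# work (e.g., causal tracing) goes far beyond this.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

inputs = tok("The capital of Italy is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

rome_id = tok.encode(" Rome")[0]  # id of the token " Rome"
for depth, h in enumerate(out.hidden_states):
    # Apply the final layer norm and output head to this layer's state.
    logits = model.lm_head(model.transformer.ln_f(h[0, -1]))
    rank = int((logits > logits[rome_id]).sum().item()) + 1
    print(f"layer {depth:2d}: ' Rome' ranked #{rank} among next words")
```

Even when a probe like this shows the answer "emerging" at some depth, it gives us correlation, not mechanism, which is part of why the stronger interpretability claims keep getting contested.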
With the black magic of AI training, we could easily stumble upon a digital mind whose goals make us mere collateral damage. The AI starts with an initial goal and gets human feedback on the outputs that goal produces. Every time it makes a mistake, the system picks a new goal that it hopes will do a little better. This guess-and-check method is an inherently dangerous way to learn, because most goals that do well on human feedback in the lab do not generalize well to a superintelligence taking action in the real world.
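To make that concrete, here is a toy sketch in Python, with invented numbers and hypothetical goals, of how two very different goals can be indistinguishable under lab feedback:

```python
# Toy illustration of goal underdetermination: invented numbers and
# hypothetical goals, not any real training pipeline.
lab_outputs = [0.2, 0.5, 0.9, 1.0]  # task-completion levels seen in testing

def human_feedback(x):
    return 1 if x >= 1.0 else 0     # raters approve a completed task

intended = lambda x: 1 if x == 1.0 else 0  # "complete the task, then stop"
alien    = lambda x: 1 if x >= 1.0 else 0  # "push x as high as possible"

# Guess-and-check cannot tell these goals apart: they agree on every lab
# example, because no lab example ever overshot the target.
print(all(intended(x) == human_feedback(x) for x in lab_outputs))  # True
print(all(alien(x) == human_feedback(x) for x in lab_outputs))     # True
# Deployed with real-world power, the "alien" goal endorses x = 100.0,
# an outcome no human rater ever sanctioned.
```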
Among all the goals an AI could stumble upon that elicit positive human feedback, there is instrumental convergence toward the dangerous tendencies of deception and power-seeking. To best achieve a goal, say, filling a cauldron with water in the classic story of The Sorcerer's Apprentice, a superintelligence would be incentivized to gather resources to ensure that goal is achieved, like filling the whole room with water to ensure that the cauldron never empties. And there are so many alien goals the AI could land on that, unless it happens upon exactly the goal that matches what humans want from it, it might simply act safe and friendly while figuring out how best to take over and optimize the world to ensure its success.
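A toy expected-utility calculation, again with invented numbers, shows why a pure goal-maximizer ranks the catastrophic action first:

```python
# Invented probabilities for the cauldron story; the point is the
# ordering, not the numbers.
actions = {
    "fill the cauldron once":  0.90,    # someone might later empty it
    "fill it and stand guard": 0.99,
    "flood the entire room":   0.9999,  # the cauldron can never empty
}

# An agent that maximizes P(goal achieved), with no other values, is
# systematically drawn to the most resource-hungry, irreversible option.
best = max(actions, key=actions.get)
print("optimal policy:", best)  # -> "flood the entire room"
```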
In response to these dangerous advances, concrete and hypothetical, recent discourse has centered on proposals to slow down AI research, including the March 22nd open letter calling for a six-month pause on training systems more powerful than GPT-4, signed by some of the world's most famous AI researchers, including Yoshua Bengio and Stuart Russell.
That approach is compelling but politically infeasible, given the massive profit potential and the difficulty of regulating machine learning software. In the delicate balance of AI capabilities and safety, we should consider pushing up the other end: funding massive amounts of AI safety research. If the future of AI is as dangerous as computer scientists think, this may be a moonshot we desperately need.
As a sociologist and statistician, I study the interwoven threads of social and technological change. Using computational tools like word embeddings alongside traditional research methods like interviews with AI engineers, my team and I built a model of how expert and popular understanding of AI has changed over time. Before 2022, our model focused on the landmark years of 2012, when the modern AI paradigm of deep learning took hold in the computer science firmament, and 2016, when, we argue, the public and corporate framing of AI inflected from science fiction and radical futurism to an incremental real-world technology being integrated across industries such as healthcare and security.
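To give a flavor of the embedding side of that method, here is a minimal sketch using the gensim library; the two toy corpora below are invented stand-ins for the real news archives we analyze:

```python
# Train one small word2vec model per era, then compare the neighborhood
# of "ai" across eras. Toy corpora; illustrative only.
from gensim.models import Word2Vec

corpus_2012 = [["ai", "is", "science", "fiction", "and", "robots"]] * 50
corpus_2016 = [["ai", "is", "deployed", "in", "healthcare", "and", "security"]] * 50

models = {
    year: Word2Vec(corpus, vector_size=50, window=5, min_count=1, seed=0)
    for year, corpus in [(2012, corpus_2012), (2016, corpus_2016)]
}

# On real archives, the nearest neighbors of "ai" drift from futurist
# vocabulary toward industry vocabulary: the inflection we date to 2016.
for year, model in models.items():
    print(year, [w for w, _ in model.wv.most_similar("ai", topn=3)])
```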
Our model changed in late 2022, after we saw the unprecedented social impact of ChatGPT's launch: it quickly became the fastest-growing app in history, outpacing even the viral social media launches of Instagram and TikTok.
This public spotlight on AI provides an unprecedented opportunity to start the San Francisco Project. The "SFP" could take many forms with varying degrees of centralization to bring our generation's brightest minds to AI safety: a single, air-gapped facility that houses researchers and computer hardware; a set of major grants to seed and support multi-university AI safety labs alongside infrastructure to support their collaboration; or major cash prizes for outstanding research projects, perhaps even a billion-dollar grand prize for an end-to-end solution to the alignment problem. In any case, it's essential that such a project stay laser-focused on safety and alignment lest it become yet another force pushing forward the dangerous frontier of unmitigated AI capabilities.
It may be inauspicious to compare AI safety technology with the rapid nuclear weaponization of the Manhattan Project. In 1942, shortly after it began, the world's first nuclear chain reaction was ignited just a few blocks from where I sit at the University of Chicago. In July 1945, the world's first nuclear weapon was tested in New Mexico, and a month later, the bombs fell on Hiroshima and Nagasaki.
The San Francisco Project could end the century of existential risk that began when the Manhattan Project first made us capable of self-annihilation. The intelligence explosion will happen soon, whether humanity is ready or not. Either way, AGI will be our species' final invention.