One of the most prominent narratives about AGI, or artificial general intelligence, in the popular media these days is the AI doomer narrative. It claims that we're in the midst of an arms race to build AGI, propelled by a relatively small number of extremely powerful AI companies like DeepMind, OpenAI, Anthropic, and Elon Musk's xAI (which aims to design an AGI that uncovers truths about the universe by eschewing political correctness). All are backed by billions of dollars: DeepMind's Demis Hassabis says that Google will invest over $100 billion in AI, while OpenAI has thus far received $13 billion from Microsoft, Anthropic has $4 billion in investments from Amazon, and Musk just raised $6 billion for xAI.
Many doomers argue that the AGI race is catapulting humanity toward the precipice of annihilation: if we create an AGI in the near future without knowing how to properly align the AGI's value system, then the default outcome will be total human extinction. That is, literally everyone on Earth will die. And since it appears that we're on the verge of creating AGI (or so they say), this means that you and I and everyone we care about could be murdered by a misaligned AGI within the next few years.
These doomers thus contend, with apocalyptic urgency, that we must pause or completely ban all research aiming to create AGI. Pausing or banning this research would give researchers more time to solve the problem of aligning AGI with our human values, which is necessary to ensure that the AGI is sufficiently safe. Failing to do this means the AGI will be unsafe, and the most likely consequence of an unsafe AGI will be the untimely death of everyone on our planet.
The doomers contrast with the AI accelerationists, who hold a much more optimistic view. They claim that the default outcome of AGI will be a bustling utopia: we'll be able to cure diseases, solve the climate crisis, figure out how to become immortal, and even colonize the universe. Consequently, these accelerationists, some of whom use the acronym "e/acc" (pronounced "ee-ack") to describe their movement, argue that we should accelerate rather than pause or ban AGI research. On their view, there isn't enough money being funneled into the leading AI companies, and calls for government regulation are deeply misguided because they will only delay the arrival of utopia.
Some even contend that any deceleration of AI will cost lives: deaths that could have been prevented by an AI that was itself prevented from existing are, on this view, a form of murder. So, if you advocate for slowing down research on advanced AI, you are no better than a murderer.
But there's a great irony to this whole bizarre predicament: historically speaking, no group has done more to accelerate the race to build AGI than the AI doomers. The very people screaming that the AGI race is a runaway train barreling toward the cliff of extinction have played an integral role in starting these AI companies. Some have helped found these companies, while others provided crucial early funding that enabled such companies to get going. They wrote papers, books and blog posts that popularized the idea of AGI, and organized conferences that inspired interest in the topic. Many of those worried that AGI will kill everyone on Earth have gone on to work for the leading AI companies, and indeed the two techno-cultural movements that initially developed and promoted the doomer narrative, namely Rationalism and Effective Altruism, have been at the very heart of the AGI race since its inception.
In a phrase, the loudest voices within the AI doomer camp have been disproportionately responsible for launching and sustaining the very technological race that they now claim could doom humanity in the coming years. Despite their apocalyptic warnings of near-term annihilation, the doomers have in practice been more effective at accelerating AGI than the accelerationists themselves.
Consider a few examples, beginning with the Skype cofounder and almost-billionaire Jaan Tallinn, who also happens to be one of the biggest financial backers of the Rationalist and Effective Altruist (EA) movements. Tallinn has repeatedly claimed that AGI poses an enormous threat to the survival of humanity. Or, in his words, it is "by far the biggest risk facing us this century," bigger than nuclear war, global pandemics or climate change.
In 2014, Tallinn co-founded a Boston-based organization called the Future of Life Institute (FLI), which has helped raise public awareness of the supposedly grave dangers of AGI. Last year, FLI released an open letter calling on all AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," GPT-4 being the most advanced system that OpenAI had released at the time. The letter warns that AI labs have become "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control," a race the signatories consider profoundly dangerous. Tallinn was one of the first signatories.
Tallinn is thus deeply concerned about the race to build AGI. He's worried that this race might lead to our extinction in the near future. Yet, through his wallet, he has played a crucial role in sparking and fueling the AGI race. He was an early investor in DeepMind, which Demis Hassabis, Shane Legg and Mustafa Suleyman cofounded in 2010 with the explicit goal of creating AGI. After OpenAI started in 2015, he had a close connection to some people at the company, meeting regularly with individuals like Dario Amodei, a member of the EA movement and a key figure in the direction of OpenAI. (Tallinn himself is closely aligned with the EA movement.)
In 2021, Amodei and six other former employees of OpenAI founded Anthropic, a competitor of both DeepMind and OpenAI. Where did Anthropic get its money? In part from Tallinn, who donated $25 million and led a $124 million Series A fundraising round to help the company get started.
Here we have one of the leading voices in the doomer camp claiming that the AGI race could result in everyone on Earth dying, while simultaneously funding the biggest culprits in this reckless race toward AGI. I'm reminded of something that Noam Chomsky said in 2002, during the early years of George Bush's misguided War on Terror. Referring to the U.S., Chomsky declared: "We certainly want to reduce the level of terror. There is one easy way to do that: stop participating in it." The same idea applies to the AGI race: if AI doomers are really so worried that the race to build AGI will lead to an existential catastrophe, then why are they participating in it? Why have they funded, and in some cases founded, the very companies responsible for supposedly pushing humanity toward the precipice of total destruction?
In fact, Amodei, Shane Legg, Sam Altman and Elon Musk, all of whom founded or cofounded some of the leading AI companies, have expressed doomer concerns that AGI could annihilate our species in the near term. In an interview with the EA organization 80,000 Hours, Amodei referenced the possibility that an AGI could destroy humanity, saying "I can't see any reason in principle why that couldn't happen." He added that this is a possible outcome and that "at the very least as a tail risk we should take it seriously."
Similarly, DeepMind cofounder Shane Legg wrote on the website LessWrong in 2011 that AGI is his "number 1 risk for this century." That was one year after DeepMind was created. In 2015, the year he co-founded OpenAI with Elon Musk and others, Altman declared, "I think AI will most likely sort of lead to the end of the world," adding on his personal blog that "the development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."
Then there's Musk, who has consistently identified AGI as the biggest existential threat, one far more dangerous than nukes. In early 2023, Musk signed the open letter from FLI calling for a six-month pause on advanced AI research. Just four months later, he announced that he was starting yet another AI company: xAI.
Over and over again, the very same people saying that AGI could kill us all have done more than anyone else to launch and accelerate the race toward AGI. This is even true of the most famous doomer in the world today, a self-described genius named Eliezer Yudkowsky. In a Time magazine article from last year, Yudkowsky argued that our only hope of survival is to immediately shut down all of the large computer farms where the most powerful AIs are refined. Countries should sign an international treaty to halt AGI research and be willing to engage in military airstrikes against rogue datacenters to enforce this treaty.
Yudkowsky is so worried about the AGI apocalypse that he claims we should be willing to risk an all-out thermonuclear war that kills nearly everyone on Earth to prevent AGI from being built in the near future. He then gave a TED talk in which he reiterated his warnings: if we build AGI without knowing how to make it safe (and, he claims, we have no idea how to make it safe right now), then literally everyone on Earth will die.
Yet I doubt that any single individual has promoted the idea of AGI more than Yudkowsky himself. In a very significant way, he put AGI on the map, inspired many people involved in the current AGI race to become interested in the topic, and organized conferences that brought together early AGI researchers to cross-pollinate ideas.
Consider the Singularity Summit, which Yudkowsky co-founded in 2006 with the futurist Ray Kurzweil (who later joined Google as an engineer) and tech billionaire Peter Thiel. This summit, held annually until 2012, focused on the promises and perils of AGI, and included the likes of Tallinn, Hassabis and Legg on its list of speakers. In fact, both Hassabis and Legg gave talks about AGI-related issues in 2010, shortly before co-founding DeepMind. At the time, DeepMind needed money to get started, so after the Singularity Summit, Hassabis followed Thiel back to his mansion and asked him for financial support to start DeepMind. Thiel obliged, offering Hassabis $1.85 million, and that's how DeepMind was born. (The following year, in 2011, Tallinn made his early investment in the company.)
If not for Yudkowsky's Singularity Summit, DeepMind might not have gotten off the ground, or at least not when it did. Similar points could be made about various websites and mailing lists that Yudkowsky created to promote the idea of AGI. For example, AGI has been a major focus of the community blogging website LessWrong, created by Yudkowsky around 2009. This website quickly became the online epicenter for discussions about how to build AGI, the utopian future that a "safe" or "aligned" AGI could bring about, and the supposed existential risks associated with AGIs that are unsafe or misaligned. As noted above, it was on LessWrong that Legg identified AGI as the number one threat facing humanity, and records show that Legg was active on the website very early on, sometimes commenting directly under articles by Yudkowsky about AGI and related issues.
Or consider the SL4 mailing list that Yudkowsky created in 2001, which described itself as dedicated to "advanced topics in transhumanism and the Singularity, including strategies to accelerate the Singularity." The Singularity is a hypothetical future event in which advanced AI begins to redesign itself, leading to a superintelligent AGI system over the course of weeks, days, or perhaps even minutes. Once again, Legg contributed to the list, which indicates that the connections between Yudkowsky, the world's leading doomer, and Legg, cofounder of one of the biggest AI companies involved in the AGI race, go back more than two decades.
These are just a few reasons that Altman himself wrote on Twitter (now X) last year that Yudkowsky, the world's leading AI doomer, has probably contributed more than anyone to the AGI race. In Altman's words, Yudkowsky "got many of us interested in AGI, helped DeepMind get funding at a time when AGI was extremely outside the Overton window, was critical in the decision to start OpenAI, etc." He then joked that Yudkowsky may deserve the Nobel Peace Prize for this. (These quotes have been lightly edited to improve readability.)
Though Altman was partly trolling Yudkowsky for complaining about a situation (the AGI race) that Yudkowsky was instrumental in creating, Altman isn't wrong. As a New York Times article from 2023 notes, "Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind." One could say something similar about Anthropic: it was Yudkowsky's blog posts that convinced Tallinn that AGI could be existentially risky, and Tallinn later played a crucial role in helping Anthropic get started, which further accelerated the race to build AGI. The connections and overlaps between the doomer movement and the race to build AGI are extensive and deep; the more one scratches the surface, the clearer these links appear.
Indeed, I mentioned the Rationalist and EA movements earlier. Rationalism was founded by Yudkowsky via the LessWrong website, while EA emerged around the same time, in 2009, and could be seen as the sibling of Rationalism. These communities overlap considerably, and both have heavily promoted the idea that AGI poses a profound threat to our continued existence this century.
Yet Rationalists and EAs are also some of the main participants in and contributors to the very race they believe could precipitate our doom. As noted above, Dario Amodei (co-founder of Anthropic) is an EA, and Tallinn has given talks at major EA conferences and donated tens of millions of dollars to both movements. Similarly, an Intelligencer article about Altman reports that he once embraced EA, and a New York Times profile describes him as "the product of a strange, sprawling online community that began to worry, around the same time Mr. Altman came to the Valley, that artificial intelligence would one day destroy the world. Called rationalists or effective altruists, members of this movement were instrumental in the creation of OpenAI."
Yet another New York Times article notes that the EA movement beat the drum so loudly about the dangers of AGI that many young people became inspired to work on the topic. Consequently, "all of the major AI labs and safety research organizations contain some trace of effective altruism's influence, and many count believers among their staff members." The article then observes that no major AI lab "embodies the EA ethos as fully as Anthropic": many of the company's early hires were effective altruists, and much of its start-up funding came from wealthy EA-affiliated tech executives, not just Tallinn but also Facebook co-founder Dustin Moskovitz, who, like Tallinn, has donated considerably to EA projects.
There is a great deal to say about this topic, but the key point for our purposes is that the doomer narrative largely emerged out of the Rationalist and EA movements, the very movements that have been pivotal in founding, funding and inspiring all the major AI companies now driving the race to build AGI.
Again, one wants to echo Chomsky in saying: if these communities are so worried about the AGI apocalypse, why have they done so much to create the very conditions that enabled the AGI race to get going? The doomers have probably done more to accelerate AGI research than the accelerationists whom they characterize as recklessly dangerous.
How has this happened? And why? One reason is that many doomers believe that AGI will be built by someone, somewhere, eventually, so it might as well be them who build the first AGI. After all, many Rationalists and EAs pride themselves on having exceptionally high IQs and claim to be more rational than ordinary people, or "normies." Hence, they reason, they are the best group to build AGI while ensuring that it is maximally safe and beneficial. The unfortunate consequence is that these Rationalists and EAs have inadvertently initiated a race to build AGI that, at this point, has gained so much momentum that it appears impossible to stop.
Even worse, some of the doomers most responsible for the AGI race are now using this situation to gain even more power by arguing that policymakers should look to them for solutions. Tallinn, for example, recently joined the United Nations Artificial Intelligence Advisory Body, which focuses on the risks and opportunities of advanced AI, while Yudkowsky has defended an international policy that leaves the door open to military strikes that might trigger a thermonuclear war. These people helped create a huge, complicated mess, then turned around, pointed at that mess, and shouted: "Oh my! We're in such a dire situation! If only governments and politicians would listen to us, we just might be able to dodge the bullet of annihilation."
This looks like a farce. It's like someone drilling a hole in a boat and then declaring: "The only way to avoid drowning is to make me captain."
The lesson is that governments and politicians should not be listening to the very people, or the Rationalist and EA movements to which they belong, that are disproportionately responsible for this mess in the first place. One could even argue, plausibly in my view, that if not for the doomers, there probably wouldn't be an AGI race right now at all.
Though the race to build AGI does pose many dangers, the greatest underlying danger is the Rationalist and EA movements that spawned this unfortunate situation over the past decade and a half. If we really want to bring the madness of the AGI race to a stop, it's time to let someone else have the mic.