Category Archives: Artificial General Intelligence

A glimpse of AI technologies at the WIC in N China’s Tianjin – CGTN

The seventh World Intelligence Congress (WIC), a major artificial intelligence (AI) event in China, kicked off on Thursday in north China's Tianjin Municipality, focusing on topics such as intelligent connected vehicles, artificial general intelligence and brain-computer interfaces.

China's AI industry is making steady progress in technological innovation, industrial ecology and integrated application, with the scale of its core sectors reaching 508 billion yuan ($72.5 billion) in 2022, an increase of 18 percent year on year, according to the China Academy of Information and Communications Technology.

A new generation of intelligent electric vehicle technology at the seventh World Intelligence Congress in north China's Tianjin Municipality, May 18, 2023. /CFP

The WIC exhibition featured technologies and products related to generative AI and 5G plus industrial internet.

Generative AI on display included SparkDesk, developed by iFLYTEK, a homegrown intelligent speech and AI company, as well as a generative language model developed by the National Supercomputing Center of Tianjin.

Also on show were examples of how the integration of 5G plus industrial internet has deepened its support for multiple scenarios in the manufacturing industry, including inspection and transport.

A visitor plays chess with a robot at the seventh WIC in Tianjin, May 18, 2023. /CFP

Nearly 500 enterprises participated in the exhibition, including 350 intelligent technology enterprises and 51 research institutions and universities, according to the WIC.

The exhibition presented music, literature, art and other fields through AI, 3D, metaverse and other technologies, breaking the restrictions of time and space to give participants an immersive experience.

An intelligent driving challenge and four other competitions were also held during the congress.

(Cover image via CFP, designed by Xing Cheng)

See the original post:

A glimpse of AI technologies at the WIC in N China's Tianjin - CGTN

UK schools bewildered by AI and do not trust tech firms, headteachers say – The Guardian

School leaders announce launch of body to protect students from the risks of artificial intelligence

Sat 20 May 2023 05.50 EDT

Schools are bewildered by the fast pace of development in artificial intelligence and do not trust tech firms to protect the interests of students and educational establishments, headteachers have written.

A group of UK school leaders have announced the launch of a body to advise and protect schools from the risks of AI, with their fears not limited to the capacity of chatbots such as ChatGPT to aid cheating. There are also concerns about the impact on children's mental and physical health as well as on the teaching profession itself, according to the Times.

The headteachers' fears were outlined in a letter to the Times in which they warned of "the very real and present hazards and dangers" presented by AI, which has gripped the public imagination in recent months through breakthroughs in generative AI, where tools can produce plausible text, images and even voice impersonations on command.

The group of school leaders is led by Sir Anthony Seldon, the head of Epsom College, a fee-paying school, while the AI body is supported by the heads of dozens of private and state schools.

The letter to the Times says: "Schools are bewildered by the very fast rate of change in AI and seek secure guidance on the best way forward, but whose advice can we trust? We have no confidence that the large digital companies will be capable of regulating themselves in the interests of students, staff and schools, and in the past the government has not shown itself capable or willing to do so."

Signatories to the letter include Seldon, Chris Goodall, the deputy head of Epsom & Ewell High School, and Geoff Barton, general secretary of the Association of School and College Leaders.

It adds that the group is pleased the government is now "grasping the nettle" on the issue. This week Rishi Sunak said guardrails would have to be put around AI as Downing Street indicated support for a global framework for regulating the technology. However, the letter adds that educational leaders are forming their own advisory body because AI is moving too quickly for politicians to cope.

"AI is moving far too quickly for the government or parliament alone to provide the real-time advice schools need. We are thus announcing today our own cross-sector body composed of leading teachers in our schools, guided by a panel of independent digital and AI experts."

Supporters include James Dahl, the head of Wellington College in Berkshire, and Alex Russell, chief executive of the Bourne Education Trust, which runs about two dozen state schools.

The Times reported that the group would create a website led by the heads of science or digital at 15 state and private schools, offering guidance on developments in AI and what technology to avoid or embrace.

Seldon told the Times: "Learning is at its best, human beings are at their best, when they are challenged and overcome those challenges. AI will make life easy and strip away learning and teaching unless we get ahead of it."

The Department for Education said: "The education secretary has been clear about the government's appetite to pursue the opportunities and manage the risks that exist in this space, and we have already published information to help schools do this. We continue to work with experts, including in education, to share and identify best practice."

Read the original post:

UK schools bewildered by AI and do not trust tech firms, headteachers say - The Guardian

The Potential of AI in Tax Practice Relies on Understanding its … – Thomson Reuters Tax & Accounting

Curiosity, conversation, and investment into artificial intelligence are quickly gaining traction in the tax community, but proper due diligence requires an acknowledgement of what such tools are and aren't yet capable of, as well as an assessment of security and performance risks, according to industry experts.

With the tax world exploring how AI can improve practice and administration, firms, the IRS, and taxpayers alike are in the early stages of considering its potential for streamlining tasks, saving time, and improving access to information. Regardless of one's individual optimism or skepticism about the possible future of AI in the tax space, panelists at an American Bar Association conference in Washington, D.C., this past week suggested that practitioners arm themselves with the fundamentals and the key technological differences grouped under the broad-stroke term of AI.

An increasingly popular and publicly available AI tool is ChatGPT. Users can interact with ChatGPT by issuing whatever prompts come to mind, such as telling it to write a script for a screenplay or simply asking a question. As opposed to algorithmic machine learning tools specifically designed with a narrow focus, such as those in development at the IRS to crack down on abusive transactions like conservation easements, ChatGPT is what is called a large language model (LLM).

LLMs, according to PricewaterhouseCoopers principal Chris Kontaridis, are text-based and use statistical methodologies to create a relationship between your question and patterns of data and text. In other words, the more data an LLM like ChatGPT, which is currently learning from users across the entire internet, absorbs, the better it can attempt to predict and algorithmically interact with a person. Importantly, however, ChatGPT is not a knowledge model, Kontaridis said. Calling ChatGPT a knowledge model would insinuate that it is going to give you the correct answer every time you put in a question. Because it is not artificial general intelligence, something akin to a Hollywood portrayal of sentient machines overtaking humanity, users should recognize that ChatGPT is not self-reasoning, he said.

"We're not even close to having real AGI out there," Kontaridis added.

Professor Abdi Aidid of the University of Toronto Faculty of Law and AI research-focused Blue J Legal said at the ABA conference that "the really important thing when you're using a tool like [ChatGPT] is recognizing its limitations." He explained that it is not providing source material for legal or tax advice. "What it's doing, and this is very important, is simply making a probabilistic determination about the next likely word." For instance, Aidid demonstrated that if you ask ChatGPT what your name is, it will give you an answer whether it knows it or not. You can rephrase the same question and ask it again, and it might give you a slightly different answer with different words because it's responding to a different prompt.
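To make that "next likely word" point concrete, here is a minimal, purely illustrative Python sketch, not a description of how ChatGPT is actually built: it counts which words follow which in a tiny invented corpus and samples the next word from those counts, which is why the same prompt can produce different continuations on different runs.

```python
import random
from collections import defaultdict, Counter

# A tiny corpus standing in for training data; a real LLM learns neural network
# weights from vastly more text rather than keeping a simple lookup table.
corpus = (
    "the taxpayer filed the return on time "
    "the taxpayer paid the tax on time "
    "the preparer filed the return late"
).split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# The choice is probabilistic, not a lookup of a known fact, so repeated
# calls with the same prompt can disagree with one another.
print([predict_next("the") for _ in range(5)])
```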

At a separate panel, Ken Crutchfield, vice president and general manager of Legal Markets, said he asked ChatGPT who invented the Trapper Keeper binder, knowing in fact that his father, Bryant Crutchfield, is credited with the invention. ChatGPT spit out a random name. In telling the story, Crutchfield said: "I went through, and I continued to ask questions, and I eventually convinced ChatGPT that it was wrong, and it admitted it and it said, 'yes, Bryant Crutchfield did invent the Trapper Keeper.'" Crutchfield said that when someone else tried asking ChatGPT who invented the Trapper Keeper, it gave yet another name. He tried it again himself more recently, and the answer included his father's name but listed his own alma mater. "So it's getting better and kind of learns through these back-and-forths with people that are interacting."

Aidid explained that these instances are referred to as hallucinations: when an AI does not know the answer, it essentially makes something up on the spot based on the data and patterns it has absorbed up to that point. If a user were to ask ChatGPT about the Inflation Reduction Act, it would hallucinate an answer because its knowledge currently extends only to September 2021. Even so, generative AI like ChatGPT is more sophisticated than base-level tools that work off decision trees, such as the IRS Tax Assistant Tool that taxpayers interact with, Aidid said. The Tax Assistant Tool, he said, is not generative AI.

Mindy Herzfeld, professor at the University of Florida Levin College of Law, responded that this is especially problematic because the Tax Assistant Tool "is implying that it has all that information and it's generating responses based on the world of information, but it's really not doing that, so it's misleading."
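For contrast, a decision-tree assistant of the kind the panelists distinguished from generative AI follows branches that were authored in advance and can only return pre-written answers. The sketch below is a hypothetical illustration of that style of tool; the questions, thresholds, and answers are invented for the example and are not the IRS tool's actual logic.

```python
# Hypothetical rule-based assistant: every path through the tree is written by
# a human in advance, so nothing is generated and nothing can be hallucinated.
DECISION_TREE = {
    "question": "Did you have self-employment income this year?",
    "yes": {
        "question": "Was your net self-employment income $400 or more?",
        "yes": {"answer": "You likely need to file a return (illustrative answer only)."},
        "no": {"answer": "Self-employment alone may not require filing (illustrative answer only)."},
    },
    "no": {"answer": "Check the standard filing thresholds for your status (illustrative answer only)."},
}

def run(node: dict) -> None:
    """Walk the tree, asking yes/no questions until an answer leaf is reached."""
    while "answer" not in node:
        reply = input(node["question"] + " (yes/no): ").strip().lower()
        node = node.get(reply, node)  # unrecognized input keeps us at the same question
    print(node["answer"])

if __name__ == "__main__":
    run(DECISION_TREE)
```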

The greatest potential for the application of generative AI lies with so-called deep learning tools, which are supposedly more advanced and complex iterations of machine learning platforms. Aidid said deep learning can work with unstructured data. Such technology can not only synthesize and review information, but also review new information for us. "It's starting to take all that and generate things, not simple predictions, but actually generate things that are in the style and mode of human communication, and that's where we're seeing significant investment today."

Herzfeld said that machine learning is already being used in tax on a daily basis, but with deep learning it is harder to see where it fits in tax law. These more advanced tools will likely be developed in-house at firms, probably in partnership with AI researchers.

PwC is working with Blue J in pursuit of tax-oriented deep learning generative AI to help reduce much of the clerical work that is all too time-consuming in tax practice, according to Kontaridis. Freeing up staff to focus their efforts on other things while AI sifts through mountains of data is a boon, he said.

However, as the saying goes, with great power comes great responsibility. Here, that means guarding sensitive information and ensuring accuracy. Kontaridis said that "it's really important to make sure before you deploy something like this to your staff or use it yourself that you're doing it in a safe environment where you are protecting the confidentiality of your personal IP and privilege that you have with your clients."

Herzfeld echoed that practitioners should bear in mind how easily misinformation could be perpetuated through an overreliance on, or lack of oversight of, AI, which she called a "very broadly societal risk." Kontaridis assured the audience that he is not worried about generative AI "replacing our role as the tax professional," adding that "this is a tool that will help us do our work better."

Referring to the myth that CPA bots will take over the industry, he said: "What I'm worried about is the impact it has on our profession at the university level, discouraging bright young minds from pursuing careers in tax and accounting consulting."

Excerpt from:

The Potential of AI in Tax Practice Relies on Understanding its ... - Thomson Reuters Tax & Accounting

Navigating artificial intelligence: Red flags to watch out for – ComputerWeekly.com

Lou Steinberg, founder and managing partner of CTM Insights, a cyber security research lab and incubator, doesn't watch movies about artificial intelligence (AI) because he believes what he sees in real life is enough.

Steinberg has also worn other hats, including a six-year tenure as chief technology officer of TD Ameritrade, where he was responsible for technology innovation, platform architecture, engineering, operations, risk management and cyber security.

He has worked with US government officials on cyber issues as well. Recently, after a White House meeting with tech leaders about AI, Steinberg spoke about the benefits and downsides of having AI provide advice and complete tasks.

Businesses with agendas, for example, might try to skew training data to get people to buy their cars, stay in their hotels, or eat at their restaurants. Hackers may also change training data to advise people to buy stocks that are being sold at inflated prices. They may even teach AI to write software with built-in security issues, he contended.

In an interview with Computer Weekly, Steinberg drilled down into these red flags and what organisations can do to mitigate the risks of the growing use of AI.

What would you say are the top three things we should really be worried about right now when it comes to AI?

Steinberg: My short- to medium-term concerns with AI are in three main areas. First, AI- and machine learning-powered chatbots and decision support tools will return inaccurate results that are misconstrued as accurate, because they were trained on untrustworthy data and lack traceability.

Second, the lack of traceability means we don't know why AI gives the answers it gives, though Google is taking an interesting approach by providing links to supporting documentation that a user can assess for credibility.

Third, attempts to slow the progress of AI, while well-meaning, will slow the pace of innovation in Western nations while countries like China continue to advance. While there have been examples of internationally respected bans on research, such as human cloning, AI advancement is not likely to be slowed globally.

How soon can bad actors jail-break AI? And what would that mean for society? Can AI developers pre-empt such dangers?

People have already gotten past guardrails built into tools like ChatGPT through prompt engineering. For example, a chatbot might refuse to generate code that is obviously malware but will happily create, one function at a time, code that can be combined into malware. Jail-breaking of AI is already happening today, and it will continue as both the guardrails and the attacks grow in sophistication.

The ability to attack poorly protected training data and bias the outcome is an even larger concern. Combined with the lack of traceability, we have a system without feedback loops to self-correct.

When will we get past the black box problem of AI?

Great question. As I said, Google appears to be trying to reinforce answers with pointers to supporting data. That helps, though I would rather see a chain of steps that led to a decision. Transparency and traceability are key.

Who can exploit AI the most? Governments? Big tech? Hackers?

All of the above can and will exploit AI to analyse data, support decision-making and synthesise new outputs. Exploiting AI comes down to whether the use cases will be good or bad for society.

If the use is by a tech company, it will be to gain commercial advantage, ranging from selling you products to detecting fraud to personalising medicine and medical diagnoses. Businesses will also tap cost savings by replacing humans with AI, whether to write movie scripts, drive a delivery truck, develop software, or let passengers board an airplane using facial recognition as a boarding pass.

Many hackers are also profit-seeking, and will try to steal money by guessing bank account passwords or replicating a person's voice and likeness to scam others. Just look at recent examples of realistic, synthesised voices being used to trick people into believing a loved one has been kidnapped.

While autonomous killer robots from science fiction are certainly a concern with some nation states and terrorist groups, governments and some companies sit on huge amounts of data that would benefit from improved pattern detection. Expect governments to analyse and interpret data to better manage everything from public health to air traffic congestion. AI will also allow personalised decision-making at scale, where agencies like the US Internal Revenue Service will look for fraud while authoritarian governments will increase their ability to do surveillance.

What advice would you give to AI developers? As an incubator, does CTM Insights have any special lens here?

There are so many dimensions of protection needed. Training data must be curated and protected from malicious tampering. The ability to synthetically recreate a real person's voice and likeness will cause fraud and reputational damage to skyrocket. We need to solve this problem before we can no longer trust what we see or hear, like fake phone calls, fake videos of people appearing to commit crimes and fake investor conferences.

Similarly, the ability to realistically edit images and evade detection will create cases where even real images, like your medical scans, are untrustworthy. CTM has technology to isolate untrustworthy portions of data and images, without requiring everything to be thrown out. We are working on a new way to detect synthetic deepfakes.

Is synthetic data a good thing or a bad thing if we want to create safer AI?

Synthetic data is mostly a good thing, and we can use it to help create curated training data. The challenge is that attackers can do the same thing.

Will singularity and artificial general intelligence (AGI) be a utopia or a dystopia?

Im an optimist. While most major technology advances can be used to do harm, AI has the ability to eliminate a huge amount of work done by people but still create the value of that work. If the benefits are shared across society, and not concentrated, society will gain broadly.

For example, one of the most common jobs in the US is driving a delivery truck. If autonomous vehicles replace those jobs, society still gets the benefit of having things delivered. If all that does is raise profit margins at delivery companies, the impact on laid-off drivers will be deep. But if some of the benefit is used to help those ex-drivers do something else, like construction, then society benefits by getting new buildings.

Data poisoning, adversarial AI, the co-evolution of good guys and bad guys: how serious have these issues become?

The co-evolution of AI and adversarial AI has already started. There is debate as to the level of data poisoning out there today, as many attacks aren't made public. I'd say they are all in their infancy. I'm worried about what happens when they grow up.

If you were to create an algorithm thats water-tight on security, what broad areas would you be careful about?

The system would have traceability built in from the start. The inputs would be carefully curated and protected. The outputs would be signed and have authorised use built in. Today, we focus way too much on identity and authentication of people and not enough on whether those people authorised things.
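As a purely illustrative sketch of what "signed outputs" with basic traceability could look like in practice (the key handling, field names, and provenance metadata below are invented assumptions, not a description of any CTM Insights product), a system might attach a keyed signature over each output plus its provenance so a consumer can verify both before acting on it:

```python
import hmac
import hashlib
import json

# Illustrative only: in a real deployment the key would come from a
# key-management service, not a hard-coded constant.
SECRET_KEY = b"replace-with-a-managed-secret"

def sign_output(output: str, provenance: dict) -> dict:
    """Attach an HMAC over the output plus its provenance metadata."""
    payload = json.dumps({"output": output, "provenance": provenance}, sort_keys=True).encode()
    return {
        "output": output,
        "provenance": provenance,
        "signature": hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest(),
    }

def verify_output(record: dict) -> bool:
    """Recompute the signature and compare; any tampering breaks the match."""
    payload = json.dumps(
        {"output": record["output"], "provenance": record["provenance"]}, sort_keys=True
    ).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record.get("signature", ""), expected)

signed = sign_output(
    "Loan application approved.",
    {"model": "example-model-v1", "training_set": "curated-2023-05", "request_id": "42"},
)
print(verify_output(signed))            # True: untouched record verifies
signed["output"] = "Loan application denied."
print(verify_output(signed))            # False: the output was altered after signing
```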

Have you seen any evidence of AI-driven or assisted attacks?

Yes. Deepfake videos of Elon Musk and others have been used for financial scams, and disinformation campaigns have circulated fake video of Ukraine's President Zelensky telling his troops to surrender. Synthesised voices of real people have been used in fake kidnapping scams, and fake CEO voices on phone calls have asked employees to transfer money to a fraudster's account. AI is also being used by attackers to exploit vulnerabilities to breach networks and systems.

What's your favourite Black Mirror episode or movie about AI that feels like a premonition?

I try not to watch stuff that might scare me; real life is enough!

Read more from the original source:

Navigating artificial intelligence: Red flags to watch out for - ComputerWeekly.com

Zoom Invests in and Partners With Anthropic to Improve Its AI … – PYMNTS.com

Zoom has become the latest tech company riding this year's wave of artificial intelligence (AI) integrations.

The video conferencing platform announced in a Tuesday (May 16) press release that it has teamed with and is investing in AI firm Anthropic.

The collaboration will integrate Anthropic's AI assistant, Claude, with Zoom's platform, beginning with Zoom Contact Center.

"With Claude guiding agents toward trustworthy resolutions and powering self-service for end-users, companies will be able to take customer relationships to another level," said Smita Hashim, chief product officer for Zoom, in the release.

Working with Anthropic, Hashim said, furthers the company's goal of a federated approach to AI while also advancing leading-edge companies like Anthropic and helping to drive innovation in the Zoom ecosystem and beyond.

As the next step in evolving the Zoom Contact Center portfolio (Zoom Virtual Agent, Zoom Contact Center, Zoom Workforce Management), Zoom plans to incorporate Anthropic AI throughout its suite, improving end-user outcomes and enabling superior agent experiences, the news release said.

Zoom said in the release it eventually plans to incorporate Anthropic AI throughout its suite, including products like Team Chat, Meetings, Phone, Whiteboard and Zoom IQ.

Last year, Zoom debuted Zoom Virtual Agent, an intelligent conversational AI and chatbot tool that employs natural language processing and machine learning to understand and solve customer issues.

The company did not reveal the amount of its investment in Anthropic, which is backed by Google to the tune of $300 million.

Zoom's announcement came amid a flurry of AI-related news Tuesday, with fraud prevention firm ComplyAdvantage launching an AI tool and the New York Times digging into Microsoft's claims that it had made a breakthrough in the realm of artificial general intelligence.

Perhaps the biggest news is OpenAI CEO Sam Altman's testimony before a U.S. Senate subcommittee, in which he warned: "I think if this technology goes wrong, it can go quite wrong."

Altman's testimony happened as regulators and governments around the world step up their examination of AI in a race to mitigate fears about its transformative powers, which have spread in step with the future-fit technology's ongoing integration into the broader business landscape.

Go here to read the rest:

Zoom Invests in and Partners With Anthropic to Improve Its AI ... - PYMNTS.com

People warned AI is becoming like a God and a ‘catastrophe’ is … – UNILAD

An artificial intelligence investor has warned that humanity may need to hit the brakes on AI development, claiming it's becoming 'God-like' and that it could cause 'catastrophe' for us in the not-so-distant future.

Ian Hogarth - who has invested in over 50 AI companies - made an ominous statement on how the constant pursuit of increasingly-smart machines could spell disaster in an essay for the Financial Times.

The AI investor and author claims that researchers are foggy on what's to come and have no real plan for a technology with that level of knowledge.

"They are running towards a finish line without an understanding of what lies on the other side," he warned.

Hogarth shared what he'd recently been told by a machine-learning researcher: that 'from now onwards' we are on the verge of artificial general intelligence (AGI) coming to the fore.

AGI has been defined as an autonomous system that can learn to accomplish any intellectual task that human beings can perform, and surpass human capabilities.

Hogarth, co-founder of Plural Platform, said that not everyone agrees AGI is imminent; rather, 'estimates range from a decade to half a century or more' for it to arrive.

However, he noted the tension between companies that are frantically trying to advance AI's capabilities and machine learning experts who fear the end point.

The AI investor also explained that he feared for his four-year-old son and what these massive advances in AI technology might mean for him.

He said: "I gradually shifted from shock to anger.

"It felt deeply wrong that consequential decisions potentially affecting every life on Earth could be made by a small group of private companies without democratic oversight."

When considering whether the people in the AGI race were planning to 'slow down' to 'let the rest of the world have a say', Hogarth admitted that it has morphed into a 'them' versus 'us' situation.

Having been a prolific investor in AI startups, he also confessed to feeling 'part of this community'.

Hogarth's descriptions of the potential power of AGI were terrifying as he declared: "A three-letter acronym doesn't capture the enormity of what AGI would represent, so I will refer to it as what it is: God-like AI."

Hogarth described it as 'a superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it'.

But even with this knowledge, and despite the fact that it's still on the horizon, he warned that we have no idea of the challenges we'll face, and that the 'nature of the technology means it is exceptionally difficult to predict exactly when we will get there'.

"God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race," the investor said.

Despite a career spent investing in and supporting the advancement of AI, Hogarth explained that what made him pause for thought was the fact that 'the contest between a few companies to create God-like AI has rapidly accelerated'.

He continued: "They do not yet know how to pursue their aim safely and have no oversight."

Hogarth still plans to invest in startups that pursue AI responsibly, but explained that the race shows no signs of slowing down.

"Unfortunately, I think the race will continue," he said.

"It will likely take a major misuse event - a catastrophe - to wake up the public and governments."

Continue reading here:

People warned AI is becoming like a God and a 'catastrophe' is ... - UNILAD

‘Godfather’ of AI is now having second thoughts – The B.C. Catholic

Until a few weeks ago, British-born Canadian university professor Geoffrey Hinton was little known outside academic circles. His profile became somewhat more prominent in 2019 when he was a co-winner of the A.M. Turing Award, often described as the Nobel Prize of computing.

However, it is events of the past month or so that have made Hinton a bit of a household name, after he stepped down from an influential role at Google.

Hinton's life's work, particularly his computing research at the University of Toronto, has been deemed groundbreaking and revolutionary in the field of artificial intelligence (AI). Anyone reading this column will surely have encountered numerous pieces on AI in recent months, be it on TV, on radio, or in print, physical and digital. AI applications such as the large language model ChatGPT have completely altered the digital landscape in ways unimaginable even a year ago.

While at the U of T, Hinton and his graduate students made major advances in deep neural networks, speech recognition, the classification of objects, and deep learning. Some of this work morphed into a technology startup that captured the attention of Google, leading to the acquisition of the business for around $44 million a decade ago.

Eventually, Hinton became a Google vice-president, in charge of running the California company's Toronto AI lab. Leaving that position recently, at the age of 75, led to speculation, particularly in a New York Times interview, that he did so in order to criticize or attack his former employer.

Not so, said Hinton in a tweet. Besides his age being a factor, he suggested he wanted to be free to speak about the dangers of AI, irrespective of Google's involvement in the burgeoning field. Indeed, Hinton noted in his tweet that in his view Google had acted very responsibly.

Underscoring his view of Google's public AI work may be the company's slow response to the adoption of Microsoft-backed ChatGPT in its various incarnations. Google's initial public AI product, Bard, appeared months after ChatGPT began its meteoric adoption in early December, and it did not gain much traction at the outset.

In recent weeks we've seen news stories of large employers such as IBM serving notice that about 7,000 positions would be replaced by AI bots such as specialized versions of ChatGPT. We've also seen stories about individuals turning over significant aspects of their day-to-day lives to such bots. One person gained particular attention for giving all his financial, email, and other records to a specialized AI bot with a view to having it find $10,000 in savings and refunds through automated actions.

Perhaps it is these sorts of things that are giving Hinton pause as he looks back at his life's work. In the NYT interview, he uses expressions such as "It is hard to see how you can prevent the bad actors from using it for bad things" and "Most people will not be able to know what is true anymore", the latter in reaction to AI-created photos, videos, and audio depicting objects or events that didn't occur.

"Right now, they are not more intelligent than us, as far as I can tell. But they soon may be," said Hinton, speaking to the BBC about AI machines. He went on to add: "I've come to the conclusion that the kind of intelligence we are developing (via AI) is very different from the intelligence we have."

Hinton went on to note how biological systems (i.e. people) are different from digital systems. The latter, he notes, have many copies of the same set of weights and the same model of the world, and while these copies can learn separately, they can share new knowledge instantly.
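Hinton's contrast can be made concrete with a toy sketch, offered purely as an illustration (the two-parameter "model" and the data below are invented): several identical copies train separately on different slices of data, then pool what they learned by averaging their weights, something biological brains cannot do.

```python
# Two copies start from the same weights and see different slices of data.
copies = [{"w": 0.0, "b": 0.0}, {"w": 0.0, "b": 0.0}]
datasets = [
    [(1.0, 3.0), (2.0, 5.0)],   # slice for copy 1: roughly y = 2x + 1
    [(3.0, 7.0), (4.0, 9.0)],   # slice for copy 2: same underlying rule
]

def train(model: dict, data: list, lr: float = 0.01, steps: int = 200) -> None:
    """Plain gradient descent on squared error for y = w*x + b."""
    for _ in range(steps):
        for x, y in data:
            error = (model["w"] * x + model["b"]) - y
            model["w"] -= lr * error * x
            model["b"] -= lr * error

# Each copy learns separately on its own slice...
for model, data in zip(copies, datasets):
    train(model, data)

# ...then their experience is shared "instantly" by averaging the weights,
# which is (in spirit) how identical digital replicas can pool knowledge.
merged = {
    "w": sum(m["w"] for m in copies) / len(copies),
    "b": sum(m["b"] for m in copies) / len(copies),
}
print(merged)
```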

In a somewhat enigmatic tweet on March 14, Hinton wrote: "Caterpillars extract nutrients which are then converted into butterflies. People have extracted billions of nuggets of understanding and GPT-4 is humanity's butterfly."

Hinton spent the first week of May correcting various lines from interviews he gave to prominent news outlets. He took particular issue with a CBC online headline: "Canada's AI pioneer Geoffrey Hinton says AI could wipe out humans. In the meantime, there's money to be made." In a tweet he said: "The second sentence was said by a journalist, not me, but you wouldn't know that."

Whether the race to a God-like form of artificial intelligence fully materializes, or not, AI is already being placed alongside climate change and nuclear war as a trio of existential threats to human life. Climate change is being broadly tackled by most nations, and nuclear weapons use has been effectively stifled by the notion of mutually-assured destruction. Perhaps artificial general intelligence needs a similar global focus for regulation and management.

Follow me on Facebook (facebook.com/PeterVogelCA), or on Twitter (@PeterVogel)

More:

'Godfather' of AI is now having second thoughts - The B.C. Catholic