What is AGI and how is it different from AI? – ReadWrite

As artificial intelligence continues to develop at a rapid pace, it's easy to wonder where this new age is headed.

The likes of ChatGPT, Midjourney and Sora are transforming the way we work through chatbots, text-to-image and text-to-video generators, while robots and self-driving cars are helping us perform day-to-day tasks. The latter isn't as mainstream as the former, but it's only a matter of time.

But where's the limit? Are we headed towards a dystopian world run by computers and robots? Artificial general intelligence (AGI) is essentially the next step, but as things stand, we're a little way off from that becoming a reality.

AGI is considered to be strong AI, whereas narrow AI describes what we use today: generative chatbots, image generators and coffee-making robots.

Strong AI refers to software with cognitive abilities equal to or better than a human being's, meaning it can solve problems, achieve goals, think and learn on its own, without any human input or assistance. Narrow AI can solve one problem or complete one task at a time, without any sentience or consciousness.

This level of AI is only seen in the movies at the moment, but we're likely headed towards this level of AI-driven technology in the future. When that might be remains open to debate: some experts claim it's centuries away, while others believe it could be only years. Ray Kurzweil's book The Singularity is Near predicts it will arrive between 2015 and 2045, a window the AGI research community considered plausible in 2007, although it's a pretty broad timeline.

Given how quickly narrow AI is developing, it's easy to imagine a form of AGI in society within the next 20 years.

Despite not yet existing, AGI could theoretically perform in ways indistinguishable from humans, and would likely exceed human capacities thanks to fast access to huge data sets. While it might seem like you're engaging with a human when using something like ChatGPT, an AGI system would theoretically be able to engage with humans without any human intervention behind the scenes.

An AGI system's capabilities would include the likes of common sense, background knowledge and abstract thinking, as well as practical capabilities, such as creativity, fine motor skills, natural language understanding (NLU), navigation and sensory perception.

A combination of all of those abilities would give AGI systems high-level capabilities, such as being able to understand symbol systems, create fixed structures for all tasks, use different kinds of knowledge, engage in metacognition, handle several types of learning algorithms and understand belief systems.

That means AGI systems will be ultra-intelligent and may also possess additional traits, such as imagination and autonomy, while physical traits like the ability to sense, detect and act could also be present.

We know that narrow AI systems are widely used in public today and are fast becoming part of everyday life, but they still need humans to function at every level. They rely on machine learning and natural language processing, and require human-delivered prompts in order to execute a task. A narrow AI system executes the task based on what it has previously learned, and can essentially only be as intelligent as the information humans give it.

However, the results we see from narrow AI systems are not beyond what is possible from the human brain. It is simply there to assist us, not replace or be more intelligent than humans.

Theoretically, AGI should be able to undertake any task and display a high level of intelligence without human intervention. It would be able to perform better than both humans and narrow AI at almost every level.

Stephen Hawking warned of the dangers of AI in 2014, when he told the BBC: "The development of full artificial intelligence could spell the end of the human race."

"It would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded."

Kurzweil followed up his prediction in The Singularity is Near by saying in 2017 that computers will achieve human levels of intelligence by 2029. He predicted that AI itself will get better exponentially, leading to it being able to operate at levels beyond human comprehension and control.

He then went on to say: "I have set the date 2045 for the Singularity, which is when we will multiply our effective intelligence a billionfold by merging with the intelligence we have created."

These discussions and predictions have, of course, sparked debates surrounding the responsible use of AGI. The AI we know today is broadly viewed as responsible, and there are calls to regulate many of the AI companies to ensure these systems do not get out of hand. We've already seen how controversial and unethical the use of AI can be when it's in the wrong hands. It's unsurprising, then, that the same debate is happening around AGI.

In reality, society must approach the development of AGI with extreme caution. The ethical problems surrounding AI now, such as the difficulty of controlling biases within its knowledge base, point to similar issues with AGI, but on a more harmful level.

If an AGI system can essentially think for itself and no longer needs to be influenced by humans, there is a danger that Stephen Hawking's vision might become a reality.
