The Open AI Drama: What Is AGI And Why Should You Care? – Forbes

[Image: Evolution of humans and intelligence. Credit: Pixabay]

Artificial general intelligence is something everyone should know and think about. This was true even before the recent OpenAI drama brought the issue into the limelight, with speculation that the leadership shakeup may have been due to disagreements over safety concerns about a breakthrough in AGI. Whether or not that is true (and we may never know), AGI remains a serious matter. All of which raises the questions: what exactly is AGI, what does it mean for all of us, and what, if anything, can the average person do about it?

As expected for such a complex and impactful topic, definitions vary:

Given the recent OpenAI news, it is particularly opportune that OpenAI's chief scientist, Ilya Sutskever, presented his perspective on AGI just a few weeks ago at TED AI. You can find his full presentation here, but here are some takeaways:

As we can see, AGI spans many dimensions. The ability to perform generalized tasks implies that AGI will affect the job market far more than the AIs that preceded it. For example, an AI that can read an X-ray and detect disease can assist doctors in their work. However, an AGI that can read the X-ray, understand the patient's personal history, make a recommendation and explain that recommendation to the patient with a kind bedside manner could conceivably replace the doctor entirely. The potential benefits and risks to world economies and jobs are massive. Add to those the ability of AGIs to learn and produce new AGIs, and the risk becomes existential. It is not clear how humanity would control such an AGI or what decisions it would make for itself.

Hard to say. Experts differ on whether AGI is unlikely ever to happen or merely a few years away. For example, Geoff Hinton, winner of the Turing Award (the highest prize in computer science), believes AGI is less than 20 years away but that it will not present an existential threat. Meanwhile, his fellow Turing Award winner Yoshua Bengio states that we do not know how many decades it will take to reach AGI. Much of this discrepancy also stems from the lack of a broadly agreed-upon definition, as the examples above show.

Yes, I believe so. If nothing else, this week's drama at OpenAI shows how little we know about the technology development that is so fundamental to humanity's future, and how unstructured our global conversation on the topic is. Fundamental questions exist, such as:

Who will decide if AGI has been reached?

Would we even know that it has happened or is imminent?

What measures will be in place to manage it?

How will countries around the world collaborate or fight over it?

And so on.

For those not following The Terminator franchise, Skynet is a fictional, human-created machine network that becomes self-aware and decides to destroy humanity. I don't think this is cause for major concern. While certain parts of the AGI definition (particularly the idea of AGIs creating future AGIs) are heading in this direction, and while movies like The Terminator show a certain view of the future, history has shown us that harm caused by technology usually comes from intentional or accidental human misuse of that technology. AGI may eventually reach some form of consciousness that is independent of humans, but it seems far more likely that human-directed, AI-powered weapons, misinformation, job displacement, environmental disruption and the like will threaten our well-being before that.

I believe the only thing each of us can do is to be informed, be AI-literate and exercise our rights, opinions and best judgement. The technology is transformative. What is not clear is who will decide how it will transform.

Along these lines, less than a month ago, U.S. President Joe Biden issued an executive order on AI, addressing a wide range of near-term AI concerns, from individual privacy to responsible AI development to job displacement and necessary upskilling. While not targeted directly at AGI, such orders and similar legislation can direct responsible AI development in the short term, prior to AGI, and hopefully continue through to AGI.

It is also worth noting that AGI is unlikely to be a binary event, one day absent and the next day here. ChatGPT appeared to many people as if it came from nowhere, but it did not. It was preceded in 2019 and 2020 by GPT-2 and GPT-3. Both were very powerful but harder to use and far less well known. While ChatGPT (GPT-3.5 and beyond) represented major advances, the trend was already in place.

Similarly, we will see AGI coming. For example, a Microsoft research team recently reported that GPT-4 has shown signs of human reasoning, a step toward AGI. As expected, these reports are often disputed, with others claiming that such observations are more indicative of imperfect testing methodologies than of actual AGI.

The real question is: what will we do about AGI before it arrives?

That decision should be made by everyone. The OpenAI drama continues, with new developments daily. However, no matter what happens with OpenAI, the AGI debate and its issues are here to stay, and we will need to deal with them, ideally sooner rather than later.

I am an entrepreneur and technologist in the AI space and the CEO of AIClub and AIClubPro, pioneering AI literacy for K-12 students and individuals worldwide (https://corp.aiclub.world and https://aiclubpro.world). I am also the author of Fundamentals of Artificial Intelligence, the first AI textbook for middle school and high school students.

Previously, I co-founded ParallelM and defined MLOps (production machine learning and deep learning). MLOps is the practice of full-lifecycle management of machine learning and AI in production. My background is in software development for distributed systems, focusing on machine learning, analytics, storage, I/O, file systems, and persistent memory. Prior to ParallelM, I was Lead Architect/Fellow at Fusion-io (acquired by SanDisk), developing new technologies and software stacks for persistent memory, the Non-Volatile Memory File System (NVMFS) and application acceleration. Before Fusion-io, I was the technology lead for server flash at Intel, heading up server-platform non-volatile memory technology development and partnerships, as well as foundational work on NVM Express.

Before that, I was Chief Technology Officer at Gear6, where we built clustered computing caches for high-performance I/O environments. I received my PhD from UC Berkeley, doing research on clusters and distributed storage. I hold 63 patents in distributed systems, networking, storage, performance, key-value stores, persistent memory and memory hierarchy optimization. I enjoy speaking at industry and academic conferences and serving on conference program committees. I am currently co-chairing USENIX OpML 2019, the first industry conference on operational machine learning. I also serve on the steering committees of both OpML and HotStorage.

