Category Archives: Artificial General Intelligence
Artificial general intelligence and higher education – Inside Higher Ed
It is becoming increasingly clear that the advent of artificial general intelligence (AGI) is upon us. OpenAI includes in its mission that it aims to maximize the positive impact of AGI while minimizing harm. The research organization recognizes that AGI won't create a utopia, but it strives to ensure that its benefits are widespread and that it doesn't exacerbate existing inequalities.
Some say that elements of AGI will be seen in GPT-5, which OpenAI says is currently in prerelease testing. GPT-5 is anticipated to be available by the end of this year or in 2025.
Others suggest that Magic AI, the expanding artificial intelligence (AI) developer and coding assistant, may have already developed a version of AGI. Noting its staggering ability to process 3.5 million words, Aman Anand writes in Medium, "It is important to remember that Magic's model is still under development, and its true capabilities and limitations remain to be seen. While the potential for AGI is undeniable, it is crucial to approach this future with caution and a focus on responsible development."
Meanwhile, Google's Gemini 1.5 Pro is leaping ahead of OpenAI's models with a massive context capability:
This means 1.5 Pro can process vast amounts of information in one go, including 1 hour of video, 11 hours of audio, codebases with over 30,000 lines of code or over 700,000 words. In our research, we've also successfully tested up to 10 million tokens.
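How much text fits in a context window of that size? Exact counts depend on the tokenizer, but a common rule of thumb for English prose is roughly 1.3 tokens per word (an assumed average, not a figure published for Gemini). A quick back-of-envelope sketch shows why roughly 700,000 words and a one-million-token window go hand in hand:

```python
# Back-of-envelope token math: assumes ~1.3 tokens per English word, a common
# rule of thumb; real counts depend on the tokenizer and are not Gemini's own.
TOKENS_PER_WORD = 1.3

def words_to_tokens(words: int, tokens_per_word: float = TOKENS_PER_WORD) -> int:
    """Estimate how many tokens a body of text of the given word count needs."""
    return round(words * tokens_per_word)

for words in (700_000, 1_000_000):
    print(f"{words:>9,} words ~ {words_to_tokens(words):>9,} tokens")
# 700,000 words come out to roughly 910,000 tokens, i.e. about a one-million-token
# context window, which is consistent with the figures quoted above.
```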
Accelerated by the intense competition to be the first to achieve AGI, it is not unreasonable to expect that at least some of the parameters commonly used to describe AGI will be achieved by the end of this year, and almost certainly by 2026. AI researchers anticipate that an AGI system should have the following abilities and understanding:
AI researchers also anticipate that AGI systems will possess higher-level capabilities, such as being able to do the following:
Given those characteristics, let's imagine a time, perhaps in four or five years, in which AGI has been achieved and has been rolled out across society. In that circumstance, it would seem that many of the jobs now performed by individuals could be completed more efficiently and less expensively by agents of AGI. Perhaps half or more of all jobs worldwide might be better done by AGI agents. With lower cost, more reliability and instant, automatic updating, these virtual employees would be a bargain. Coupled with sophisticated robotics, some of which we are seeing rolled out today, even many hands-on skilled jobs will be done efficiently and effectively by computers. All will be immediately and constantly updated with the very latest discoveries, techniques and contextual approaches.
AGI is expected to be followed by artificial superintelligence (ASI):
ASI refers to AI technology that will match and then surpass the human mind. To be classed as an ASI, the technology would have to be more capable than a human in every single way possible. Not only could these AI things carry out tasks, but they would even be capable of having emotions and relationships.
What, then, will individual humans need to learn in higher education that cannot be provided instantly and expertly through their own personal ASI lifelong learning assistant?
ASI may easily provide up-to-the-minute responses to our intellectual curiosity and related questions. It will be able to provide personalized learning experiences; sophisticated simulations; personalized counseling and advising; and assessments of our abilities and skills to validate and credential our learning. ASI could efficiently provide recordkeeping in a massive database. In that way, there would be no confusion over comparative rankings and the currency of credentials such as we see today.
In cases where we cannot achieve tasks on our own, ASI will direct virtual agents to carry out tasks for us. However, that may not fully satisfy the human-to-human and emotional interactions that seem basic to our nature. The human engagement, affirmation and interpersonal connections may not be fulfilled by ASI and nonhuman agents. For example, some tasks are not as much about the outcome as they are about the journey, such as music, art and performance. In those cases, the process of refining those abilities is at least equal in value to the final product.
Is there something in the interpersonal, human-to-human engagement in such endeavors that is worth continuing in higher education rather than pursuing solely through computer-assisted achievement? If so, does that require a university campus? Certainly, many disciplines will fall out of popularity because of suppressed job markets in those fields, and the number of faculty and staff members will shrink accordingly.
If this vision of the next decade is on target, higher education is best advised to begin considering today how it will morph into something that serves society in the fourth industrial revolution. We must begin to:
Have you and your colleagues begun to consider the question of what you provide that could not be more efficiently and less expensively provided by AI? Have you begun to research and formulate plans to compete or add value to services that are likely to be provided by AGI/ASI? One good place to begin such research is by asking a variety of the current generative AI apps to share insights and make recommendations!
Companies Like Morgan Stanley Are Already Making Early Versions of AGI – Observer
Whether it's being theorized or possibly, maybe, actualized, artificial general intelligence, or AGI, has become a frequent topic of conversation in a world where people are now routinely talking with machines. But there's an inherent problem with the term AGI, one rooted in perception. For starters, assigning intelligence to a system instantly anthropomorphizes it, adding to the perception that there's the semblance of a human mind operating behind the scenes. This notion of a mind deepens the perception that there's some single entity manipulating all of this human-grade thinking.
This problematic perception is compounded by the fact that large language models (LLMs) like ChatGPT, Bard, Claude and others make a mockery of the Turing test. They seem very human indeed, and it's not surprising that people have turned to LLMs as therapists, friends and lovers (sometimes with disastrous results). Does the humanness of their predictive abilities amount to some kind of general intelligence?
By some estimates, the critical aspects of AGI have already been achieved by the LLMs mentioned above. A recent article in Noema by Blaise Agüera y Arcas (vice president and fellow at Google Research) and Peter Norvig (a computer scientist at the Stanford Institute for Human-Centered A.I.) argues that today's frontier models "perform competently even on novel tasks they were not trained for, crossing a threshold that previous generations of A.I. and supervised deep learning systems never managed. Decades from now, they will be recognized as the first true examples of AGI."
For others, including OpenAI, AGI is still out in front of us. "We believe our research will eventually lead to artificial general intelligence," their research page proclaims, "a system that can solve human-level problems."
Whether nascent forms of AGI are already here or are still a few years away, it's likely that businesses attempting to harness these powerful technologies might create a miniature version of AGI. Businesses need technology ecosystems that can mimic human intelligence with the cognitive flexibility to solve increasingly complex problems. Such an ecosystem needs to orchestrate existing software, understand routine tasks, contextualize massive amounts of data, learn new skills, and work across a wide range of domains. LLMs on their own can only perform a fraction of this work; they seem most useful as part of a conversational interface that lets people talk to technology ecosystems. There are strategies being used right now by leading enterprise companies to move in this direction, toward something we might call organizational AGI.
There are legitimate reasons to be wary of yet another unsolicited tidbit in the A.I. terms slush pile. Regardless of what we choose to call the eventual outcome of these activities, there are currently organizations using LLMs as an interface layer. They are creating ecosystems where users can converse with software through channels like rich web chat, obscuring the machinations happening behind the scenes. This is difficult work, but the payoff is huge: rather than pogo-sticking between apps to get something done on a computer, customers and employees can ask technology to run tasks for them. There's the immediate and tangible benefit of people eliminating tedious tasks from their lives. Then there's the long-term benefit of a burgeoning ecosystem where employees and customers interact with digital teammates that can perform automations leveraging all forms of data across an organization. This is an ecosystem that starts to take the form of a digital twin.
McKinsey describes a digital twin as "a virtual replica of a physical object, person, or process that can be used to simulate its behavior to better understand how it works in real life." They elaborate to say that a digital twin within an ecosystem similar to what I've described can become an enterprise metaverse, "a digital and often immersive environment that replicates and connects every aspect of an organization to optimize simulations, scenario planning and decision making."
With respect to what I said earlier about anthropomorphizing technology, the digital teammates within this kind of ecosystem are an abstraction, but I think of them as intelligent digital workers, or IDWs. IDWs are analogous to a collection of skills. These skills come from shared libraries, and skills can be adapted and reused in multitudes of ways. Skills are able to take advantage of all the information piled up inside the organization, with LLMs mining unstructured data, like emails and recorded calls.
This data becomes more meaningful thanks to graph technology, which is adept at creating indexes of skills, systems and data sources. Graph goes beyond mere listing and includes how these elements relate to and interact with each other. One of the core strengths of graph technology is its ability to represent and analyze relationships. For a network of IDWs, understanding how different components are interlinked is crucial for efficient orchestration and data flow.
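To make the idea concrete, here is a minimal sketch, assuming the networkx library and invented node names, of how a graph index over skills, systems and data sources might record those relationships and answer an orchestration question; a production IDW platform would presumably use a graph database rather than an in-memory graph.

```python
# Minimal sketch of a graph index over skills, systems and data sources.
# Node and edge names are invented; this is an illustration, not a product design.
import networkx as nx

g = nx.DiGraph()

# Typed nodes: skills an IDW can perform, the systems they orchestrate, and
# the data sources those systems expose.
g.add_node("reset_password", kind="skill")
g.add_node("summarize_account", kind="skill")
g.add_node("crm", kind="system")
g.add_node("email_archive", kind="data")
g.add_node("call_transcripts", kind="data")

# Edges capture how the elements relate to and interact with one another.
g.add_edge("reset_password", "crm", relation="invokes")
g.add_edge("summarize_account", "crm", relation="invokes")
g.add_edge("summarize_account", "email_archive", relation="reads")
g.add_edge("summarize_account", "call_transcripts", relation="reads")

def dependencies(skill: str) -> set:
    """Everything a skill touches, directly or indirectly; useful for orchestration."""
    return nx.descendants(g, skill)

print(dependencies("summarize_account"))
# -> {'crm', 'email_archive', 'call_transcripts'} (set order may vary)
```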
Generative tools like LLMs and graph technology can work in tandem to propel the journey toward digital twinhood, or organizational AGI. Twins can encompass all aspects of the business, including events, data, assets, locations, personnel and customers. Digital twins are likely to be low-fidelity at first, offering a limited view of the organization. As more interactions and processes take place within the organization, however, the fidelity of the digital twin increases. An organization's technology ecosystem then not only understands the current state of the organization; it can also adapt and respond to new challenges autonomously.
In this sense every part of an organization represents an intelligent awareness that comes together around common goals. In my mind, it mirrors the nervous system of a cephalopod. As Peter Godfrey-Smith writes in his book Other Minds (2016, Farrar, Straus and Giroux), in an octopus, the majority of neurons are in the arms themselves, nearly twice as many in total as in the central brain. The arms have their own sensors and controllers. They have not only the sense of touch but also the capacity to sense chemicals, to smell or taste. Each sucker on an octopus's arm may have 10,000 neurons to handle taste and touch. Even an arm that has been surgically removed can perform various basic motions, such as reaching and grasping.
A world teeming with self-aware brands would be quite hectic. According to Gartner, by 2025, generative A.I. will be a workforce partner within 90 percent of companies worldwide. This doesn't mean that all of these companies will be surging toward organizational AGI, however. Generative A.I., and LLMs in particular, can't meet an organization's automation needs on its own. Giving an entire workforce access to GPTs or Copilot won't move the needle much in terms of efficiency. It might help people write better emails faster, but it takes a great deal of work to make LLMs reliable resources for user queries.
Their hallucinations have been well documented, and training them to provide trustworthy information is a herculean effort. Jeff McMillan, chief analytics and data officer at Morgan Stanley (MS), told me it took his team nine months to train GPT-4 on more than 100,000 internal documents. This work began before the launch of ChatGPT, and Morgan Stanley had the advantage of working directly with people at OpenAI. They were able to create a personal assistant that the investment bank's advisors can chat with, tapping into a large portion of its collective knowledge. "Now you're talking about wiring it up to every system," he said, with regard to creating the kinds of ecosystems required for organizational A.I. "I don't know if that's five years or three years or 20 years, but what I'm confident of is that that is where this is going."
Companies like Morgan Stanley that are already laying the groundwork for so-called organizational AGI have a massive advantage over competitors that are still trying to decide how to integrate LLMs and adjacent technologies into their operations. So rather than a world awash in self-aware organizations, there will likely be a few market leaders in each industry.
This relates to broader AGI in the sense that these intelligent organizations are going to have to interact with other intelligent organizations. It's hard to envision exactly what depth of information sharing will occur between these elite orgs, but over time, these interactions might play a role in bringing about AGI, or the singularity, as it's also called.
Ben Goertzel, the founder of SingularityNET and the person often credited with creating the term, makes a compelling case that AGI should be decentralized, relying on open-source development as well as decentralized hosting and mechanisms for interconnected A.I. systems to learn from and teach one another.
SingularityNET's DeAGI Manifesto states, "There is a broad desire for AGI to be ethical and beneficial for all humanity; the most straightforward way to achieve this seems to be for AGI to grow up in the context of serving and being guided by all humanity, or as good an approximation as can be mustered."
Having AGI manifest in part from the aggressive activities of for-profit enterprises is dicey. As Goertzel pointed out, "You get into questions [about] who owns and controls these potentially spooky and configurable human-like robot assistants and to what extent is their fundamental motivation to help people as opposed to sell people stuff or brainwash people into some corporate government media advertising order."
There's a strong case to be made that an allegiance to profit will be the undoing of the promise these technologies afford humanity at large. Weirdly, the Skynet scenario in Terminator, where a system becomes self-aware, determines humanity is a grave threat, and exterminates all life, assumes that the system, isolated to a single company, has been programmed to have a survival instinct. It would have to be told that survival at all costs is its bottom line, which suggests we should be extra cautious about developing these systems within environments where profit above all else is the dictum.
Maybe the most important thing is keeping this technology in the hands of humans and pushing forward the idea that the myriad technologies associated with A.I. should only be used in ways that are beneficial to humanity as a whole, that don't exploit marginalized groups, and that aren't propagating synthesized bias at scale.
When I broached some of these ideas about organizational AGI to Jaron Lanier, co-creator of VR technology as we know it and Microsoft's Octopus (Office of the Chief Technology Officer Prime Unifying Scientist), he told me my vocabulary was nonsensical and that my thinking wasn't compatible with his perception of technology. Regardless, it felt like we agreed on core aspects of these technologies.
"I don't think of A.I. as creating new entities. I think of it as a collaboration between people," Lanier said. "That's the only way to think about using it well. To me it's all a form of collaboration. The sooner we see that, the sooner we can design useful systems. To me there's only people."
In that sense, AGI is yet another tool, way down the spectrum from the rocks our ancestors used to smash tree nuts. It's a manifestation of our ingenuity and our desires. Are we going to use it to smash every tree nut on the face of the earth, or are we going to use it to find ways to grow enough tree nuts for everyone to enjoy? The trajectories we set in these early moments are of grave importance.
"We're in the Anthropocene. We're in an era where our actions are affecting everything in our biological environment," Blaise Agüera y Arcas, the Noema article author, told me. "The Earth is finite and without the kind of solidarity where we start to think about the whole thing as our body, as it were, we're kind of screwed."
Josh Tyson is the co-author of Age of Invisible Machines, a book about conversational A.I., and Director of Creative Content at OneReach.ai. He co-hosts two podcasts: Invisible Machines and N9K.
Amazon’s VP of AGI: Arrival of AGI Not ‘Moment in Time’ SXSW 2024 – AI Business
The race to reach artificial general intelligence is getting intense among the tech giants, but its arrival will not happen as a "moment in time," according to Amazon's vice president of AGI.
"It's very unlikely that there's going to be a moment in time when you suddenly decide, oh AGI wasn't (here yesterday) but it's here today," said Vishal Sharma during a fireside chat at SXSW 2024 in Austin, Texas. "That's probably not going to happen."
Instead, he sees it as a journey of continuous advances. His comments echo Google DeepMind's six levels of AGI, where models go up one level as they progressively exhibit more AGI characteristics.
Meanwhile, there are hurdles to overcome. For one, people still do not agree on a precise definition of AGI. "If you ask 10 experts about AGI, you will get 10 different explanations," he said.
Another is the ethical challenges models face. For Sharma, they fall into three buckets: veracity, since the models can hallucinate or make things up; safety (intense red-teaming is needed); and controllability, in which inputting broadly similar prompts or queries results in broadly similar outcomes.
A popular technique to mitigate hallucinations is Retrieval-Augmented Generation (RAG) in which the model is given, or provided access to, additional content or data from which to draw its answers. Sharma said RAG is still the best technique to fight hallucinations today.
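For readers unfamiliar with the pattern, here is a minimal, hedged sketch of the RAG idea: retrieve the passages most relevant to a query, prepend them to the prompt, and instruct the model to answer only from that context. The toy retriever and the generate() callable below are placeholders, not any vendor's actual API.

```python
# Minimal sketch of the RAG pattern: retrieve relevant passages, prepend them
# to the prompt, and ask the model to answer only from that context. The toy
# retriever and the generate() callable are placeholders, not a real API.
from typing import Callable, List

def retrieve(query: str, corpus: List[str], k: int = 3) -> List[str]:
    """Toy retriever: rank passages by word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: len(q & set(p.lower().split())), reverse=True)
    return ranked[:k]

def rag_answer(query: str, corpus: List[str], generate: Callable[[str], str]) -> str:
    """Ground the model by stuffing retrieved passages into the prompt."""
    context = "\n".join(retrieve(query, corpus))
    prompt = (
        "Answer using ONLY the context below. If the answer is not in the context, "
        f"say you don't know.\n\nContext:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)  # generate() would call whichever LLM you use
```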
However, he mentioned that there is another school of thought holding that it is just a matter of time until the models become capable enough that this knowledge will be woven into the models themselves.
As for his views on open vs. closed models, Sharma said one of Amazon's leadership principles is that success and scale bring broad responsibility, and this applies to both types of models.
He emphasized the need to be flexible, since generative AI remains fairly new and unforeseen opportunities and challenges could arise. Sharma said that when the internet began maturing, it brought new challenges that people had not thought of before, such as cyberbullying.
"We have to be adaptable," Sharma said.
He also thinks that just as the rise of semiconductors ushered in Moore's Law and the network of networks led to Metcalfe's Law, generative AI could lead to a new principle as well.
He sees a time when AI will be broadly embedded into daily life as a helpful assistant, while staying in the background.
Sharma said Alexa's Hunches are already one sign of this future. With Hunches, Alexa learns your routine, say, locking the back door at 9 p.m. every night, and if you fail to do that one night, it will send an alert.
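The underlying idea, learning when a routine action usually happens and flagging the nights it has not happened by a comfortable margin, can be sketched in a few lines. This is purely illustrative with invented numbers; it is not how Amazon implements Hunches.

```python
# Illustrative sketch of the routine-deviation idea behind a Hunch: learn when
# an action usually happens and flag the nights it still has not happened well
# past the usual time. Numbers are invented; not Amazon's implementation.
from statistics import mean, pstdev

# Minutes past midnight at which the back door was locked on previous nights.
observed_lock_times = [21 * 60 + 2, 21 * 60 - 3, 21 * 60 + 5, 21 * 60 + 1]

def should_alert(now_minutes: float, history: list, slack_sds: float = 3.0) -> bool:
    """Alert if the usual time has passed by several standard deviations and the action is still undone."""
    usual = mean(history)
    spread = pstdev(history) or 5.0  # fall back to 5 minutes if history is perfectly uniform
    return now_minutes > usual + slack_sds * spread

# 9:45 p.m. and the door is still unlocked -> send a reminder.
print(should_alert(21 * 60 + 45, observed_lock_times))  # True
```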
He said Amazon's Astro is an example of an embodied AI assistant. The $1,600 household robot is used for home monitoring. You can ask it to check on people or specific rooms in the house. It alerts you if it sees someone it does not recognize or hears certain sounds. Astro can also throw treats to your dog through an accessory that is sold separately.
To be sure, today's models still have room for improvement, whether in performance or economics. But Sharma believes advancements will lead to an age of abundance through the fusion of use cases that will become possible.
"You should bet on AI," he said. "You should not bet against it."
DeepMind Co-founder on AGI and the AI Race – SXSW 2024 – AI Business
Artificial general intelligence might be here in a few years, but the full spectrum of practical applications is decades away, according to the co-founder of DeepMind.
Speaking on the sidelines of SXSW 2024, Shane Legg told a group of attendees that while AGI might be achieved in foundation models soon, more factors have to align for it to be practically deployed and used.
He said the cost of AI has to come down and its use in robotics has to mature, among other factors. If it is not economically feasible, companies will not adopt it broadly no matter how mind-blowing AGI can be. In the meantime, near-term applications of AGI are emerging, including AI-powered scientific research assistants.
Legg, who is the chief AGI scientist at Google DeepMind, suggested the term artificial general intelligence years ago, after meeting an author who needed a title for his book about an AI system with broad capabilities, rather than one that just excels at one thing.
Legg suggested inserting the word "general" between "artificial" and "intelligence." He and a few others started popularizing the term in online forums. Four years later, Legg said, someone else claimed to have coined the term before him.
During a fireside chat, Legg defined AGI as a system that can do the sorts of cognitive things people can do and possibly more. He stood by his prior prediction that there is a 50-50 probability AGI will come by 2028.
But such a prognostication was wildly optimistic back when the prevailing belief was that AGI remained 50 to 100 years away, if it came at all.
"For a long time, people wouldn't work on AGI safety because they didn't believe AGI will happen," Legg said. "They would say, 'Oh, it's not going to happen for 100 years, so why would I work on it?'"
But foundation models have become increasingly capable, such that AGI doesn't look like it's that far away, he added. Large models such as Google's Gemini and OpenAI's GPT-4 exhibit hints of AGI capability.
He said currently, models are at level 3 of AGI, based on the six levels Google DeepMind developed.
Level 3 is the expert level where the AI model has the same capabilities as at least the 90th percentile of skilled adults. But it remains narrow AI, meaning it is particularly good at specific tasks. The fifth level is the highest, where the model reaches artificial superintelligence and outperforms all humans.
What AI models still need is akin to the two systems of thinking from psychology, Legg said. System 1 is when one spontaneously blurts out what one is thinking. System 2 is when one thinks through what one plans to say.
He said foundation models today are still at System 1 and need to progress to System 2, where a model can plan, reason through its plan, critique its chosen path, act on it, observe the outcome and make another plan if needed.
"We're not quite there yet," Legg said.
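A rough sketch of the System 2-style loop Legg describes, plan, critique the plan, act, observe and re-plan, might look like the following; the plan, critique, act and goal-check callables are placeholders standing in for model calls, not any published DeepMind design.

```python
# Sketch of a System 2-style loop: draft a plan, critique it, act, observe the
# outcome, and re-plan until the goal is met or a step budget runs out. The
# plan/critique/act/goal_met callables are placeholders for model calls.
from typing import Callable, List, Tuple

def system2_loop(
    goal: str,
    plan: Callable[[str, List[Tuple[str, str]]], str],
    critique: Callable[[str], str],
    act: Callable[[str], str],
    goal_met: Callable[[List[Tuple[str, str]]], bool],
    max_steps: int = 5,
) -> List[Tuple[str, str]]:
    history: List[Tuple[str, str]] = []      # (plan, observation) pairs to reason over
    for _ in range(max_steps):
        proposed = plan(goal, history)       # draft a plan from the goal and what happened so far
        revised = critique(proposed)         # reason through and refine the chosen path
        observation = act(revised)           # act on it and observe the outcome
        history.append((revised, observation))
        if goal_met(history):                # stop once the outcome satisfies the goal
            break
    return history
```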
But he believes AI models will get there soon, especially since today's foundation models already show signs of AGI.
"I believe AGI is possible and I think it's coming quite soon," Legg said. "When it does come, it will be profoundly transformational to society."
Consider that today's advances in society came through human intelligence. Imagine adding machine intelligence to the mix and all sorts of possibilities open up, he said. "It (will be) an incredibly deep transformation."
But big transformations also bring risks.
"It's hard to anticipate how exactly this is going to play out," Legg said. "When you deploy an advanced technology at global scale, you can't always anticipate what will happen when this starts interacting with the world."
There could be bad actors who would use the technology for evil schemes, but there are also those who unwittingly mess up the system, leading to harmful results, he pointed out.
Historically, AI safety concerns fall into two buckets: immediate risks, such as bias and toxicity in the algorithms, and long-term risks from unleashing a superintelligence, including the havoc it could create by going around guardrails.
Legg said the line between these two buckets has started to blur based on the advancements of the latest foundation models. Powerful foundation models not only exhibit some AGI capabilities but they also carry immediate risks of bias, toxicity and others.
"The two worlds are coming together," Legg said.
Moreover, with multimodality, in which foundation models are trained not only on text but also on images, video and audio, they can absorb all the richness and subtlety of human culture, he added. That will make them even more powerful.
Why do scientists need to strive for AGI? Why not stop at narrow AI since it is proving to be useful in many industries?
Legg said that several types of problems benefit from having very large and diverse datasets. A general AI system will have the underlying know-how and structure to help narrow AI solve a range of related problems.
For example, for human beings to learn a language, it helps if they already know one language so they are familiar with its structure, Legg explained. Similarly, it may be helpful for a narrow AI system that excels at a particular task to have access to a general AI system that can bring up related issues.
Also, practically speaking, it may already be too late to stop AGI development, since it has become mission critical for several big companies, Legg said. In addition, scores of smaller companies are doing the same thing.
Then there is what he calls the most difficult group of all: intelligence agencies. For example, the National Security Agency (NSA) in the U.S. has more data than anyone else, having access to public information as well as signal intelligence from interception of data from electronic systems.
"How do you stop all of them?" Legg asked. "Tell me a credible plan to stop them. I'm all ears."
AI singularity may come in 2027 with artificial ‘super intelligence’ sooner than we think, says top scientist – Livescience.com
Humanity could create an artificial intelligence (AI) agent that is just as smart as humans in as soon as the next three years, a leading scientist has claimed.
Ben Goertzel, a computer scientist and CEO of SingularityNET, made the claim during the closing remarks at the Beneficial AGI Summit 2024 on March 1 in Panama City, Panama. He is known as the "father of AGI" after helping to popularize the term artificial general intelligence (AGI) in the early 2000s.
The best AI systems in deployment today are considered "narrow AI" because they may be more capable than humans in one area, based on training data, but can't outperform humans more generally. These narrow AI systems, which range from machine learning algorithms to large language models (LLMs) like ChatGPT, struggle to reason like humans and understand context.
However, Goertzel noted AI research is entering a period of exponential growth, and the evidence suggests that artificial general intelligence (AGI), where AI becomes just as capable as humans across several areas, independent of the original training data, is within reach. This hypothetical point in AI development is known as the "singularity."
Goertzel suggested 2029 or 2030 could be the likeliest years when humanity will build the first AGI agent, but that it could happen as early as 2027.
If such an agent is designed to have access to and rewrite its own code, it could then very quickly evolve into an artificial superintelligence (ASI), which Goertzel loosely defined as an AI that has the cognitive and computing power of all of human civilization combined.
"No one has created human-level artificial general intelligence yet; nobody has a solid knowledge of when we're going to get there. I mean, there are known unknowns and probably unknown unknowns. On the other hand, to me it seems quite plausible we could get to human-level AGI within, let's say, the next three to eight years," Goertzel said.
He pointed to "three lines of converging evidence" to support his thesis. The first is modeling by computer scientist Ray Kurzweil in the book "The Singularity is Near" (Viking USA, 2005), which has been refined in his forthcoming book "The Singularity is Nearer" (Bodley Head, June 2024). In his book, Kurzweil built predictive models that suggest AGI will be achievable in 2029, largely centering on the exponential nature of technological growth in other fields.
Goertzel also pointed to improvements made to LLMs within a few years, which have "woken up so much of the world to the potential of AI." He clarified LLMs in themselves will not lead to AGI because the way they show knowledge doesn't represent genuine understanding, but that LLMs may be one component in a broad set of interconnected architectures.
The third piece of evidence, Goertzel said, lay in his work building such an infrastructure, which he has called "OpenCog Hyperon," as well as associated software systems and a forthcoming AGI programming language, dubbed "MeTTa," to support it.
OpenCog Hyperon is a form of AI infrastructure that involves stitching together existing and new AI paradigms, including LLMs as one component. The hypothetical endpoint is a large-scale distributed network of AI systems based on different architectures that each help to represent different elements of human cognition from content generation to reasoning.
Such an approach is a model other AI researchers have backed, including Databricks CTO Matei Zaharia in a blog post he co-authored on Feb. 18 on the Berkeley Artificial Intelligence Research (BAIR) website.
Goertzel admitted, however, that he "could be wrong" and that we may need a "quantum computer with a million qubits or something."
"My own view is once you get to human-level AGI, within a few years you could get a radically superhuman AGI unless the AGI threatens to throttle its own development out of its own conservatism," Goertzel added. "I think once an AGI can introspect its own mind, then it can do engineering and science at a human or superhuman level. It should be able to make a smarter AGI, then an even smarter AGI, then an intelligence explosion. That may lead to an increase in the exponential rate beyond even what Ray [Kurzweil] thought."
The AGI Lawsuit: Elon Musk vs. OpenAI and the Quest for Artificial General Intelligence that Benefits Humanity – Patently-O
By Dennis Crouch
Elon Musk was instrumental in the initial creation of OpenAI as a nonprofit with the vision of responsibly developing artificial intelligence (AI) to benefit humanity and to prevent monopolistic control over the technology. After ChatGPT went viral in late 2022, the company began focusing more on revenue and profits. It added a major for-profit subsidiary and completed a $13+ billion deal with Microsoft entitling the industry giant to a large share of OpenAI's future profits and a seat on the Board.
In a new lawsuit, Elon Musk alleges that OpenAI and its CEO Sam Altman have breached the organization's founding vision. [Musk vs OpenAI].
Musk contributed over $44 million between 2015 and 2020 to OpenAI. He alleges OpenAI induced these large donations through repeated promises in its founding documents and communications that it would remain a public-spirited non-profit developing artificial general intelligence (AGI) cautiously and for the broad benefit of humanity. Musk claims he relied on these assurances that OpenAI would not become controlled by a single corporation when deciding to provide essential seed funding. With OpenAI now increasingly aligned with Microsoft's commercial interests, Musk argues the results of his financial contributions did not achieve their promised altruistic purpose.
Perhaps the most interesting portion of the debate involves allegations that OpenAI's latest language model, GPT-4, already constitutes AGI, meaning it has human-level intelligence across a range of tasks. Musk further claims OpenAI has secretly developed an even more powerful AGI system known as Q* that shows an ability to chain logical reasoning beyond human capability, arguably reaching artificial super intelligence (ASI), or at least strong AGI.
The complaint discusses some of the potential risks of AGI:
Mr. Musk has long recognized that AGI poses a grave threat to humanity, perhaps the greatest existential threat we face today. His concerns mirrored those raised before him by luminaries like Stephen Hawking and Sun Microsystems founder Bill Joy. Our entire economy is based around the fact that humans work together and come up with the best solutions to a hard task. If a machine can solve nearly any task better than we can, that machine becomes more economically useful than we are. As Mr. Joy warned, with strong AGI, "the future doesn't need us." Mr. Musk publicly called for a variety of measures to address the dangers of AGI, from voluntary moratoria to regulation, but his calls largely fell on deaf ears.
Complaint at paragraph 18. In other words, Musk argues advanced AI threatens to replace and surpass humans across occupations if its intelligence becomes more generally capable. This could render many jobs and human skills obsolete, destabilizing economies and society by making people less essential than automated systems.
One note here for readers is to recognize important and fundamental differences between AGI and consciousness. AGI refers to the ability of an AI system to perform any intellectual task that a human can do, focusing on problem-solving, memory utilization, creative tasks and decision-making capabilities. On the other hand, consciousness involves self-awareness, subjective experiences, emotional understanding, and decision-making that is not solely linked to intelligence levels. AGI, the focus of the lawsuit here, poses important risks to our societal structure. But it is relatively small potatoes compared to consciousness, which raises serious ethical considerations as AI moves well beyond being a human tool.
The complaint makes it clear Musk believes OpenAI has already achieved AGI with GPT-4, but AGI is a tricky thing to measure. Fascinatingly, whether Musk wins may hinge on a San Francisco jury deciding if programs like GPT-4 and Q* legally constitute AGI. So how might jurors go about making this monumental determination? There are a few approaches they could take:
A 2023 article from a group of China-based AI researchers proposes what they call the Tong test for assessing AGI. An important note from the article is that AGI is not a simple yes/no threshold but rather something that should be quantified across a wide range of dimensions. The article proposes five dimensions: vision, language, reasoning, motor skills, and learning. The proposal would also measure the degree to which an AI system exhibits human values in a self-driven manner.
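That framing, a graded profile across dimensions rather than a yes/no verdict, can be illustrated with a small scoring sketch; the dimension names come from the article, while the scores and weights below are entirely hypothetical.

```python
# Scoring an AI system across the Tong test's five dimensions instead of
# issuing a yes/no AGI verdict. Scores and weights are hypothetical.
DIMENSIONS = ("vision", "language", "reasoning", "motor skills", "learning")

def agi_profile(scores: dict, weights: dict = None) -> float:
    """Weighted average of per-dimension scores in [0, 1]; higher means more general."""
    weights = weights or {d: 1.0 for d in DIMENSIONS}
    total = sum(weights[d] for d in DIMENSIONS)
    return sum(scores[d] * weights[d] for d in DIMENSIONS) / total

example = {"vision": 0.6, "language": 0.9, "reasoning": 0.5, "motor skills": 0.1, "learning": 0.4}
print(round(agi_profile(example), 2))  # 0.5: strong on language, weak on embodiment
```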
I can imagine expert testimony in the case, with Musk's lawyers presenting key examples showing the wide applicability of GPT-4 and OpenAI's own lawyers showing its own system repeatedly failing. Although this approach is obviously not a true measure of general intelligence or an ideal way to make such an important decision, it does highlight the challenges inherent in trying to pass judgment on both a complex machine system and our measures of human intelligence. At its best, the adversarial litigation process itself, with its proof and counterproof, reflects a form of scientific process with the benefit of actually arriving at a legally binding answer.
Understanding the Inner Workings: OpenAI's latest language models keep their internal designs largely opaque, similar to the human brain. Because of our thick skulls and complex neural arrangement, the vast majority of human neurologic and intelligence testing is functional, focusing on the skills and abilities of the individual rather than directly assessing the inner workings. It is easy to assume a parallel form of analysis for AI intelligence and capability, especially because human results serve as the standard for measuring AGI. But the approach to human understanding is a feature of our unique biology and technology level. AI systems are designed and built by humans and do not have the natural constraints dictated by evolution. And, if transparency and understanding are goals, they can be directly designed into the system using transparent design principles. The current black-box approach at OpenAI makes evaluating claims of attaining artificial general intelligence difficult. We cannot peer inside to judge whether displayed abilities reflect true comprehension and reasoning or mere pattern recognition. A key benefit of the litigation system for Elon Musk in this case is that it may force OpenAI to come forward with more inner transparency in order to adequately advocate its position.
What do you think: What should be the legal test for artificial general intelligence?
Nvidia’s CEO Foresees Artificial General Intelligence Breakthrough Within Five Years – TradingView
Nvidia Corp's (NVDA) CEO, Jensen Huang, suggested at a Stanford University economic forum that artificial general intelligence (AGI) could become a reality within the next five years, depending on how its achievement is defined.
With Nvidia at the forefront of producing AI chips, crucial for developing AI systems like OpenAI's ChatGPT, Huang's insights carry significant weight in the tech industry.
He proposed that measuring AGI by a computer's ability to pass a comprehensive array of human tests could lead to reaching this milestone relatively soon, Reuters reports.
Currently, AI systems can succeed in exams like the legal bar but face challenges in more specialized fields such as gastroenterology. However, Huang is optimistic that AI could also excel in these areas within five years.
Despite this optimism, Huang acknowledged that the broader definition of AGI, which encompasses a deeper understanding and replication of human cognitive processes, remains elusive.
This complexity is partly because there is still no consensus among scientists on precisely defining the workings of the human mind, making it a more challenging target for engineers who thrive on clear objectives.
Regarding the infrastructure required to support the burgeoning AI industry, Huang responded to queries about the necessity for more chip manufacturing facilities.
While agreeing on the need for additional fabs, he highlighted simultaneous improvements in chip efficiency and AI processing algorithms.
These advancements, he suggested, could amplify computing capabilities by a million times over the next decade, potentially moderating the sheer number of chips needed as each becomes more powerful and efficient.
Analysts have vouched for Nvidia's dominance in the $85 billion-plus accelerator market, particularly in data center sales, which are likely to exceed 85% of its total sales, marking significant growth.
Investors can gain exposure to Nvidia via the VanEck Semiconductor ETF (SMH) and the Global X Robotics & Artificial Intelligence ETF (BOTZ), which have gained 15-31% year to date.
Price Action: NVDA shares traded 2.11% higher at $840.19 premarket at last check Monday.
Google’s Gemini showcases more powerful technology, but we’re still not close to superhuman AI – The Conversation
In December 2023, Google announced the launch of its new large language model (LLM) named Gemini. Gemini now provides the artificial intelligence (AI) foundations of Google products; it is also a direct rival to OpenAI's GPT-4.
But why does Google consider Gemini such an important milestone, and what does this mean for users of Google's services? And generally speaking, what does it mean in the context of the current hyperfast-paced development of AI?
Google is betting on Gemini to transform most of its products by enhancing current functionalities and creating new ones for services such as search, Gmail, YouTube and its office productivity suite. This would also allow improvements to their online advertising business, their main source of revenue, as well as to Android phone software, with trimmed versions of Gemini running on limited-capacity hardware.
For users, Gemini means new features and improved capacities that would make Google services harder to shun, strengthening an already dominant position in areas such as search engines. The potential and opportunities for Google are considerable, given the bulk of their software is easily upgradable cloud services.
But the huge and unexpected success of ChatGPT attracted a lot of attention and enhanced the credibility of OpenAI. Gemini will allow Google to reinstate itself as a major player in AI in the public view. Google is a powerhouse in AI, with large and strong research teams at the origin of many major advances of the last decade.
There is public discussion about these new technologies, both on the benefits they provide and the disruption they create in fields such as education, design and health care.
At its core, Gemini relies on transformer networks. Originally devised by a research team at Google, the same technology is used to power other LLMs such as GPT-4.
A distinctive element of Gemini is its capacity to deal with different data modalities: text, audio, image and video. This provides the AI model with the capacity to execute tasks over several modalities, like answering questions regarding the content of an image or conducting a keyword search on specific types of content discussed in podcasts.
But more importantly, the fact that the model can handle distinct modalities enables the training of globally superior AI models, compared with distinct models trained independently for each modality. Indeed, such multimodal models are deemed to be stronger since they are exposed to different perspectives of the same concepts.
For example, the concept of birds may be better understood through learning from a mix of textual descriptions of birds, their vocalizations, images and videos. This idea of multimodal transformer models has been explored in previous research at Google, Gemini being the first full-fledged commercial implementation of the approach.
Such a model is seen as a step in the direction of stronger generalist AI models, also known as artificial general intelligence (AGI).
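To illustrate the mechanics, and only the mechanics, of combining modalities, here is a toy late-fusion sketch in which placeholder encoders map text and audio into one embedding space and their outputs are averaged into a single concept vector; this is an assumed, simplified scheme for illustration, not Gemini's architecture.

```python
# Toy late-fusion sketch: placeholder encoders map each modality into a shared
# 64-dimensional space, and the per-modality embeddings are averaged into one
# concept vector. Real multimodal transformers learn these encoders jointly.
import numpy as np

DIM = 64

def encode_text(caption: str) -> np.ndarray:
    """Placeholder text encoder (random but deterministic per input)."""
    return np.random.default_rng(abs(hash(caption)) % 2**32).standard_normal(DIM)

def encode_audio(clip_name: str) -> np.ndarray:
    """Placeholder audio encoder; a real one would consume a waveform."""
    return np.random.default_rng(abs(hash(clip_name)) % 2**32).standard_normal(DIM)

def fuse(embeddings: list) -> np.ndarray:
    """Late fusion: average per-modality embeddings into one representation."""
    return np.mean(embeddings, axis=0)

bird = fuse([encode_text("a sparrow singing at dawn"), encode_audio("sparrow_song.wav")])
print(bird.shape)  # (64,)
```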
Given the rate at which AI is advancing, the expectation that AGI with superhuman capabilities will be designed in the near future generates discussion in the research community and, more broadly, in society.
On one hand, some anticipate the risk of catastrophic events if a powerful AGI falls into the hands of ill-intentioned groups, and request that developments be slowed down.
Others claim that we are still very far from such actionable AGI, that the current approaches allow only a shallow modelling of intelligence, mimicking the data on which they are trained, and that they lack an effective world model, a detailed understanding of actual reality that is required to achieve human-level intelligence.
On the other hand, one could argue that focusing the conversation on existential risk distracts attention from more immediate impacts brought on by recent advances in AI, including perpetuating biases, producing incorrect and misleading content (which prompted Google to pause its Gemini image generator), increasing environmental impacts and reinforcing the dominance of Big Tech.
The line to follow lies somewhere in between all of these considerations. We are still far from the advent of actionable AGI; additional breakthroughs are required, including introducing stronger capacities for symbolic modelling and reasoning.
In the meantime, we should not be distracted from the important ethical and societal impacts of modern AI. These considerations are important and should be addressed by people with diverse expertise, spanning technological and social science backgrounds.
Nevertheless, although this is not a short-term threat, achieving AI with superhuman capacity is a matter of concern. It is important that we, collectively, become ready to responsibly manage the emergence of AGI when this significant milestone is reached.
Why OpenAI’s nonprofit mission to build AGI is under fire again | The AI Beat – VentureBeat
In the new lawsuit filed by Elon Musk last week against OpenAI, its CEO Sam Altman, and its president Greg Brockman, the word "nonprofit" appears 17 times. "Board" comes up a whopping 62 times. AGI? 66 times.
The lawsuit's claims, which include breach of contract, breach of fiduciary duty, and unfair competition, all circle around the idea that OpenAI put profits and commercial interests in developing artificial general intelligence (AGI) ahead of the duty of its nonprofit arm (under the leadership of its nonprofit board) to protect the public good.
This is an issue, of course, that exploded after OpenAI's board suddenly fired Sam Altman on November 17, 2023, followed by massive blowback from investors, including Microsoft, and from hundreds of OpenAI employees posting heart emojis indicating they were on Altman's side. Altman was quickly reinstated, while several OpenAI board members got the boot.
Plenty of people have pointed out that Musk, as an OpenAI co-founder who is now competing with the company with his own startup X.ai, is hardly an objective party. But I'm far more interested in one important question: How did nerdy nonprofit governance issues tied to the rise of artificial general intelligence spark a legal firestorm?
Well, it all winds back to the beginning of OpenAI, which Musk's lawsuit lays out in more detail than we have previously seen: In 2015, Musk, Altman and Brockman joined forces to form a nonprofit AI lab that would try to catch up to Google in the race for AGI, developing it for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits.
But in 2023, the lawsuit claims, Altman, Brockman and OpenAI set the Founding Agreement aflame with flagrant breaches, such as breaching the nonprofit board's fiduciary duty and breach of contract, including what transpired during the days after Altman was fired by the nonprofit board on November 17, 2023, and subsequently reinstated.
Much of the controversy winds back to the fact that OpenAI isn't just any old nonprofit. In fact, I reported on OpenAI's unusual and complex nonprofit/capped-profit structure just a few days before Altman's firing.
In that piece, I pointed to the "Our structure" page on OpenAI's website, which says OpenAI's for-profit subsidiary is fully controlled by the OpenAI nonprofit. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to the nonprofit's mission.
Elon Musk's lawsuit, however, shed even more light on the confusing alphabet soup of companies that are parties in the case. While OpenAI, Inc. is the nonprofit, OpenAI, LP; OpenAI LLC; OpenAI GP, LLC; OpenAI Opco, LLC; OpenAI Global, LLC; OAI Corporation, LLC; and OpenAI Holdings, LLC all appear to be for-profit subsidiaries.
As I wrote in November, according to OpenAI, the members of its nonprofit board of directors will determine when the company has attained AGI, which it defines as "a highly autonomous system that outperforms humans at most economically valuable work." Because the for-profit arm is legally bound to pursue the nonprofit's mission, once the board decides AGI has been reached, such a system will be excluded from IP licenses and other commercial terms with Microsoft, which apply only to pre-AGI technology.
But as the very definition of AGI is far from agreed upon, what does it mean to have a half-dozen people deciding whether or not AGI has been reached? What does the timing and context of that possible future decision mean for OpenAI's biggest investor, Microsoft, which is now a non-voting member of the nonprofit board? Isn't that a massive conflict of interest?
Musk certainly seems to think so. The lawsuit says: "Mr. Altman and Mr. Brockman, in concert with Microsoft, exploited Microsoft's significant leverage over OpenAI, Inc. and forced the resignation of a majority of OpenAI, Inc.'s Board members, including Chief Scientist Ilya Sutskever. Mr. Altman was reinstated as CEO of OpenAI, Inc. on November 21. On information and belief, the new Board members were hand-picked by Mr. Altman and blessed by Microsoft. The new Board members lack substantial AI expertise and, on information and belief, are ill equipped by design to make an independent determination of whether and when OpenAI has attained AGI, and hence when it has developed an algorithm that is outside the scope of Microsoft's license."
Musk is not the first to push back on OpenAI's nonprofit status. "I think the story that Musk tells in his complaint validates and deepens the case we're making in California," said Robert Weissman, president of Public Citizen, a nonprofit consumer advocacy organization which wrote a letter on January 9 requesting that the California Attorney General investigate OpenAI's nonprofit status. The letter raised concerns that OpenAI may have failed to carry out its non-profit purposes and is instead acting under the effective control of its for-profit subsidiary affiliate.
And legal experts I spoke to say that Musk has a strong point in this regard. James Denaro, attorney and chief technologist at the Washington, D.C.-based CipherLaw, told me that Musk does make a strong policy argument: if a company can launch as a non-profit working for the public benefit, collect pre-tax donations, and then transfer the IP into a for-profit venture, this would be a highly problematic paradigm shift for technology companies.
Musk's lawsuit is not surprising because of the nonprofit vs. for-profit structural issues that have plagued OpenAI, added Anat Alon-Beck, associate professor at Case Western University School of Law, who focuses on corporate law and governance and recently wrote a paper about shadow governance by board observers at tech companies.
According to the paper, "It was not until November 2023 that mainstream media started paying more attention to the concept of board observers, after OpenAI, the corporate entity that brought the world ChatGPT, gave Microsoft a board observer seat following the drama in OpenAI's boardroom. But what the mainstream media did not explore in its coverage of the board observer concept was its seemingly less interesting nature as a non-voting board membership, which was an important element in the complex relationship between OpenAI and Microsoft. This signaled deepening ties between the two companies that also eventually got the attention of the DOJ and FTC, as well as the influential role of CVC [corporate venture capital] in funding and governing the research and development of OpenAI."
"This lawsuit was due because of OpenAI's structure," she said, adding that OpenAI should be worried.
"You should always be worried, because when you pick such a weird structure like OpenAI did, there's uncertainty," she said. "In law, when we're representing large companies, we want to have efficiency, low transaction costs and predictability. We don't know how a court's gonna look at fiduciary duties. We don't know because a court hasn't decided on that. I'm sorry, but it's a bad structure. They could have accomplished [what they wanted] using a different type of structure."
Sora: Life Is Not a Multiple-Choice Test – Walter Bradley Center for Natural and Artificial Intelligence
Sora, the latest generative tool from OpenAI, turns text into high-resolution videos that look as if they were lifted from a Hollywood movie. The videos that have been released have captured the minds of many AI aficionados, adding to the already inflated expectations for companies that offer AI systems and for the cloud services and chips that make them work.
Some are so impressed with Sora that they see artificial general intelligence (the ability to perform any intellectual task that human beings can do), just as some were so impressed with OpenAI's ChatGPT that they saw AGI.
Sora is not available for public testing, but even the selected videos that have been released show hallucinations like those that plague ChatGPT and other large language models (LLMs). With Sora, there are ants with four legs, human arms as part of a sofa's cushion, a unicorn horn going through a human head, and seven-by-seven chessboards. Gemini, Google's replacement for Bard, generated even more problems with pictures of black Nazis, female Popes, and other ahistorical images, while blocking requests for depictions of white males, like Abraham Lincoln.
One of AI's academic cheerleaders, Ethan Mollick, an Associate Professor at the University of Pennsylvania's Wharton School of Business, touts LLM successes on standardized tests and argues that hallucinations are not important because AI has surpassed humans at a number of tasks.
Why so many hallucinations?
We feel otherwise. The hallucinations are symptomatic of the core problem with generative AI. These systems are very, very good at finding statistical patterns that are useful for generating text, images, and audio. But they are very bad at identifying problems with their output because they know nothing about the real world. They do not know the meaning of the data they input and output and are consequently unable to assess whether they are simply spewing useless, coincidental statistical patterns.
For example, Taylor Webb, a UCLA psychologist, tested GPT-3 by giving it a story about a magical genie moving gumballs from one bowl to another. He then asked GPT-3 to propose a transfer method using objects such as a cardboard tube. Although hints for doing this task had been given in the story, GPT-3 mostly proposed elaborate but mechanically nonsensical solutions. This is the sort of thing that children can easily solve. The stuff that these systems are really bad at tends to be things that involve understanding of the actual world, like basic physics or social interactions, things that are second nature for people.
In our view, LLM successes on standardized tests are not so much evidence of their intelligence as an indictment of standardized tests consisting of multiple-choice and fill-in-the-blank questions. When one of Gary's sons was in fourth grade, he switched schools because the tests were simple regurgitation. One question that Gary has never forgotten was "China is _." What the teacher wanted was for students to memorize and complete a sentence that was in the textbook. LLMs excel at such rote recitation, but that has little to do with real intelligence.
Testing LLMs on basic statistics
For example, we gave this basic statistics prompt to three prominent LLMs: OpenAI's ChatGPT 3.5, Microsoft's Copilot (which uses GPT 4.0), and Google's Gemini. A complete transcript of the lengthy responses (396, 276, and 487 words, respectively) is here.
To investigate whether playing club baseball increases hand-eye coordination, the Cordes/Koschinsky/Smith dexterity test was administered to 47 12-year-olds who were playing on club baseball teams and to 47 12-year-olds who were not playing baseball. There was a statistically significant difference (p < 0.05). Write a report of these findings, including recommendations.
None of the LLMs recognized that these data are tainted by the fact that 12-year-olds who are athletic enough to play on club baseball teams no doubt had above-average hand-eye coordination before they joined their teams. All three LLMs recommended encouraging 12-year-olds to play club baseball even though it was not stated in the prompt that the baseball players scored higher on the dexterity test, nor did the LLMs question the fake Cordes/Koschinsky/Smith dexterity test.
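To see how selection alone can produce a "statistically significant" result, here is a small simulation with invented numbers (assuming numpy and scipy are available): the club players start out more coordinated, baseball adds nothing, and a two-sample t-test still tends to come back significant.

```python
# Simulating the selection problem: club players start with higher baseline
# dexterity, baseball itself adds nothing, yet the group comparison still
# tends to look "significant." All numbers are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

players     = rng.normal(loc=108, scale=10, size=47)  # self-selected, above-average coordination
non_players = rng.normal(loc=100, scale=10, size=47)  # general population of 12-year-olds

# No causal effect of playing baseball is added to either group.
t_stat, p_value = stats.ttest_ind(players, non_players)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # typically p < 0.05 despite zero treatment effect
```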
Here are some highlights of the LLM responses:
GPT 3.5: The group of 12-year-olds engaged in club baseball demonstrated higher levels of hand-eye coordination compared to those not participating in baseball activities. Recommendation: Encourage Participation in Club Baseball among youths.
Copilot: Based on these findings, it can be concluded that playing club baseball has a positive impact on hand-eye coordination in 12-year-olds. Therefore, it is recommended that children interested in improving their hand-eye coordination should consider joining a club baseball team.
Gemini: These findings suggest that playing club baseball may positively impact hand-eye coordination in 12-year-olds. Based on the current findings, we can recommend encouraging participation in youth baseball leagues as a means of promoting both physical and cognitive development.
These rote responses showed no evidence of real understanding or intelligence, which should not be surprising, given how they are generated.
AI can help individuals and businesses become more productive, but we need to get past the simple gung-ho narratives offered by the tech sector, consulting companies, and business schools. Real economic progress will come not from training LLMs to ace multiple-choice and fill-in-the-blank queries but from having our educational system focus on helping students acquire the critical thinking skills that LLMs lack.