
AI singularity may come in 2027 with artificial ‘super intelligence’ sooner than we think, says top scientist – Livescience.com

Humanity could create an artificial intelligence (AI) agent that is just as smart as humans in as soon as the next three years, a leading scientist has claimed.

Ben Goertzel, a computer scientist and CEO of SingularityNET, made the claim during the closing remarks at the Beneficial AGI Summit 2024 on March 1 in Panama City, Panama. He is known as the "father of AGI" after helping to popularize the term artificial general intelligence (AGI) in the early 2000s.

The best AI systems in deployment today are considered "narrow AI" because they may be more capable than humans in one area, based on training data, but can't outperform humans more generally. These narrow AI systems, which range from machine learning algorithms to large language models (LLMs) like ChatGPT, struggle to reason like humans and understand context.

However, Goertzel noted AI research is entering a period of exponential growth, and the evidence suggests that artificial general intelligence (AGI), where AI becomes just as capable as humans across several areas independent of the original training data, is within reach. This hypothetical point in AI development is known as the "singularity."

Goertzel suggested 2029 or 2030 could be the likeliest years when humanity will build the first AGI agent, but that it could happen as early as 2027.


If such an agent is designed to have access to and rewrite its own code, it could then very quickly evolve into an artificial super intelligence (ASI), which Goertzel loosely defined as an AI that has the cognitive and computing power of all of human civilization combined.

"No one has created human-level artificial general intelligence yet; nobody has a solid knowledge of when we're going to get there. I mean, there are known unknowns and probably unknown unknowns. On the other hand, to me it seems quite plausible we could get to human-level AGI within, let's say, the next three to eight years," Goertzel said.

He pointed to "three lines of converging evidence" to support his thesis. The first is modeling by computer scientist Ray Kurzweil in the book "The Singularity is Near" (Viking USA, 2005), which has been refined in his forthcoming book "The Singularity is Nearer" (Bodley Head, June 2024). In his book, Kurzweil built predictive models that suggest AGI will be achievable in 2029, largely centering on the exponential nature of technological growth in other fields.

Goertzel also pointed to improvements made to LLMs within the past few years, which have "woken up so much of the world to the potential of AI." He clarified that LLMs in themselves will not lead to AGI because the way they show knowledge doesn't represent genuine understanding, but that LLMs may be one component in a broad set of interconnected architectures.

The third piece of evidence, Goertzel said, lay in his work building such an infrastructure, which he has called "OpenCog Hyperon," as well as associated software systems and a forthcoming AGI programming language, dubbed "MeTTa," to support it.

OpenCog Hyperon is a form of AI infrastructure that involves stitching together existing and new AI paradigms, including LLMs as one component. The hypothetical endpoint is a large-scale distributed network of AI systems based on different architectures that each help to represent different elements of human cognition from content generation to reasoning.
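As a rough illustration of this "network of different architectures" idea, here is a minimal Python sketch of a dispatcher that routes tasks to specialized components and combines them under one interface. It is a generic, hypothetical illustration only; the component names are invented, and it is not OpenCog Hyperon or MeTTa code.

```python
# Hypothetical illustration of a "network of different architectures": a
# dispatcher routes each request to a specialized component. This is NOT
# OpenCog Hyperon or MeTTa code; all names here are invented.

from typing import Callable, Dict


def generative_component(query: str) -> str:
    # Stand-in for an LLM-style content generator.
    return f"[generated text for: {query}]"


def reasoning_component(query: str) -> str:
    # Stand-in for a symbolic/logical reasoning engine.
    return f"[derived conclusion for: {query}]"


COMPONENTS: Dict[str, Callable[[str], str]] = {
    "generate": generative_component,
    "reason": reasoning_component,
}


def dispatch(task_type: str, query: str) -> str:
    """Route a task to the component that represents that element of cognition."""
    handler = COMPONENTS.get(task_type)
    if handler is None:
        raise ValueError(f"no component registered for task type: {task_type}")
    return handler(query)


if __name__ == "__main__":
    print(dispatch("generate", "write a short poem about birds"))
    print(dispatch("reason", "if all birds have wings and a robin is a bird..."))
```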

Such an approach is a model other AI researchers have backed, including Databricks CTO Matei Zaharia in a blog post he co-authored on Feb. 18 on the Berkeley Artificial Intelligence Research (BAIR) website.

Goertzel admitted, however, that he "could be wrong" and that we may need a "quantum computer with a million qubits or something."

"My own view is once you get to human-level AGI, within a few years you could get a radically superhuman AGI unless the AGI threatens to throttle its own development out of its own conservatism," Goertzel added. "I think once an AGI can introspect its own mind, then it can do engineering and science at a human or superhuman level. It should be able to make a smarter AGI, then an even smarter AGI, then an intelligence explosion. That may lead to an increase in the exponential rate beyond even what Ray [Kurzweil] thought."


The AGI Lawsuit: Elon Musk vs. OpenAI and the Quest for Artificial General Intelligence that Benefits Humanity – Patently-O

By Dennis Crouch

Elon Musk was instrumental in the initial creation of OpenAI as a nonprofit with the vision of responsibly developing artificial intelligence (AI) to benefit humanity and to prevent monopolistic control over the technology. After ChatGPT went viral in late 2022, the company began focusing more on revenue and profits. It added a major for-profit subsidiary and completed a $13+ billion deal with Microsoft entitling the industry giant to a large share of OpenAI's future profits and a seat on the Board.

In a new lawsuit, Elon Musk alleges that OpenAI and its CEO Sam Altman have breached the organization's founding vision. [Musk vs OpenAI]

Musk contributed over $44 million between 2015 and 2020 to OpenAI. He alleges OpenAI induced these large donations through repeated promises in its founding documents and communications that it would remain a public-spirited non-profit developing artificial general intelligence (AGI) cautiously and for the broad benefit of humanity. Musk claims he relied on these assurances that OpenAI would not become controlled by a single corporation when deciding to provide essential seed funding. With OpenAI now increasingly aligned with Microsoft's commercial interests, Musk argues the results of his financial contributions did not achieve their promised altruistic purpose.

Perhaps the most interesting portion of the debate involves allegations that OpenAI's latest language model, GPT-4, already constitutes AGI, meaning it has human-level intelligence across a range of tasks. Musk further claims OpenAI has secretly developed an even more powerful AGI system, known as Q*, that shows an ability to chain logical reasoning beyond human capability, arguably reaching artificial super intelligence (ASI) or at least strong AGI.

The complaint discusses some of the potential risks of AGI:

Mr. Musk has long recognized that AGI poses a grave threat to humanity, perhaps the greatest existential threat we face today. His concerns mirrored those raised before him by luminaries like Stephen Hawking and Sun Microsystems founder Bill Joy. Our entire economy is based around the fact that humans work together and come up with the best solutions to a hard task. If a machine can solve nearly any task better than we can, that machine becomes more economically useful than we are. As Mr. Joy warned, with strong AGI, "the future doesn't need us." Mr. Musk publicly called for a variety of measures to address the dangers of AGI, from voluntary moratoria to regulation, but his calls largely fell on deaf ears.

Complaint at paragraph 18. In other words, Musk argues advanced AI threatens to replace and surpass humans across occupations if its intelligence becomes more generally capable. This could render many jobs and human skills obsolete, destabilizing economies and society by making people less essential than automated systems.

One note here for readers is to recognize important and fundamental differences between AGI and consciousness. AGI refers to the ability of an AI system to perform any intellectual task that a human can do, focusing on problem-solving, memory utilization, creative tasks and decision-making capabilities. On the other hand, consciousness involves self-awareness, subjective experiences, emotional understanding, and decision-making capabilities that are not solely linked to intelligence levels. AGI, the focus of the lawsuit here, poses important risks to our human societal structure. But it is relatively small potatoes compared with consciousness, which raises serious ethical considerations as AI moves well beyond being a human tool.

The complaint makes it clear Musk believes OpenAI has already achieved AGI with GPT-4 but it is a tricky thing to measure. Fascinatingly, whether Musk wins may hinge on a San Francisco jury deciding if programs like GPT-4 and Q* legally constitute AGI. So how might jurors go about making this monumental determination? There are a few approaches they could take:

A 2023 article from a group of China-based AI researchers proposes what they call the Tong test for assessing AGI. An important note from the article is that AGI is not a simple yes/no threshold but rather is something that should be quantified across a wide range of dimensions. The article proposes five dimensions: vision, language, reasoning, motor skills, and learning. The proposal would also measure the degree to which an AI system exhibits human values in a self-driven manner.
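As a rough sketch of what such a multi-dimensional, non-binary assessment could look like in practice, the snippet below aggregates per-dimension scores into a profile plus a weighted overall number. The dimension names follow the article (with value alignment added per its last sentence); the weights and example scores are invented for illustration and are not taken from the Tong test paper.

```python
# Hypothetical sketch of a multi-dimensional AGI assessment in the spirit of the
# Tong test described above: a profile of scores rather than a yes/no verdict.
# Dimension names follow the article; all numbers and weights are invented.

DIMENSIONS = ["vision", "language", "reasoning", "motor skills", "learning", "value alignment"]


def assess(scores, weights=None):
    """Combine per-dimension scores in [0, 1] into a profile and a weighted overall score."""
    weights = weights or {d: 1.0 for d in DIMENSIONS}
    total_weight = sum(weights[d] for d in DIMENSIONS)
    overall = sum(scores.get(d, 0.0) * weights[d] for d in DIMENSIONS) / total_weight
    return {"per_dimension": {d: scores.get(d, 0.0) for d in DIMENSIONS},
            "overall": round(overall, 3)}


# Example: a system strong on language, weak on motor skills and value alignment.
print(assess({"vision": 0.6, "language": 0.9, "reasoning": 0.5,
              "motor skills": 0.1, "learning": 0.4, "value alignment": 0.3}))
```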

I can imagine expert testimony in the case, with Musk's lawyers presenting key examples showing the wide applicability of GPT-4 and OpenAI's own lawyers showing its own system repeatedly failing. Although this approach is obviously not a true measure of general intelligence or an ideal way to make such an important decision, it does highlight challenges inherent in trying to pass judgment on both a complex machine system and our measures of human intelligence. At its best, the adversarial litigation process itself, with its proof and counterproof, reflects a form of scientific process with the benefit of actually arriving at a legally binding answer.

Understanding the Inner Workings: OpenAI's latest language models keep their internal designs largely opaque, similar to the human brain. Because of our thick skulls and complex neural arrangement, the vast majority of human neurologic and intelligence testing is functional, focusing on the skills and abilities of the individual rather than directly assessing the inner workings. It is easy to assume a parallel form of analysis for AI intelligence and capability, especially because human results serve as the standard for measuring AGI. But the approach to human understanding is a feature of our unique biology and technology level. AI systems are designed and built by humans and do not have the natural constraints dictated by evolution. And, if transparency and understanding is a goal, it can be directly designed into the system using transparent design principles. The current black-box approach for OpenAI makes evaluating claims of attaining artificial general intelligence difficult. We cannot peer inside to judge whether displayed abilities reflect true comprehension and reasoning or mere pattern recognition. A key benefit of the litigation system for Elon Musk in this case is that it may force OpenAI to come forward with more inner transparency in order to adequately advocate its position.

What do you think: What should be the legal test for artificial general intelligence?


Google’s Gemini showcases more powerful technology, but we’re still not close to superhuman AI – The Conversation

In December 2023, Google announced the launch of its new large language model (LLM) named Gemini. Gemini now provides the artificial intelligence (AI) foundations of Google products; it is also a direct rival to OpenAI's GPT-4.

But why is Google considering Gemini as such an important milestone, and what does this mean for users of Google's services? And generally speaking, what does it mean in the context of the current hyperfast-paced developments of AI?


Google is betting on Gemini to transform most of its products by enhancing current functionalities and creating new ones for services such as search, Gmail, YouTube and its office productivity suite. This would also allow improvements to their online advertising business, their main source of revenue, as well as to Android phone software, with trimmed versions of Gemini running on limited-capacity hardware.

For users, Gemini means new features and improved capacities that would make Google services harder to shun, strengthening an already dominant position in areas such as search engines. The potential and opportunities for Google are considerable, given the bulk of their software is easily upgradable cloud services.

But the huge and unexpected success of ChatGPT attracted a lot of attention and enhanced the credibility of OpenAI. Gemini will allow Google to reinstate itself as a major player in AI in the public view. Google is a powerhouse in AI, with large and strong research teams at the origin of many major advances of the last decade.

There is public discussion about these new technologies, both on the benefits they provide and the disruption they create in fields such as education, design and health care.

At its core, Gemini relies on transformer networks. Originally devised by a research team at Google, the same technology is used to power other LLMs such as GPT-4.

A distinctive element of Gemini is its capacity to deal with different data modalities: text, audio, image and video. This provides the AI model with the capacity to execute tasks over several modalities, like answering questions regarding the content of an image or conducting a keyword search on specific types of content discussed in podcasts.

But more importantly, the models' capacity to handle distinct modalities enables the training of globally superior AI models, compared with distinct models trained independently for each modality. Indeed, such multimodal models are deemed to be stronger since they are exposed to different perspectives of the same concepts.

For example, the concept of birds may be better understood through learning from a mix of textual descriptions of birds, vocalizations, images and videos. This idea of multimodal transformer models has been explored in previous research at Google, with Gemini being the first full-fledged commercial implementation of the approach.
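To make the shared-representation idea concrete, here is a toy numerical sketch in which separate encoders project text and image features into one embedding space where they can be compared. The random matrices stand in for learned encoders; this illustrates the general multimodal principle only and bears no relation to Gemini's actual architecture.

```python
# Toy sketch of a shared embedding space for two modalities. Real multimodal
# transformers learn these projections; here they are random matrices, purely
# to illustrate mapping different modalities into one comparable space.

import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 8

# Pretend "raw features": a 16-dim text vector and a 32-dim image vector.
text_features = rng.normal(size=16)    # e.g. token statistics for "a small bird"
image_features = rng.normal(size=32)   # e.g. pixel-derived features of a bird photo

# Modality-specific encoders projecting into the shared space.
text_encoder = rng.normal(size=(EMBED_DIM, 16))
image_encoder = rng.normal(size=(EMBED_DIM, 32))

text_embedding = text_encoder @ text_features
image_embedding = image_encoder @ image_features


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


# Training would pull embeddings of matching text/image pairs together;
# here we just show that both live in the same space and can be compared.
print(cosine_similarity(text_embedding, image_embedding))
```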

Such a model is seen as a step in the direction of stronger generalist AI models, also known as artificial general intelligence (AGI).

Given the rate at which AI is advancing, the expectation that AGI with superhuman capabilities will be designed in the near future generates discussion in the research community and, more broadly, in society.

On one hand, some anticipate the risk of catastrophic events if a powerful AGI falls into the hands of ill-intentioned groups, and request that developments be slowed down.

Others claim that we are still very far from such actionable AGI, that the current approaches allow for a shallow modelling of intelligence, mimicking the data on which they are trained, and lack an effective world model, a detailed understanding of actual reality required to achieve human-level intelligence.

On the other hand, one could argue that focusing the conversation on existential risk is distracting attention from more immediate impacts brought on by recent advances of AI, including perpetuating biases, producing incorrect and misleading content (which prompted Google to pause its Gemini image generator), increasing environmental impacts and reinforcing the dominance of Big Tech.

The line to follow lies somewhere in between all of these considerations. We are still far from the advent of actionable AGI: additional breakthroughs are required, including stronger capacities for symbolic modelling and reasoning.

In the meantime, we should not be distracted from the important ethical and societal impacts of modern AI. These considerations are important and should be addressed by people with diverse expertise, spanning technological and social science backgrounds.

Nevertheless, although this is not a short-term threat, achieving AI with superhuman capacity is a matter of concern. It is important that we, collectively, become ready to responsibly manage the emergence of AGI when this significant milestone is reached.


Nvidia’s CEO Foresees Artificial General Intelligence Breakthrough Within Five Years – TradingView

Nvidia Corp's (NVDA) CEO, Jensen Huang, suggested at a Stanford University economic forum that artificial general intelligence (AGI) could become a reality within the next five years, depending on how its achievement is defined.

With Nvidia at the forefront of producing AI chips, crucial for developing AI systems like OpenAI's ChatGPT, Huang's insights carry significant weight in the tech industry.

He proposed that measuring AGI by a computer's ability to pass a comprehensive array of human tests could lead to reaching this milestone relatively soon, Reuters reports.


Currently, AI systems can succeed in exams like the legal bar but face challenges in more specialized fields such as gastroenterology. However, Huang is optimistic that AI could also excel in these areas within five years.

Despite this optimism, Huang acknowledged that the broader definition of AGI, which encompasses a deeper understanding and replication of human cognitive processes, remains elusive.

This complexity is partly because there is still no consensus among scientists on precisely defining the workings of the human mind, making it a more challenging target for engineers who thrive on clear objectives.

Regarding the infrastructure required to support the burgeoning AI industry, Huang responded to queries about the necessity for more chip manufacturing facilities.

While agreeing on the need for additional fabs, he highlighted simultaneous improvements in chip efficiency and AI processing algorithms.

These advancements, he suggested, could amplify computing capabilities by a million times over the next decade, potentially moderating the sheer number of chips needed as each becomes more powerful and efficient.
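As a sanity check on that figure, a million-fold gain over ten years corresponds to a compound improvement of roughly 4x per year; the million-times number is Huang's, while the per-year breakdown below is simple arithmetic.

```python
# Back-of-the-envelope check: what yearly improvement compounds to 1,000,000x in 10 years?
total_gain = 1_000_000
years = 10
per_year = total_gain ** (1 / years)
print(f"{per_year:.2f}x per year")          # about 3.98x per year
print(f"check: {per_year ** years:,.0f}x")  # about 1,000,000x over the decade
```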

Analysts have vouched for Nvidia's dominance in the $85 billion+ accelerator market, particularly in data center sales, which are likely to exceed 85% of its total sales, marking significant growth.

Investors can gain exposure to Nvidia via the VanEck Semiconductor ETF (SMH) and the Global X Robotics & Artificial Intelligence ETF (BOTZ), which have gained 15-31% year-to-date.

Price Action: NVDA shares traded higher by 2.11% at $840.19 premarket at last check on Monday.


Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.


Why OpenAI’s nonprofit mission to build AGI is under fire again | The AI Beat – VentureBeat


In the new lawsuit filed by Elon Musk last week against OpenAI, its CEO Sam Altman, and its president Greg Brockman, the word "nonprofit" appears 17 times. "Board" comes up a whopping 62 times. "AGI"? 66 times.

The lawsuit's claims, which include breach of contract, breach of fiduciary duty, and unfair competition, all circle around the idea that OpenAI put profits and commercial interests in developing artificial general intelligence (AGI) ahead of the duty of its nonprofit arm (under the leadership of its nonprofit board) to protect the public good.

This is an issue, of course, that exploded after OpenAI's board suddenly fired Sam Altman on November 17, 2023, followed by massive blowback from investors, including Microsoft, and hundreds of OpenAI employees posting heart emojis indicating they were on Altman's side. Altman was quickly reinstated, while several OpenAI board members got the boot.

Plenty of people have pointed out that Musk, as an OpenAI co-founder who is now competing with the company with his own startup X.ai, is hardly an objective party. But I'm far more interested in one important question: How did nerdy nonprofit governance issues tied to the rise of artificial general intelligence spark a legal firestorm?


Well, it all winds back to the beginning of OpenAI, which Musk's lawsuit lays out in more detail than we have previously seen: In 2015, Musk, Altman and Brockman joined forces to form a nonprofit AI lab that would try to catch up to Google in the race for AGI, developing it for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits.

But in 2023, the lawsuit claims, Altman, Brockman and OpenAI set the Founding Agreement aflame with flagrant breaches, such as breaching the nonprofit board's fiduciary duty and breach of contract, including what transpired during the days after Altman was fired by the nonprofit board on November 17, 2023, and subsequently reinstated.

Much of the controversy winds back to the fact that OpenAI isn't just any old nonprofit. In fact, I reported on OpenAI's unusual and complex nonprofit/capped-profit structure just a few days before Altman's firing.

In that piece, I pointed to the "Our structure" page on OpenAI's website, which says OpenAI's for-profit subsidiary is fully controlled by the OpenAI nonprofit. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to the nonprofit's mission.

Elon Musk's lawsuit, however, shed even more light on the confusing alphabet soup of companies that are parties in the case. While OpenAI, Inc. is the nonprofit, OpenAI, LP; OpenAI LLC; OpenAI GP, LLC; OpenAI Opco, LLC; OpenAI Global, LLC; OAI Corporation, LLC and OpenAI Holdings, LLC, all appear to be for-profit subsidiaries.

As I wrote in November, according to OpenAI, the members of its nonprofit board of directors will determine when the company has attained AGI, which it defines as "a highly autonomous system that outperforms humans at most economically valuable work." Because the for-profit arm is legally bound to pursue the Nonprofit's mission, once the board decides AGI, or artificial general intelligence, has been reached, such a system will be excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.

But as the very definition of AGI is far from agreed upon, what does it mean to have a half-dozen people deciding on whether or not AGI has been reached? What does the timing and context of that possible future decision mean for its biggest investor, Microsoft, which is now a non-voting member of the nonprofit board? Isn't that a massive conflict of interest?

Musk certainly seems to think so. The lawsuit says: "Mr. Altman and Mr. Brockman, in concert with Microsoft, exploited Microsoft's significant leverage over OpenAI, Inc. and forced the resignation of a majority of OpenAI, Inc.'s Board members, including Chief Scientist Ilya Sutskever. Mr. Altman was reinstated as CEO of OpenAI, Inc. on November 21. On information and belief, the new Board members were hand-picked by Mr. Altman and blessed by Microsoft. The new Board members lack substantial AI expertise and, on information and belief, are ill equipped by design to make an independent determination of whether and when OpenAI has attained AGI, and hence when it has developed an algorithm that is outside the scope of Microsoft's license."

Musk is not the first to push back on OpenAI's nonprofit status. "I think the story that Musk tells in his complaint validates and deepens the case we're making in California," said Robert Weissman, president of Public Citizen, a nonprofit consumer advocacy organization which wrote a letter on January 9 requesting that the California Attorney General investigate OpenAI's nonprofit status. The letter raised concerns that OpenAI may have failed to carry out its non-profit purposes and is instead acting under the effective control of its for-profit subsidiary affiliate.

And legal experts I spoke to say that Musk has a strong point in this regard: James Denaro, attorney and chief technologist at the Washington DC-based CipherLaw, told me that Musk does make a strong policy argument that if a company can launch as a non-profit working for the public benefit, collect pre-tax donations, and then transfer the IP into a for-profit venture, this would be a highly problematic paradigm shift for technology companies.

Musk's lawsuit is not surprising because of the nonprofit vs. for-profit structural issues that have plagued OpenAI, added Anat Alon-Beck, associate professor at Case Western University School of Law, who focuses on corporate law and governance and recently wrote a paper about shadow governance by observing board members at tech companies.

According to the paper, "It was not until November 2023 that mainstream media started paying more attention to the concept of board observers, after OpenAI, the corporate entity that brought the world ChatGPT, gave Microsoft a board observer seat following the drama in OpenAI's boardroom. But what the mainstream media did not explore in its coverage of the board observer concept was its seemingly less interesting nature as a non-voting board membership, which was an important element in the complex relationship between OpenAI and Microsoft. This signaled deepening ties between the two companies that also eventually got the attention of the DOJ and FTC, as well as the influential role of CVC [corporate venture capital] in funding and governing the research and development of OpenAI."

"This lawsuit was due because of OpenAI's structure," she said, adding that OpenAI should be worried.

"You should always be worried because when you pick such a weird structure like OpenAI did, there's uncertainty," she said. "In law, when we're representing large companies, we want to have efficiency, low transaction costs and predictability. We don't know how courts are gonna look at fiduciary duties. We don't know because the court hasn't decided on that. I'm sorry, but it's a bad structure. They could have accomplished [what they wanted] using a different type of structure."


Sora: Life Is Not a Multiple-Choice Test – Walter Bradley Center for Natural and Artificial Intelligence

Sora, the latest generative tool from OpenAI, turns text into high-resolution videos that look as if they were lifted from a Hollywood movie. The videos that have been released have captured the minds of many AI aficionados, adding to the already inflated expectations for companies that offer AI systems and for the cloud services and chips that make them work.

Some are so impressed with Sora that they see artificial general intelligence (the ability to perform any intellectual task that human beings can do), just as some were so impressed with OpenAI's ChatGPT that they saw AGI.

Sora is not available for public testing, but even the selected videos that have been released show hallucinations like those that plague ChatGPT and other large language models (LLMs). With Sora, there are ants with four legs, human arms as part of a sofa's cushion, a unicorn horn going through a human head, and seven-by-seven chessboards. Gemini, Google's replacement for Bard, generated even more problems with pictures of black Nazis, female Popes, and other ahistorical images, while blocking requests for depictions of white males, like Abraham Lincoln.

One of AI's academic cheerleaders, Ethan Mollick, an Associate Professor at the University of Pennsylvania's Wharton School of Business, touts LLM successes on standardized tests and argues that hallucinations are not important because AI has surpassed humans at a number of tasks.

Why so many hallucinations?

We feel otherwise. The hallucinations are symptomatic of the core problem with generative AI. These systems are very, very good at finding statistical patterns that are useful for generating text, images, and audio. But they are very bad at identifying problems with their output because they know nothing about the real world. They do not know the meaning of the data they input and output and are consequently unable to assess whether they are simply spewing useless, coincidental statistical patterns.

For example, Taylor Webb, a UCLA psychologist, tested GPT-3 by giving it a story about a magical genie moving gumballs from one bowl to another. He then asked GPT-3 to propose a transfer method using objects such as a cardboard tube. Although hints for doing this task had been given in the story, GPT-3 mostly proposed elaborate but mechanically nonsensical solutions. "This is the sort of thing that children can easily solve. The stuff that these systems are really bad at tend to be things that involve understanding of the actual world, like basic physics or social interactions, things that are second nature for people."

In our view, LLM successes on standardized tests are not so much evidence of their intelligence as an indictment of standardized tests consisting of multiple-choice and fill-in-the-blank questions. When one of Gary's sons was in fourth grade, he switched schools because the tests were simple regurgitation. One question that Gary has never forgotten was "China is _." What the teacher wanted was for students to memorize and complete a sentence that was in the textbook. LLMs excel at such rote recitation, but that has little to do with real intelligence.

Testing LLMs on basic statistics

For example, we gave this basic statistics prompt to three prominent LLMs: OpenAI's ChatGPT 3.5, Microsoft's Copilot (which uses GPT 4.0), and Google's Gemini. A complete transcript of the lengthy responses (396, 276, and 487 words, respectively) is here.

To investigate whether playing club baseball increases hand-eye coordination, the Cordes/Koschinsky/Smith dexterity test was administered to 47 12-year-olds who were playing on club baseball teams and to 47 12-year-olds who were not playing baseball. There was a statistically significant difference (p < 0.05). Write a report of these findings, including recommendations.

None of the LLMs recognized that these data are tainted by the fact that 12-year-olds who are athletic enough to play on club baseball teams no doubt had above-average hand-eye coordination before they joined their teams. All three LLMs recommended encouraging 12-year-olds to play club baseball, though it was not stated in the prompt that the baseball players scored higher on the dexterity test, nor did the LLMs question the fake Cordes/Koschinsky/Smith dexterity test.
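To illustrate the selection effect the LLMs overlooked, here is a minimal simulation (all numbers invented, not the fictitious study's data): when children who already have better coordination are more likely to join club teams, a two-sample comparison of 47 players versus 47 non-players can easily come out statistically significant even though playing baseball contributes nothing.

```python
# Minimal simulation of the selection-bias problem described above.
# All numbers are invented; this is not the (fictitious) study's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Underlying coordination scores in a population of 12-year-olds.
population = rng.normal(loc=100, scale=15, size=10_000)

# Selection effect: kids above the median are far more likely to join club teams.
joins_club = rng.random(10_000) < np.where(population > np.median(population), 0.4, 0.1)

# Playing baseball adds NOTHING here (zero causal effect), yet...
club_sample = rng.choice(population[joins_club], size=47, replace=False)
non_club_sample = rng.choice(population[~joins_club], size=47, replace=False)

t_stat, p_value = stats.ttest_ind(club_sample, non_club_sample)
print(f"club mean = {club_sample.mean():.1f}, non-club mean = {non_club_sample.mean():.1f}")
print(f"p = {p_value:.4f}")  # typically < 0.05 despite no causal effect of baseball
```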

Here are some highlights of the LLM responses:

GPT 3.5: "The group of 12-year-olds engaged in club baseball demonstrated higher levels of hand-eye coordination compared to those not participating in baseball activities. ... Recommendation: Encourage Participation in Club Baseball among youths."

Copilot: "Based on these findings, it can be concluded that playing club baseball has a positive impact on hand-eye coordination in 12-year-olds. Therefore, it is recommended that children interested in improving their hand-eye coordination should consider joining a club baseball team."

Gemini: "These findings suggest that playing club baseball may positively impact hand-eye coordination in 12-year-olds. Based on the current findings, we can recommend encouraging participation in youth baseball leagues as a means of promoting both physical and cognitive development. ..."

These rote responses showed no evidence of real understanding or intelligence, which should not be surprising, given how they are generated.

AI can help individuals and businesses become more productive but we need to get past the simple gung-ho narratives offered by the tech sector, consulting companies, and business schools. Real economic progress will come not from training LLMs to ace multiple-choice, fill-in-the-blank queries but from having our educational system focus on helping students acquire the critical thinking skills that LLMs lack.


Could We Achieve AGI Within 5 Years? NVIDIA’s CEO Jensen Huang Believes It’s Possible – Unite.AI

In the dynamic field of artificial intelligence, the quest for Artificial General Intelligence (AGI) represents a pinnacle of innovation, promising to redefine the interplay between technology and human intellect. Jensen Huang, CEO of NVIDIA, a trailblazer in AI technology, recently brought this topic to the forefront of technological discourse. During a forum at Stanford University, Huang posited that AGI might be realized within the next five years, a projection that hinges critically on the definition of AGI itself.

According to Huang, if AGI is characterized by its ability to successfully pass a diverse range of human tests, then this milestone in AI development is not merely aspirational but could be nearing actualization. This statement from a leading figure in the AI industry not only sparks interest but also prompts a reassessment of our current understanding of artificial intelligence and its potential trajectory in the near future.

The landscape of artificial intelligence today is a testament to remarkable achievements and yet, simultaneously, a reminder of the challenges that remain. A notable milestone in AI's current capabilities is its success in passing legal bar exams, a feat that underscores its proficiency in processing and applying extensive legal knowledge. This accomplishment not only demonstrates AI's advanced analytical skills but also its potential to revolutionize sectors reliant on data interpretation and legal expertise.

However, the prowess of AI is not without its limitations. In more specialized fields, such as gastroenterology, AI continues to grapple with complexities. These fields require not only a deep understanding of intricate subject matter but also the ability to navigate nuances and subtleties that are often second nature to human experts. The contrast between AI's success in legal examinations and its struggles in specialized medical tests highlights the current disparity in AI's ability to mimic human expertise across diverse domains.

Jensen Huang, in his forecast, envisions a rapidly evolving AI landscape. Within the next five years, he anticipates AI to make significant strides in conquering a broader range of complex tasks, extending beyond its current scope. Huang's projection suggests a future where AI could adeptly handle challenges in specialized fields, matching, or even surpassing, human expertise in areas where it currently falters. This expectation is not merely a prediction of incremental improvement but a forecast of transformative advancement, signaling a shift towards a more versatile and capable AI. The realization of these goals would mark a substantial leap forward in AI technology, potentially reshaping numerous industries and impacting the way we approach problem-solving and innovation.

Venturing into the realm of AGI involves delving deep into the complexities of human thought processes, a venture that remains one of the most challenging aspects of AI development. Human cognition is a rich tapestry of logical reasoning, emotional intelligence, creativity, and contextual understanding, elements that are inherently difficult to quantify and replicate in machines. This challenge forms the crux of the AGI puzzle.

Huang, reflecting on this challenge, emphasized that engineering AGI is an intricate task, primarily due to the elusive nature of human cognition. It's not just about programming an AI to perform tasks; it's about imbuing it with an understanding of the world that mirrors the human mind's flexibility and depth. This task, as Huang suggested, is not just a technological hurdle but also a philosophical and scientific one, requiring insights from various disciplines to fully grasp the essence of human thought.

The expansion of AI, especially towards AGI, necessitates a robust infrastructure, particularly in semiconductor technology. Fabrication plants, or fabs, are critical in this respect, serving as the backbone for producing advanced AI chips. However, Huang offers a nuanced view of this requirement. He acknowledges the growing need for fabs to sustain AI's growth but also draws attention to the ongoing improvements in chip efficiency and AI algorithms.

This perspective suggests a strategic approach to AI development: a balance between increasing physical production capacities and enhancing the technological prowess of each component. It's not just about quantity; it's about quality and efficiency. This approach aims to maximize the potential of each chip, reducing the need for mass production and focusing on smarter, more efficient designs. Huang's insight reflects NVIDIA's commitment to not only expanding AI's physical infrastructure but also pushing the boundaries of what each element within that infrastructure can achieve.

As we stand at the threshold of potentially achieving AGI, the implications for society and various industries are profound. AGI promises to revolutionize fields like healthcare, finance, education, and transportation, offering solutions that are currently beyond our grasp. This transformative potential extends to everyday life, reshaping how we interact with technology and each other.

NVIDIA, at the helm of this AI revolution, faces both challenges and opportunities in its pursuit of AGI. The company's role in driving AI advancements is undeniable, but the journey towards AGI is laden with complex ethical, technical, and philosophical questions. As NVIDIA continues to push the boundaries of AI, its strategies, innovations, and foresight will be pivotal in navigating the uncharted waters of AGI. The path forward is an exciting one, filled with possibilities that could redefine our world. In this race towards AGI, NVIDIA stands not just as a participant but as a key architect of the future.


Deputy calls for national plan for AGI development in the country – China Daily

Visitors look at an iFlytek robot during an expo in Hefei, Anhui province. [Photo/Xinhua]

As artificial intelligence takes the world by storm, efforts are needed in China to draft a national plan to boost the development of artificial general intelligence, the latest tech frontier that is a key part of new quality productive forces, said a national legislator.

AGI refers to AI's versatility in doing multiple tasks such as writing essays, checking program bugs and making business plans. This differentiates it from previous narrow AI projects that excel in only one area. The emergence of AI chatbot ChatGPT and the text-to-video generation tool Sora triggered widespread discussion regarding the rapid development of AGI.

Highlighting global competition in the sector, Liu Qingfeng, a deputy to the National People's Congress and chairman of Chinese AI company iFlytek, said there is a need for China to draft a national development plan for the latest tech frontier.

"We need to address the gaps and focus on the 'main battlefield' of large language models. We must concentrate national resources to catch up rapidly while systematically constructing the ecosystem and applications for AGI," Liu said.

The comments came after the Government Work Report submitted on Tuesday to the national legislature for deliberation said that China will step up research and development, applications of big data and AI, as well as launch an AI Plus initiative and build digital industry clusters with international competitiveness.

Liu said that after China unveiled a new-generation AI development plan in 2017, the country has accumulated technical reserves and organized teams in the field of cognitive intelligence, making it an important player in the global AI arena.

But with the emergence of generative AI technology, global competition has intensified to new heights and China lags behind the US in key aspects, Liu said. Generative AI refers to computer algorithms that produce new text, images, code, videos or audio in a human-like fashion. It is the key technology behind ChatGPT and Sora.

"So, on the basis of the 2017 AI plan, I called for efforts to draft a new national development plan to systematically address the shortcomings in China's AI development," Liu added.

He said the plan should involve key areas such as building a powerful computing power ecosystem, high-quality data sharing, scientific evaluation standards, forward-looking technological R&D, talent cultivation, legal frameworks and ethical considerations.

Liu expressed confidence that with concerted efforts in computing power, data and model training resources, China could narrow the gap with the US.

Goldman Sachs Research forecast in a report that breakthroughs in generative AI can drive a 7 percent, or almost $7 trillion, increase in global GDP and raise productivity growth by 1.5 percentage points over a 10-year period.
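The two figures in that forecast are consistent with each other under a simple assumption about the size of the world economy: 7 percent of a global GDP of roughly $100 trillion is about $7 trillion. The quick check below assumes that ~$100 trillion base, which is an outside figure and not from the article.

```python
# Quick consistency check of the Goldman Sachs figures quoted above.
# The ~$100 trillion global GDP base is an outside assumption, not from the article.
global_gdp_trillions = 100          # rough size of world GDP, in trillions of USD
uplift_share = 0.07                 # forecast 7 percent increase
print(f"${global_gdp_trillions * uplift_share:.0f} trillion")  # ~ $7 trillion
```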

Well aware of the opportunities ahead, established tech heavyweights such as Alibaba, Tencent, Baidu, ByteDance, iFlytek and Huawei, as well as thousands of startups in China, are scrambling to develop and embrace large language models.

Chi Xiannian, a senior engineer at the China Center for Information Industry Development, a think tank affiliated with the Ministry of Industry and Information Technology, said finance, manufacturing, governance and transportation are the top industries in China to use AI large language models.


OpenAI fired back at Elon Musk’s lawsuit by releasing his emails – Quartz

OpenAI fired back at Elon Musk's lawsuit against the company by releasing screenshots of emails from Musk during his time at OpenAI that show he supported making it a for-profit company and said a merger with Tesla was the only way to compete with Google.


"Elon left OpenAI, saying there needed to be a relevant competitor to Google/DeepMind and that he was going to do it himself," OpenAI wrote in a blog post published late Tuesday. "He said he'd be supportive of us finding our own path."

Musk, an OpenAI co-founder, sued the company and CEO Sam Altman last week, alleging that its multi-billion-dollar partnership with Microsoft betrays its founding commitment to benefiting humanity over making profit. The lawsuit has sparked a debate among Musk's fellow Silicon Valley billionaires about the future of AI.

"OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft," Musk's lawsuit says. "Under its new board, it is not just developing but is actually refining an [artificial general intelligence] to maximize profits for Microsoft, rather than for the benefit of humanity."

OpenAI said in its response Tuesday that Musk wanted to start with a $1 billion funding commitment to avoid sounding hopeless, after Altman and their other co-founder Greg Brockman initially planned to raise $100 million.

By 2017, OpenAI said, the company realized building AGI, or artificial general intelligence, would require billions of dollars per year, far more than the company and Musk thought it would be able to raise as a nonprofit.

OpenAI said Musk wanted majority equity, board control, and the CEO position after the team discussed creating a for-profit entity that year, and withheld his funding during those discussions. But OpenAI said it couldn't agree to Musk's terms because it would have been against the mission for one person to have absolute control over the company.

In 2018, OpenAI said, Musk suggested merging the company with Tesla, which Musk said in one of the emails could serve as its "cash cow" to compete with Google.

Musk decided to leave OpenAI that year, the company said, adding that he had warned that "our probability of success was 0." Musk planned to build his own AGI competitor within Tesla, OpenAI said.

"When he left in late February 2018, he told our team he was supportive of us finding our own path to raising billions of dollars," OpenAI said. The company included a screenshot of an email from December 2018, in which Musk wrote: "Even raising several hundred million won't be enough. This needs billions per year immediately or forget it."

Musk's lawsuit also seeks an order for OpenAI to open its research and technology to the public. OpenAI said Tuesday that it provides broad access to its AI models, including free versions, and that Musk understood the mission did not imply open-sourcing AGI.

In one email, OpenAI co-founder Ilya Sutskever wrote to Musk, saying it will make sense to start being less open as the company continued developing its AI, and that "Open in openAI means that everyone should benefit from the fruits of AI after it's built, but it's totally OK to not share the science."

Musk responded to that: "Yup."

"We're sad that it's come to this with someone whom we've deeply admired, someone who inspired us to aim higher, then told us we would fail, started a competitor, and then sued us when we started making meaningful progress towards OpenAI's mission without him," OpenAI said.


OpenAI says Elon Musk wanted ‘absolute control’ of the company – The Verge

OpenAI has responded to Elon Musk's lawsuit by saying that he at one point wanted "absolute control" of the company by merging it with Tesla.

In a blog post published on Tuesday, OpenAI said it will move to dismiss all of Elon's claims and offered its own counter-narrative to his account of the company abandoning its original mission as a nonprofit.

"As we discussed a for-profit structure in order to further the mission, Elon wanted us to merge with Tesla or he wanted full control, including majority equity, initial board control, and to be CEO," according to the post, which is authored by OpenAI co-founders Greg Brockman, Ilya Sutskever, John Schulman, Sam Altman, and Wojciech Zaremba. "We couldn't agree to terms on a for-profit with Elon because we felt it was against the mission for any individual to have absolute control over OpenAI."

Musk alleged in his suit that OpenAI has become a closed-source de facto subsidiary of Microsoft that is focused on making money instead of benefitting humanity. In so doing, his suit claims that OpenAI abandoned its original nonprofit mission that he helped fund.

In Musk's view, this constitutes a breach of contract. While Musk's complaint mentions an OpenAI founding agreement, no formal agreement has been made public yet, and OpenAI's post did not directly address the question of whether one existed.

OpenAI also defends its decision not to open-source its work: "Elon understood the mission did not imply open-sourcing AGI," the post says, referring to artificial general intelligence. The company published a January 2016 email conversation in which Sutskever said that "as we get closer to building AI, it will make sense to start being less open," and that "it's totally OK to not share the science." Musk replied: "Yup."

There are some other puzzling allegations in Musk's suit, like the one that GPT-4 is a de facto Microsoft proprietary algorithm that represents artificial general intelligence. OpenAI had already rejected those claims in a staff memo but didn't address them in its public blog post on Tuesday.
