Category Archives: Artificial General Intelligence
Meta’s AI Needs to Speak With You – New York Magazine
Meta has an idea: Instead of ever leaving its apps, why not stay and chat with a bot? This past week, Mark Zuckerberg announced an update to Meta's AI models, claiming that, in some respects, they were now among the most capable in the industry. He outlined his company's plans to pursue AGI, or Artificial General Intelligence, and made some more specific predictions: "By the end of the decade, I think lots of people will talk to AIs frequently throughout the day, using smart glasses like what we're building with Ray-Ban Meta."
Maybe so! But for now, the company has something else in mind. Meta is deploying its chatbot across its most popular apps, including Facebook, Instagram, WhatsApp, and Messenger. Users might encounter the chatbot commenting on Facebook posts, chiming in when tagged in group messages, or offering suggestions in social feeds. You can chat with it directly, like ChatGPT. It'll generate images and write messages for you; much in the way that Microsoft and Google have built AI assistants into their productivity software, Meta has installed helpers into a range of social contexts. It'll be genuinely interesting to see if and how people use them in these contexts, and Meta will find out pretty quickly.
This move has been described as both savvy and desperate. Is Meta playing catch-up, plowing money into a fad, and foisting half-baked technology on its users? Or is Meta now the de facto leader in AI, with a capable model, a relevant hardware business, and more users than anyone else? Like AI models themselves, claims like these are hard to benchmark: every player in AI is racing in the same direction, toward an ill-defined destination where they, or at least their investors, believe great riches await.
In actual usage, though, Meta's AI tells a more mundane story about its intentions. The place most users are likely to encounter Meta's chatbots most of the time is in the context of search:
Meta AI is also available in search across Facebook, Instagram, WhatsApp and Messenger. You can access real-time information from across the web without having to bounce between apps. Let's say you're planning a ski trip in your Messenger group chat. Using search in Messenger you can ask Meta AI to find flights to Colorado from New York and figure out the least crowded weekends to go, all without leaving the Messenger app.
This is, in practice, both a wide and conspicuous deployment. The box used to search for other people, pages, groups, locations, or topics is now also something between a chatbot and a search engine.
Like ChatGPT, you can ask it about whatever you want, and it will synthesize a response. In contrast to some other chatbots, and in line with the sorts of results you might get from an AI-powered search engine like Perplexity or Google's Search Generative Experience, Meta's AI will often return something akin to search results, presented as a summary with footnoted links sourced from the web. When it works, the intention is pretty clear: Rather than providing something else to do within Facebook or Instagram, these features are about reducing the need to ever leave. Rather than switch out of Instagram to search for something on Google, or to tap around the web for a while, you can just tap Meta's search bar and get your question answered there.
This isn't a simple case of Meta maximizing engagement, although that's surely part of it. Deploying this sort of AI, which is expensive to train and uses a lot of computing power to run, is almost certainly costing Meta a huge amount of money at this scale, which is why OpenAI charges users for similar tools. It's also a plan for a predicted future in which the web (that is, openly accessible websites that exist outside of walled gardens like Meta's) is diminished, harder to browse, and less central to the online lives of most people. Now, smartphone users bounce between apps and web browsers and use web browsers within apps. Links provide connective tissue between apps that otherwise don't really talk to one another, and the web is a common resource to which most apps refer, at least somewhat. Here, Meta offers a preview of a world in which the web is reduced to a source for summarization and reference, less a thing that you browse than a set of data that's browsed on your behalf, by a machine.
This wouldn't be great news for the web, or the various parties that currently contribute to it; indeed, AI firms' broadly rapacious approach to any and all existing and available sources of data could have the effect of making such data harder to come by, and its creators less likely to produce, or at least share, it (as currently built, Meta's AI depends on results from Google and Bing). And let's not get ahead of ourselves: the first thing I did when I got this feature on Instagram was type "New York," which presented me with a list of accounts and a couple of suggested searches, including, curiously, "New York fries near me." I decided to check it out:
Guess it's a good thing I didn't actually want any fries. Elsewhere, Meta's AI is giving parenting advice on Facebook, claiming it's the parent of a gifted and disabled child who's attending a New York City public school.
Maybe Zuckerberg's right that we'll be having daily conversations with AIs in our Ray-Bans by the end of the decade. But right now, Meta is expecting us to have those conversations even if we don't like, need, or understand what we hear back. We're stuck testing the AI, and it us.
View original post here:
AI’s Illusion of Rapid Progress – Walter Bradley Center for Natural and Artificial Intelligence
The media loves to report on everything Elon Musk says, particularly when it is one of his very optimistic forecasts. Two weeks ago he said: "If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it's probably next year, within two years."
In 2019, he predicted there would be a million robo-taxis by 2020, and in 2016 he said about Mars, "If things go according to plan, we should be able to launch people probably in 2024 with arrival in 2025."
On the other hand, the media places less emphasis on negative news, such as announcements that Amazon would abandon its cashier-less technology called "Just Walk Out" because it wasn't working properly. Introduced three years ago, the tech purportedly enabled shoppers to pick up meat, dairy, fruit and vegetables and walk straight out without queueing, as if by magic. That magic, which Amazon dubbed "Just Walk Out" technology, was said to be autonomously powered by AI.
Unfortunately, it wasn't. Instead, the checkout-free magic was happening in part due to a network of cameras that were overseen by over 1,000 people in India who would verify what people took off the shelves. Their tasks included "manually reviewing transactions and labeling images from videos."
Why is this announcement more important than Musk's prediction? Because so many of the predictions by tech bros such as Elon Musk are based on the illusion that there are many AI systems that are working properly, when they are still only 95% there, with the remaining 5% dependent on workers in the background. The obvious example is self-driving vehicles, which are always a few years away, even as many vehicles are controlled by remote workers.
But self-driving vehicles and cashier-less technology are just the tip of the iceberg. A Gizmodo article listed about 10 examples of AI technology that seemed like they were working, but just weren't.
A company named Presto Voice sold its drive-thru automation services, purportedly powered by AI, to Carl's Jr., Chili's, and Del Taco, but in reality, Filipino offsite workers are required to help with over 70% of Presto's orders.
Facebook released a virtual assistant named M in 2015 that purportedly enabled AI to book your movie tickets, tell you the weather, or even order you food from a local restaurant. But it was mostly human operators who were doing the work.
There was an impressive Gemini demo in December of 2023 that showed how Gemini's AI could allegedly interpret video, image, and audio inputs in real time. That video turned out to be sped up and edited so humans could feed Gemini long text and image prompts to produce any of its answers. Today's Gemini can barely even respond to controversial questions, let alone do the backflips it performed in that demo.
For years, Amazon has offered a service called Mechanical Turk, which powered features such as Expensify's 2017 receipt-scanning tool: you could take a picture of a receipt and the app would automatically verify that it was an expense compliant with your employer's rules and file it in the appropriate location. In reality, a team of "secure technicians," often Amazon Mechanical Turk workers, filed the expense on your behalf.
Twitter offered a virtual assistant in 2016 that had access to your calendar and could correspond with you over email. In reality, humans, posing as AI, responded to emails, scheduled meetings on calendars, and even ordered food for people.
Google claims that AI is scanning your Gmail inbox for information to personalize ads, but in reality, humans are doing the work, and are seeing your private information.
In the last three cases, real humans were viewing private information such as credit card numbers, full names, addresses, food orders, and more.
Then there are the hallucinations that keep cropping up in the output from large language models. Many experts claim that the lowest hallucination rates among tracked AI models are around 3 to 5%, and that they aren't fixable because they stem from the LLMs doing exactly what they were developed and trained to do: respond, however they can, to user prompts.
Every time you hear one of the tech bros talking about the future, keep in mind that they think large language models and self-driving vehicles already work almost perfectly. They have already filed away those cases as successfully done and they are thinking about what's next.
For instance, Garry Tan, the president and CEO of startup accelerator Y Combinator, claimed that Amazon's cashier-less technology was "ruined by a professional managerial class that decided to use fake AI":
"Honestly it makes me sad to see a Big Tech firm ruined by a professional managerial class that decided to use fake AI, deliver a terrible product, and poison an entire market (autonomous checkout) when an earnest Computer Vision-driven approach could have reached profitability."
The president of Y Combinator should have known that humans were needed to make Amazon's technology work, as they are for many other AI systems. Y Combinator is one of America's most respected venture capital firms: it has funded around 4,000 startups, and Sam Altman, currently CEO of OpenAI, was its president between 2014 and 2019. For its president, Garry Tan, to claim that Amazon could have succeeded if it had used "real" tech, after many other companies have failed doing the same thing, suggests he is either misinformed or lying.
So the next time you hear that AGI is imminent or jobs will soon be gone, remember that most of these optimistic predictions assume that Amazon's cashier-less technology, self-driving vehicles, and many other systems already work, when they are only 95 percent there, and the last five percent is the hardest.
In reality, those systems won't be done for years, because the last few percentage points of work usually take as long as the first 95%. So what the media should be asking the tech bros is how long it will take before those systems go from 95% successfully done autonomously to 99.99% or higher. Similarly, what companies should be asking the consultants is when the 95% will become 99.99%, because the rapid progress is an illusion.
Too many people are extrapolating from systems that are purportedly automated, even though they aren't yet working properly. This means that any extrapolations should attempt to understand when they will become fully automated, not just when those new forms of automated systems will begin to be used. Understanding what's going on in the background is important for understanding what the future will be in the foreground.
Read the original here:
AI's Illusion of Rapid Progress - Walter Bradley Center for Natural and Artificial Intelligence
Will AI help or hinder trust in science? – CSIRO
By Jon Whittle 23 April 2024 6 min read
In the past year, generative artificial intelligence tools such as ChatGPT, Gemini, and OpenAI's video generation tool Sora have captured the public's imagination.
All that is needed to start experimenting with AI is an internet connection and a web browser. You can interact with AI like you would with a human assistant: by talking to it, writing to it, showing it images or videos, or all of the above.
While this capability marks entirely new terrain for the general public, scientists have used AI as a tool for many years.
But with greater public knowledge of AI will come greater public scrutiny of how it's being used by scientists.
AI is already revolutionising science: six percent of all scientific work leverages AI, not just in computer science, but in chemistry, physics, psychology and environmental science.
Nature, one of the world's most prestigious scientific journals, included ChatGPT on its 2023 Nature's 10 list of the world's most influential, and until then exclusively human, scientists.
The use of AI in science is twofold.
At one level, AI can make scientists more productive.
When Google DeepMind released an AI-generated dataset of more than 380,000 novel material compounds, Lawrence Berkeley Lab used AI to run compound synthesis experiments at a scale orders of magnitude larger than what could be accomplished by humans.
But AI has even greater potential: to enable scientists to make discoveries that otherwise would not be possible at all.
It was an AI algorithm that for the first time found signal patterns in brain-activity data that pointed to the onset of epileptic seizures, a feat that not even the most experienced human neurologist can repeat.
Early success stories of the use of AI in science have led some to imagine a future in which scientists will collaborate with AI scientific assistants as part of their daily work.
That future is already here. CSIRO researchers are experimenting with AI science agents and have developed robots that can follow spoken language instructions to carry out scientific tasks during fieldwork.
While modern AI systems are impressively powerful, especially so-called artificial general intelligence tools such as ChatGPT and Gemini, they also have drawbacks.
Generative AI systems are susceptible to "hallucinations," where they make up facts.
Or they can be biased. Google's Gemini depicting America's Founding Fathers as a diverse group is an interesting case of over-correcting for bias.
There is a very real danger of AI fabricating results, and this has already happened. It's relatively easy to get a generative AI tool to cite publications that don't exist.
Furthermore, many AI systems cannot explain why they produce the output they produce.
This is not always a problem. If AI generates a new hypothesis that is then tested by the usual scientific methods, there is no harm done.
However, for some applications a lack of explanation can be a problem.
Replication of results is a basic tenet in science, but if the steps that AI took to reach a conclusion remain opaque, replication and validation become difficult, if not impossible.
And that could harm peoples trust in the science produced.
A distinction should be made here between general and narrow AI.
Narrow AI is AI trained to carry out a specific task.
Narrow AI has already made great strides. Google DeepMind's AlphaFold model has revolutionised how scientists predict protein structures.
But there are many other, less well publicised, successes too, such as AI being used at CSIRO to discover new galaxies in the night sky, IBM Research developing AI that rediscovered Kepler's third law of planetary motion, or Samsung AI building AI that was able to reproduce Nobel prize-winning scientific breakthroughs.
When it comes to narrow AI applied to science, trust remains high.
AI systems, especially those based on machine learning methods, rarely achieve 100 percent accuracy on a given task. (In fact, machine learning systems outperform humans on some tasks, and humans outperform AI systems on many tasks. Humans using AI systems generally outperform humans working alone, and they also outperform AI working alone. There is a large scientific evidence base for this fact, including this study.)
AI working alongside an expert scientist, who confirms and interprets the results, is a perfectly legitimate way of working, and is widely seen as yielding better performance than human scientists or AI systems working alone.
On the other hand, general AI systems are trained to carry out a wide range of tasks, not specific to any domain or use case.
ChatGPT, for example, can create a Shakespearian sonnet, suggest a recipe for dinner, summarise a body of academic literature, or generate a scientific hypothesis.
When it comes to general AI, the problems of hallucinations and bias are most acute and widespread. That doesn't mean general AI isn't useful for scientists, but it needs to be used with care.
This means scientists must understand and assess the risks of using AI in a specific scenario and weigh them against the risks of not doing so.
Scientists are now routinely using general AI systems to help write papers, assist review of academic literature, and even prepare experimental plans.
One danger when it comes to these scientific assistants could arise if the human scientist takes the outputs for granted.
Well-trained, diligent scientists will not do this, of course. But many scientists out there are just trying to survive in a tough industry of publish-or-perish. Scientific fraud is already increasing , even without AI.
AI could lead to new levels of scientific misconduct, either through deliberate misuse of the technology or through sheer ignorance, as scientists don't realise that AI is making things up.
Both narrow and general AI have great potential to advance scientific discovery.
A typical scientific workflow conceptually consists of three phases: understanding what problem to focus on, carrying out experiments related to that problem and exploiting the results as impact in the real world.
AI can help in all three of these phases.
There is a big caveat, however. Current AI tools are not suitable to be used naively out-of-the-box for serious scientific work.
Only if researchers responsibly design, build, and use the next generation of AI tools in support of the scientific method will the public's trust in both AI and science be gained and maintained.
Getting this right is worth it: the possibilities of using AI to transform science are endless.
Google DeepMind's iconic founder Demis Hassabis famously said: "Building ever more capable and general AI, safely and responsibly, demands that we solve some of the hardest scientific and engineering challenges of our time."
The reverse conclusion is true as well: solving the hardest scientific challenges of our time demands building ever more capable, safe and responsible general AI.
Australian scientists are working on it.
This article was originally published by 360info under a Creative Commons license. Read the original article.
Professor Jon Whittle is Director of CSIRO's Data61, Australia's national centre for R&D in data science and digital technologies. He is co-author of the book Responsible AI: Best Practices for Creating Trustworthy AI Systems.
Dr Stefan Harrer is Program Director of AI for Science at CSIRO's Data61, leading a global innovation, research and commercialisation programme aiming to accelerate scientific discovery through the use of AI. He is the author of the Lancet article "Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine."
Stefan Harrer is an inventor on several granted US and international patents that relate to using AI for science.
See the original post:
Can Oil, Gas Companies Use Generative AI to Help Hire People? – Rigzone News
Artificial Intelligence (AI) will definitely help oil and gas companies hire people.
That's what Louisiana-based OneSource Professional Search believes, according to Dave Mount, the company's president.
"Our search firm is already implementing AI to augment our traditional recruiting/headhunting practices to more efficiently source a higher number of candidates, along with managing the extra activity related to sourcing and qualifying a larger amount of candidates/talent pool," Mount revealed to Rigzone.
"We're integrating AI as we speak and it's definitely helping in covering more ground and allowing us to access a larger talent pool, although it's a learning process to help the quality of the sourcing/screening match the increased quantity of qualified candidates," he added.
Gladney Darroh - an energy search specialist with 47 years of experience who developed and coaches the interview methodology Winning the Offer, which earned him the ranking of #1 technical and professional recruiter in Houston for 17 consecutive years by HAAPC - told Rigzone that oil and gas companies will use generative AI to help hire people, and so will everyone else.
"Generative AI is a historic leap in technology, and oil and gas companies have used technology for years to hire people," the Founding Partner and President of Houston, Texas-based Piper-Morgan Associates Personnel Consultants said.
"It is typically a time-intensive exercise to develop an initial pool of qualified candidates, determine which will consider a job change, which will consider a job change for this opportunity, who is really gettable, who meets the expectations of the hiring company in terms of what he/she brings to the table now, and if she/he possesses the talent to become a long-term asset," Darroh added.
"Deep learning models can be trained on keyword content searches for anything and everything: education, training, skillset, general and specific experience, all quantitative data," Darroh continued.
"Once AI is trained this way and applied to searches, AI will generate in seconds what an in-house or outside recruiter might generate over days or weeks," he went on to state.
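To make the keyword-driven screening Darroh describes concrete, here is a minimal, hypothetical Python sketch of how such candidate matching might work. The sample data, field names, and scoring rule are illustrative assumptions, not a description of OneSource's or any other vendor's actual system.

# Hypothetical sketch: rank candidates by how many of a role's required keywords
# appear in their resume text. Real recruiting tools are far more sophisticated.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    resume_text: str

def keyword_score(candidate: Candidate, required_terms: list[str]) -> float:
    """Return the fraction of required terms found in the resume (case-insensitive)."""
    text = candidate.resume_text.lower()
    hits = sum(1 for term in required_terms if term.lower() in text)
    return hits / len(required_terms) if required_terms else 0.0

def rank_candidates(candidates: list[Candidate], required_terms: list[str]) -> list[tuple[str, float]]:
    """Sort candidates from best to worst keyword match."""
    scored = [(c.name, keyword_score(c, required_terms)) for c in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    pool = [
        Candidate("A. Jones", "Petroleum engineer, 10 years of drilling optimization, Python"),
        Candidate("B. Smith", "Geologist with reservoir modeling and seismic interpretation"),
    ]
    for name, score in rank_candidates(pool, ["drilling", "reservoir", "python"]):
        print(f"{name}: {score:.2f}")

A screen like this covers only the quantitative side Darroh mentions; the qualitative judgment about long-term potential still sits with the recruiter.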
Darroh also noted that AI is developing inference - the ability to draw conclusions from data - which is the qualitative data that helps determine a candidate's long-term potential for promotion and leadership roles.
"For companies who are racing against their competitors to identify and hire the right talent, whether an entry level or an experienced hire, they will all adopt AI to help hire people," Darroh concluded.
Earlier this year, Enverus Chief Innovation Officer, Colin Westmoreland, revealed to Rigzone that the company believes generative AI will shape oil and gas decision making in 2024 and into the future.
"Generative AI will reduce the time to value significantly by providing rapid analysis and insights, leveraging vast amounts of curated data," he said.
Westmoreland also told Rigzone that generative AI is expected to become commonplace among oil and gas companies over the next few years.
Back in January, Trygve Randen, the Senior Vice President of Digital Products and Solutions at SLB, outlined to Rigzone that generative AI will continue to gain traction in the oil and gas industry this year.
In an article published on its website in January 2023, which was updated in April 2024, McKinsey & Company noted that generative AI describes algorithms, such as ChatGPT, that can be used to create new content, including audio, code, images, text, simulations, and videos.
OpenAI, which describes itself as an A.I. research and deployment company whose mission is to ensure that artificial general intelligence benefits all of humanity, introduced ChatGPT on November 30, 2022.
In April 2023, Rigzone looked at how ChatGPT will affect oil and gas jobs. To view that article, click here.
To contact the author, email andreas.exarheas@rigzone.com
Read the original here:
Can Oil, Gas Companies Use Generative AI to Help Hire People? - Rigzone News
Beyond AI doomerism: Navigating hype vs. reality in AI risk – TechTarget
With attention-grabbing headlines about the possible end of the world at the hands of an artificial superintelligence, it's easy to get caught up in the AI doomerism hype and imagine a future where AI systems wreak havoc on humankind.
Discourse surrounding any unprecedented moment in history -- the rapid growth of AI included -- is inevitably complex, characterized by competing beliefs and ideologies. Over the past year and a half, concerns have bubbled up regarding both the short- and long-term risks of AI, sparking debate over which issues should be prioritized.
Although considering the risks AI poses and the technology's future trajectory is worthwhile, discussions of AI can also veer into sensationalism. This hype-driven engagement detracts from productive conversation about how to develop and maintain AI responsibly -- because, like it or not, AI seems to be here to stay.
"We're all pursuing the same thing, which is that we want AI to be used for good and we want it to benefit people," said Brian Green, director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University.
As AI has gained prominence, so has the conversation surrounding its risks. Concerns range from immediate ethical and societal harms to long-term, more hypothetical risks, including whether AI could pose an existential threat to humanity. Those focused on the latter, a field known as AI safety, see AI as both an avenue for innovation and a source of possibly devastating risks.
Spencer Kaplan, an anthropologist and doctoral candidate at Yale University, studies the AI community and its discourse around AI development and risk. During his time in the San Francisco Bay Area AI safety scene, he's found that many experts are both excited and worried about the possibilities of AI.
"One of the key points of agreement is that generative AI is both a source of incredible promise and incredible peril," Kaplan said.
One major long-term concern about AI is existential risk, often abbreviated as x-risk, the fear that AI could someday cause the mass destruction of humans. An AI system with unprecedented and superhuman levels of intelligence, often referred to as artificial general intelligence (AGI), is considered a prerequisite for this type of destruction. Some AI safety researchers postulate that AGI with intelligence indistinguishable from or superior to that of humans would have the power to wipe out humankind. Opinions in the AI safety scene on the likelihood of such a hostile takeover event vary widely; some consider it highly probable, while others only acknowledge it as a possibility, Kaplan said.
Some circles hold a prevailing belief that long-term risks are the most concerning, regardless of their likelihood -- a concept influenced by tenets of effective altruism (EA), a philosophical and social movement that first gained prominence in Oxford, U.K., and the Bay Area in the late 2000s. Effective altruists' stated aim is to identify the most impactful, cost-effective ways to help others using quantifiable evidence and reasoning.
In the context of AI, advocates of EA and AI safety have coalesced around a shared emphasis on high-impact global issues. In particular, both groups are influenced by longtermism, the belief that focusing on the long-term future is an ethical priority and, consequently, that potential existential risks are most deserving of attention. The prevalence of this perspective, in turn, has meant prioritizing research and strategies that aim to mitigate existential risk from AI.
Fears about extinction-level risk from AI might seem widespread; a group of industry leaders publicly said as much in 2023. A few years prior, in 2021, a subgroup of OpenAI developers split off to form their own safety-focused AI lab, Anthropic, motivated by a belief in the long-term risks of AI and AGI. More recently, Geoffrey Hinton, sometimes referred to as the godfather of AI, left Google, citing fears about the power of AI.
"There is a lot of sincere belief in this," said Jesse McCrosky, a data scientist and principal researcher for open source research and investigations at Mozilla. "There's a lot of true believers among this community."
As conversation around the long-term risks of AI intensifies, the term AI doomerism has emerged to refer to a particularly extreme subset of those concerned about existential risk and AGI -- often dismissively, sometimes as a self-descriptor. Among the most outspoken is Eliezer Yudkowsky, who has publicly expressed his belief in the likelihood of AGI and the downfall of humanity due to a hostile superhuman intelligence.
However, the term is more often used as a pejorative than as a self-label. "I have never heard of anyone in AI safety or in AI safety with longtermist concerns call themselves a doomer," Kaplan said.
Although those in AI safety typically see the most pressing AI problems as future risks, others -- often called AI ethicists -- say the most pressing problems of AI are happening right now.
"Typically, AI ethics is more social justice-oriented and looking at the impact on already marginalized communities, whereas AI safety is more the science fiction scenarios and concerns," McCrosky said.
For years, individuals have raised serious concerns about the immediate implications of AI technology. AI tools and systems have already been linked to racial bias, political manipulation and harmful deepfakes, among other notable problems. Given AI's wide range of applications -- in hiring, facial recognition and policing, to name just a few -- its magnification of biases and opportunity for misuse can have disastrous effects.
"There's already unsafe AI right now," said Chirag Shah, professor in the Information School at the University of Washington and founding co-director of the center for Responsibility in AI Systems and Experiences. "There are some actual important issues to address right now, including issues of bias, fairness, transparency and accountability."
As Emily Bender, a computational linguist and professor at the University of Washington, has argued, conversations that overlook these types of AI risks are both dangerous and privileged, as they fail to account for AI's existing disproportionate effect on marginalized communities. Focusing solely on hypothetical future risk means missing the important issues of the present.
"[AI doomerism] can be a distraction from the harms that we already see," McCrosky said. "It puts a different framing on the risk and maybe makes it easier to sweep other things under the carpet."
Rumman Chowdhury, co-founder of the nonprofit Humane Intelligence, has long focused on tech transparency and ethics, including in AI systems. In a 2023 Rolling Stone article, she commented that the demographics of doomer and x-risk communities skew white, male and wealthy -- and thus tend not to include victims of structural inequality.
"For these individuals, they think that the biggest problems in the world are can AI set off a nuclear weapon?" Chowdhury told Rolling Stone.
McCrosky recently conducted a study on racial bias in multimodal LLMs. When he asked the model to determine whether a person was trustworthy based solely on facial images, he found that racial bias often influenced its decision-making process. Such biases are deeply concerning and have serious implications, especially when considered in the context of AI applications, such as military and defense.
"We've already seen significant harm from AI," McCrosky said. "These are real harms that we should be caring a whole lot more about."
In addition to fearing that discussions of existential risk overshadow current AI-related harms, many researchers also question the scientific foundation for concerns about superintelligence. If there's little basis for the idea that AGI could develop in the first place, they worry about the effect such sensational language could have.
"We jump to [the idea of] AI coming to destroy us, but we're not thinking enough about how that happens," Shah said.
McCrosky shared this skepticism regarding the existential threat from AI. The plateau currently reached by generative AI isn't indicative of the AGI that longtermists worry about, he said, and the path towards AGI remains unclear.
Transformers, the models underlying today's generative AI, were a revolutionary concept when Google published the seminal paper "Attention Is All You Need" in 2017. Since then, AI labs have used transformer-based architectures to build the LLMs that power generative AI tools, like OpenAI's chatbot, ChatGPT.
Over time, LLMs have become capable of handling increasingly large context windows, meaning that the AI system can process greater amounts of input at once. But larger context windows come with higher computational costs, and technical issues, like hallucinations, have remained a problem even for highly powerful models. Consequently, scientists are now contending with the possibility that advancing to the next frontier in AI may require a completely new architecture.
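As a rough illustration of why larger context windows carry higher computational costs, the back-of-the-envelope sketch below uses a simplified cost model in which self-attention work grows with the square of the number of tokens. The model dimensions are hypothetical placeholders, not figures for any real system.

# Simplified cost model: attention work scales roughly with n^2 * d per layer,
# where n is the context length in tokens and d is the hidden dimension.
def attention_flops(num_tokens: int, hidden_dim: int, num_layers: int) -> float:
    """Approximate FLOPs for attention scores plus the weighted sum over values."""
    return 2.0 * (num_tokens ** 2) * hidden_dim * num_layers

if __name__ == "__main__":
    hidden_dim, num_layers = 4096, 32  # hypothetical model size
    for context in (4_000, 32_000, 128_000):
        flops = attention_flops(context, hidden_dim, num_layers)
        print(f"{context:>7} tokens -> ~{flops:.2e} attention FLOPs")

Under this simplified assumption, doubling the context window roughly quadruples the attention compute, which is one reason long-context models remain expensive to run.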
"[Researchers] are kind of hitting a wall when it comes to transformer-based architecture," Kaplan said. "What happens if they don't find this new architecture? Then, suddenly, AGI becomes further and further off -- and then what does that do to AI safety?"
Given the uncertainty around whether AGI can be developed in the first place, it's worth asking who stands to benefit from AI doomerism talk. When AI developers advocate for investing more time, money and attention into AI due to possible AGI risks, a self-interested motive may also be at play.
"The narrative comes largely from people that are building these systems and are very excited about these systems," McCrosky said. While he noted that AI safety concerns are typically genuine, he also pointed out that such rhetoric "becomes very self-serving, in that we should put all our philanthropic resources towards making sure we do AI right, which is obviously the thing that they want to do anyway."
Despite the range of beliefs and motivations, one thing is evident: The dangers associated with AI feel incredibly tangible to those who are concerned about them.
A future with extensive integration of AI technologies is increasingly easy to imagine, and it's understandable why some genuinely believe these developments could lead to serious dangers. Moreover, people are already affected by AI every day in unintended ways, from harmless but frustrating outcomes to dangerous and disenfranchising ones.
To foster productive conversation amid this complexity, experts are emphasizing the importance of education and engagement. When public awareness of AI outpaces understanding, a knowledge gap can emerge, said Reggie Townsend, vice president of data ethics at SAS and member of the National AI Advisory Committee.
"Unfortunately, all too often, people fill the gap between awareness and understanding with fear," Townsend said.
One strategy for filling that gap is education, which Shah sees as the best way to build a solid foundation for those entering the AI risk conversation. "The solution really is education," he said. "People need to really understand and learn about this and then make decisions and join the real discourse, as opposed to hype or fear." That way, sensational discourse, like AI doomerism, doesn't eclipse other AI concerns and capabilities.
Technologists have a responsibility to ensure that overall societal understanding of AI improves, Townsend said. Hopefully, better AI literacy results in more responsible discourse and engagement with AI.
Townsend emphasized the importance of meeting people where they are. "Oftentimes, this conversation gets way too far ahead of where people actually are in terms of their willingness to accept and their ability to understand," he said.
Lastly, polarization impedes progress. Those focused on current concerns and those worried about long-term risk are more connected than they might realize, Green said. Seeing these perspectives as contradictory or in a zero-sum way is counterproductive.
"Both of their projects are looking at really important social impacts of technology," he said. "All that time spent infighting is time that could be spent actually solving the problems that they want to solve."
In the wake of recent and rapid AI advancements, harms are being addressed on multiple fronts. Various groups and individuals are working to train AI more ethically, pushing for better governance to prevent misuse and considering the impact of intelligent systems on people's livelihoods, among other endeavors. Seeing these efforts as inherently contradictory -- or rejecting others' concerns out of hand -- runs counter to a shared goal that everyone can hopefully agree on: If we're going to build and use powerful AI, we need to get it right.
Olivia Wisbey is associate site editor for TechTarget Enterprise AI. She graduated from Colgate University with Bachelor of Arts degrees in English literature and political science, where she served as a peer writing consultant at the university's Writing and Speaking Center.
Lev Craig contributed reporting and research to this story.
See the original post:
Beyond AI doomerism: Navigating hype vs. reality in AI risk - TechTarget
Regulation must be put in place today for the superhuman AI of tomorrow – Verdict
Companies such as OpenAI and Meta have publicly committed to achieving AGI and are working towards this goal. Credit: TY Lim / Shutterstock.
With its potential, artificial intelligence (AI) is perhaps the most hotly discussed technology theme in the world. Since the launch of ChatGPT in November 2022, a day has not gone by in which AI did not capture the headlines. And it is only the beginning. GlobalData forecasts that the global AI market will grow drastically at a compound annual growth rate (CAGR) of 39% between 2023 and 2030.
According to GlobalData, we are only in the very early stages of AI. But even now, AI can do a lot. It can engage in high-quality conversations and some people have even reportedly got married to AI bots. Such incredible capabilities at this early stage suggest how advanced the technology will get. Scarily, at one point, AI could go on to become more intelligent than the most gifted minds in the world. Researchers call this stage of development artificial superintelligence (ASI).
So far, many influential businesspeople and experts have made guesses and expressed their opinions on ASI. In April 2024, Elon Musk argued that AI smarter than humans will be here as soon as the end of next year. This is a drastic change from his previous forecast in which he predicted that ASI would exist by 2029.
However, according to GlobalData, this is unlikely. GlobalData notes that researchers theorise that we first must achieve artificial general intelligence (AGI) before reaching ASI. At this stage, machines will have consciousness and be able to do anything people can do.
Although companies such as OpenAI and Meta have publicly committed to achieving AGI and are working towards this goal, it looks like it is going to take years before we see human-like AI machines around us that can do and think exactly as humans do. As a result, GlobalData expects that AGI will be achieved no earlier than 35 years from now. Considered the holy grail of AI, AGI remains completely theoretical for now, despite the hype.
And considering that ASI is the step after AGI, it is also likely decades away.
This level of advancement brings to mind science fiction movies and literature, in which AI takes over the world. Notably, Elon Musk has commented on this possibility before, as he argued there is a slim but not zero chance that AI will kill all of humanity.
In September 2023, headlines announced that tech executives like Bill Gates, Elon Musk, and Mark Zuckerberg met with lawmakers to discuss the dangers of uncontrolled AI and superintelligence behind closed doors. Evidently, not everyone is excited about ASI.
Even today's "good enough" AI, with its limited capabilities, is concerning tech leaders and world governments. AI-enhanced issues such as misinformation have already caused considerable trouble. Noticing the current and future threats of AI, governments, key influencers, and organizations have taken action. For instance, in March 2024, the UN General Assembly adopted the first-ever resolution on AI to ensure that the technology is used safely and reliably.
In the end, it looks like it may still be some time before ASI exists. Nevertheless, although ASI has the potential to revolutionize how humans and machines interact, steps must be taken today to minimize any potential threats. Perhaps, to ensure ASI remains safe, we must turn to fiction. Maybe the world needs a set of rules like Isaac Asimov's Three Laws of Robotics, which were followed by robots in many of his stories and prevented the machines from harming humans.
Read the rest here:
Regulation must be put in place today for the superhuman AI of tomorrow - Verdict
Tech companies want to build artificial general intelligence. But who decides when AGI is attained? – The Atlanta Journal Constitution
But what exactly is AGI, and how will we know when it's been attained? Once on the fringe of computer science, it's now a buzzword that's being constantly redefined by those trying to make it happen.
Not to be confused with the similar-sounding generative AI, which describes the AI systems behind the crop of tools that "generate" new documents, images and sounds, artificial general intelligence is a more nebulous idea.
It's not a technical term but "a serious, though ill-defined, concept," said Geoffrey Hinton, a pioneering AI scientist who's been dubbed a "Godfather of AI."
"I don't think there is agreement on what the term means," Hinton said by email this week. "I use it to mean AI that is at least as good as humans at nearly all of the cognitive things that humans do."
Hinton prefers a different term, superintelligence, for AGIs that are better than humans.
A small group of early proponents of the term AGI were looking to evoke how mid-20th century computer scientists envisioned an intelligent machine. That was before AI research branched into subfields that advanced specialized and commercially viable versions of the technology from face recognition to speech-recognizing voice assistants like Siri and Alexa.
Mainstream AI research "turned away from the original vision of artificial intelligence, which at the beginning was pretty ambitious," said Pei Wang, a professor who teaches an AGI course at Temple University and helped organize the first AGI conference in 2008.
Putting the G in AGI was a signal to those who still want to do the big thing. "We don't want to build tools. We want to build a thinking machine," Wang said.
Without a clear definition, it's hard to know when a company or group of researchers will have achieved artificial general intelligence or if they already have.
"Twenty years ago, I think people would have happily agreed that systems with the ability of GPT-4 or (Google's) Gemini had achieved general intelligence comparable to that of humans," Hinton said. "Being able to answer more or less any question in a sensible way would have passed the test. But now that AI can do that, people want to change the test."
Improvements in "autoregressive" AI techniques that predict the most plausible next word in a sequence, combined with massive computing power to train those systems on troves of data, have led to impressive chatbots, but they're still not quite the AGI that many people had in mind. Getting to AGI requires technology that can perform just as well as humans in a wide variety of tasks, including reasoning, planning and the ability to learn from experiences.
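"Autoregressive" here simply means generating one token at a time, each conditioned on everything generated so far. The toy sketch below swaps a trained neural network for a hand-written bigram table to show the loop in miniature; it illustrates the idea only and resembles no production system.

# Toy autoregressive generation: repeatedly append the most plausible next word
# according to a tiny hand-written "model" (a bigram lookup table).
TOY_BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
}

def predict_next(token: str) -> str | None:
    """Return the highest-probability next token, or None if the model has no guess."""
    candidates = TOY_BIGRAMS.get(token)
    return max(candidates, key=candidates.get) if candidates else None

def generate(prompt: str, max_new_tokens: int = 5) -> str:
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        nxt = predict_next(tokens[-1])
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

if __name__ == "__main__":
    print(generate("the"))  # -> "the cat sat down"

Large language models do the same thing with learned probabilities over tens of thousands of possible tokens rather than a three-entry table, which is why scale of data and compute matters so much.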
Some researchers would like to find consensus on how to measure it. It's one of the topics of an upcoming AGI workshop next month in Vienna, Austria, the first at a major AI research conference.
"This really needs a community's effort and attention so that mutually we can agree on some sort of classifications of AGI," said workshop organizer Jiaxuan You, an assistant professor at the University of Illinois Urbana-Champaign. One idea is to segment it into levels in the same way that carmakers try to benchmark the path between cruise control and fully self-driving vehicles.
Others plan to figure it out on their own. San Francisco company OpenAI has given its nonprofit board of directors, whose members include a former U.S. Treasury secretary, the responsibility of deciding when its AI systems have reached the point at which they "outperform humans at most economically valuable work."
"The board determines when we've attained AGI," says OpenAI's own explanation of its governance structure. Such an achievement would cut off the company's biggest partner, Microsoft, from the rights to commercialize such a system, since the terms of their agreements only apply to pre-AGI technology.
Hinton made global headlines last year when he quit Google and sounded a warning about AI's existential dangers. A new Science study published Thursday could reinforce those concerns.
Its lead author is Michael Cohen, a University of California, Berkeley, researcher who studies "the expected behavior of generally intelligent artificial agents, particularly those competent enough to present a real threat to us by out-planning us."
Cohen made clear in an interview Thursday that such long-term AI planning agents don't yet exist. But they "have the potential" to get more advanced as tech companies seek to combine today's chatbot technology with more deliberate planning skills using a technique known as reinforcement learning.
"Giving an advanced AI system the objective to maximize its reward and, at some point, withholding reward from it, strongly incentivizes the AI system to take humans out of the loop, if it has the opportunity," according to the paper whose co-authors include prominent AI scientists Yoshua Bengio and Stuart Russell and law professor and former OpenAI adviser Gillian Hadfield.
"I hope we've made the case that people in government (need) to start thinking seriously about exactly what regulations we need to address this problem," Cohen said. "For now, governments only know what these companies decide to tell them."
With so much money riding on the promise of AI advances, it's no surprise that AGI is also becoming a corporate buzzword that sometimes attracts a quasi-religious fervor.
It's divided some of the tech world between those who argue it should be developed slowly and carefully and others, including venture capitalists and rapper MC Hammer, who've declared themselves part of an accelerationist camp.
The London-based startup DeepMind, founded in 2010 and now part of Google, was one of the first companies to explicitly set out to develop AGI. OpenAI did the same in 2015 with a safety-focused pledge.
But now it might seem that everyone else is jumping on the bandwagon. Google co-founder Sergey Brin was recently seen hanging out at a California venue called the AGI House. And less than three years after changing its name from Facebook to focus on virtual worlds, Meta Platforms in January revealed that AGI was also on the top of its agenda.
Meta CEO Mark Zuckerberg said his company's long-term goal was "building full general intelligence" that would require advances in reasoning, planning, coding and other cognitive abilities. While Zuckerberg's company has long had researchers focused on those subjects, his attention marked a change in tone.
At Amazon, one sign of the new messaging was when the head scientist for the voice assistant Alexa switched job titles to become head scientist for AGI.
While not as tangible to Wall Street as generative AI, broadcasting AGI ambitions may help recruit AI talent who have a choice in where they want to work.
"In deciding between an old-school AI institute or one whose goal is to build AGI and has sufficient resources to do so, many would choose the latter," said You, the University of Illinois researcher.
See more here:
What to Expect from ChatGPT 5 – The Dales Report
The TDR Three Takeaways on ChatGPT 5:
OpenAI is on the verge of launching ChatGPT 5, a milestone that underscores the swift progress in artificial intelligence and its future role in human-computer interaction. As the next version after ChatGPT 4, ChatGPT 5 aims to enhance AI's capability to understand and produce text that mirrors human conversation, offering a smoother, more individualized, and accurate experience. This expectation is based on OpenAI's continuous efforts to advance AI technology, with ChatGPT 5 anticipated to debut possibly by this summer. This upcoming version is part of OpenAI's wider goal to achieve artificial general intelligence (AGI), striving to create systems that can outperform human intelligence.
The model is built on generative pre-trained transformer (GPT) technology, a foundational AI mechanism that has been central to the progression of ChatGPT models. Each version of ChatGPT is built on an updated, more sophisticated GPT, allowing it to manage a broader spectrum of content, including, potentially, video. The transition from ChatGPT 4 to ChatGPT 5 focuses on improving personalization, minimizing errors, and broadening the range of content it can interpret. This progression is noteworthy, given ChatGPT 4's already substantial capabilities, such as its awareness of events up until April 2023, its proficiency in analyzing extensive prompts, and its ability to seamlessly integrate tools like the Dall-E 3 image generator and Bing search engine.
Sam Altman, the CEO of OpenAI, has openly discussed the advancements and the enhanced intelligence the new model will introduce. He stresses the significance of multimodality, adding speech input and output, images, and eventually video, to cater to the increasing demand for advanced AI tools. Additionally, Altman points to advancements in reasoning abilities and dependability as key areas where ChatGPT 5 will excel beyond its predecessors. OpenAI plans to use both publicly available data sets and extensive proprietary data from organizations to train ChatGPT 5, demonstrating a thorough approach to improving its learning mechanisms.
The anticipation of ChatGPT 5's release has sparked conversations about AI's future, with various sectors keen to see its impact on human-machine interactions. OpenAI's emphasis on safety testing and its "red teaming" strategy highlights its dedication to introducing a secure and dependable AI model. This dedication is further shown by the organization's efforts to navigate challenges like GPU supply shortages through a worldwide network of investors and partnerships.
Although the exact release date for ChatGPT 5 and the full extent of its capabilities remain uncertain, the AI community and users are filled with excitement. The quickening pace of GPT updates, as seen in the launch schedule of earlier models, points to a fast-changing and evolving AI landscape. ChatGPT 5 is not just the next step towards AGI but also a significant marker in the pursuit of developing AI systems capable of thinking, learning, and interacting in ways once considered purely fictional. As OpenAI keeps refining its models, the global audience watches eagerly, prepared to welcome the advancements ChatGPT 5 is set to offer.
Original post:
Tech companies want to build artificial general intelligence. But who decides when AGI is attained? – The Caledonian-Record
There's a race underway to build artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans, or at least can do many things as well as people can.
Achieving such a concept, commonly referred to as AGI, is the driving mission of ChatGPT-maker OpenAI and a priority for the elite research wings of tech giants Amazon, Google, Meta and Microsoft.
Javascript is required for you to be able to read premium content. Please enable it in your browser settings.
It's also a cause for concern.
But what exactly is AGI and how will we know when it's been attained? Once on the fringe of computer science, it's now a buzzword that's being constantly redefined by those trying to make it happen.
What is AGI?
Not to be confused with the similar-sounding generative AI, which describes the AI systems behind the crop of tools that generate new documents, images and sounds, artificial general intelligence is a more nebulous idea.
It's not a technical term but "a serious, though ill-defined, concept," said Geoffrey Hinton, a pioneering AI scientist who's been dubbed a Godfather of AI.
"I don't think there is agreement on what the term means," Hinton said by email this week. "I use it to mean AI that is at least as good as humans at nearly all of the cognitive things that humans do."
Hinton prefers a different term, superintelligence, for AGIs that are better than humans.
A small group of early proponents of the term AGI were looking to evoke how mid-20th century computer scientists envisioned an intelligent machine. That was before AI research branched into subfields that advanced specialized and commercially viable versions of the technology, from face recognition to speech-recognizing voice assistants like Siri and Alexa.
Mainstream AI research "turned away from the original vision of artificial intelligence, which at the beginning was pretty ambitious," said Pei Wang, a professor who teaches an AGI course at Temple University and helped organize the first AGI conference in 2008.
"Putting the G in AGI was a signal to those who still want to do the big thing. We don't want to build tools. We want to build a thinking machine," Wang said.
Are we at AGI yet?
Without a clear definition, it's hard to know when a company or group of researchers will have achieved artificial general intelligence, or if they already have.
"Twenty years ago, I think people would have happily agreed that systems with the ability of GPT-4 or (Google's) Gemini had achieved general intelligence comparable to that of humans," Hinton said. "Being able to answer more or less any question in a sensible way would have passed the test. But now that AI can do that, people want to change the test."
Improvements in autoregressive AI techniques that predict the most plausible next word in a sequence, combined with massive computing power to train those systems on troves of data, have led to impressive chatbots, but they're still not quite the AGI that many people had in mind. Getting to AGI requires technology that can perform just as well as humans in a wide variety of tasks.
Some researchers would like to find consensus on how to measure it. It's one of the topics of an upcoming AGI workshop.
"This really needs a community's effort and attention so that mutually we can agree on some sort of classifications of AGI," said one of the workshop's organizers, a University of Illinois researcher.
Others plan to figure it out on their own. San Francisco company OpenAI has given its nonprofit board of directors, whose members include a former U.S. Treasury secretary, the responsibility of deciding when its AI systems have reached the point at which they outperform humans at most economically valuable work.
"The board determines when we've attained AGI," says OpenAI's own explanation of its governance structure. Such an achievement would cut off the company's biggest partner, Microsoft, from the rights to commercialize such a system, since the terms of their agreements only apply to pre-AGI technology.
Is AGI dangerous?
Hinton made global headlines last year when he quit Google and sounded a warning about AI's existential dangers. A new Science study published Thursday could reinforce those concerns.
Its lead author is Michael Cohen, a University of California, Berkeley, researcher who studies the "expected behavior of generally intelligent artificial agents," particularly those competent enough to "present a real threat to us by out-planning us."
Cohen made clear in an interview Thursday that such long-term AI planning agents don't yet exist. But "they have the potential" to get more advanced as tech companies seek to combine today's chatbot technology with more deliberate planning skills using a technique known as reinforcement learning.
"Giving an advanced AI system the objective to maximize its reward and, at some point, withholding reward from it, strongly incentivizes the AI system to take humans out of the loop, if it has the opportunity," according to the paper, whose coauthors include prominent AI scientists such as Yoshua Bengio.
"I hope we've made the case that people in government (need) to start thinking seriously about exactly what regulations we need to address this problem," Cohen said. "For now, governments only know what these companies decide to tell them."
Too legit to quit AGI?
With so much money riding on the promise of AI advances, it's no surprise that AGI is also becoming a corporate buzzword that sometimes attracts a quasi-religious fervor.
It's divided some of the tech world between those who argue it should be developed slowly and carefully and others, including venture capitalists and rapper MC Hammer, who've declared themselves part of an "accelerationist" camp.
The London-based startup DeepMind, founded in 2010 and now part of Google, was one of the first companies to explicitly set out to develop AGI. OpenAI did the same in 2015 with a safety-focused pledge.
But now it might seem that everyone else is jumping on the bandwagon. Google co-founder Sergey Brin was recently seen hanging out at a California venue called the AGI House. And less than three years after Facebook changed its name to Meta, CEO Mark Zuckerberg said his company's long-term goal was building "full general intelligence" that would require advances in reasoning, planning, coding and other cognitive abilities. While Zuckerberg's company has long had researchers focused on those subjects, his attention marked a change in tone.
At Amazon, one sign of the new messaging was when the head scientist for the voice assistant Alexa switched job titles to become head scientist for AGI.
While not as tangible to Wall Street as generative AI, broadcasting AGI ambitions may help recruit AI talent who have a choice in where they want to work.
In deciding between an old-school AI institute or one whose goal is to build AGI and has sufficient resources to do so, many would choose the latter, said You, the University of Illinois researcher.
View original post here:
What is AGI and how is it different from AI? – ReadWrite
As artificial intelligence continues to develop at a rapid pace, it's easy to wonder where this new age is headed.
The likes of ChatGPT, Midjourney and Sora are transforming the way we work through chatbots, text-to-image and text-to-video generators, while robots and self-driving cars are helping us perform day-to-day tasks. The latter isn't as mainstream as the former, but it's only a matter of time.
But where's the limit? Are we headed towards a dystopian world run by computers and robots? Artificial general intelligence (AGI) is essentially the next step, but as things stand, we're still a little way off from that becoming a reality.
AGI is considered to be strong AI, whereas narrow AI is what we already know: generative chatbots, image generators and coffee-making robots.
Strong AI refers to software with cognitive abilities equal to, or better than, a human being's, meaning it can solve problems, achieve goals, think and learn on its own, without any human input or assistance. Narrow AI can solve one problem or complete one task at a time, without any sentience or consciousness.
This level of AI is only seen in the movies at the moment, but we're likely headed towards it in the future. When that might be remains open to debate: some experts claim it's centuries away, while others believe it could be only years. Ray Kurzweil's book The Singularity Is Near predicts it arriving between 2015 and 2045, a window the AGI research community considered plausible back in 2007, although it's a pretty broad timeline.
Given how quickly narrow AI is developing, it's easy to imagine a form of AGI in society within the next 20 years.
Despite not yet existing, AGI could theoretically perform in ways indistinguishable from humans and would likely exceed human capacities thanks to fast access to huge data sets. While it might seem like you're engaging with a human when using something like ChatGPT, an AGI system would theoretically be able to hold that kind of engagement without any human intervention behind it.
An AGI system's capabilities would include common sense, background knowledge and abstract thinking, as well as practical capabilities such as creativity, fine motor skills, natural language understanding (NLU), navigation and sensory perception.
A combination of all of those abilities will essentially give AGI systems high-level capabilities, such as being able to understand symbol systems, create fixed structures for all tasks, use different kinds of knowledge, engage in metacognition, handle several types of learning algorithms and understand belief systems.
That means AGI systems will be ultra-intelligent and may also possess additional traits, such as imagination and autonomy, while physical traits like the ability to sense, detect and act could also be present.
We know that narrow AI systems are widely used in public today and are fast becoming part of everyday life, but they still need humans to function at every level. They rely on machine learning and natural language processing, and then on human-delivered prompts, to execute a task. A narrow AI system performs the task based on what it has previously learned, so it can essentially only be as intelligent as the information humans give it.
However, the results we see from narrow AI systems are not beyond what is possible for the human brain. Narrow AI is there to assist us, not to replace us or be more intelligent than we are.
Theoretically, AGI should be able to undertake any task and display a high level of intelligence without human intervention, performing better than both humans and narrow AI at almost every level.
Stephen Hawking warned of the dangers of AI in 2014, when he told the BBC: "The development of full artificial intelligence could spell the end of the human race.
"It would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded."
Kurzweil followed up the prediction in The Singularity Is Near by saying, in 2017, that computers would achieve human levels of intelligence by 2029. He predicted that AI itself would improve exponentially, eventually operating at levels beyond human comprehension and control.
He then went on to say: "I have set the date 2045 for the Singularity, which is when we will multiply our effective intelligence a billionfold by merging with the intelligence we have created."
These discussions and predictions have, of course, sparked debates about the responsible use of AGI. The AI we know today is broadly viewed as being used responsibly, and there are calls to regulate many of the AI companies to ensure these systems do not get out of hand. We've already seen how controversial and unethical the use of AI can be when it is in the wrong hands. It's unsurprising, then, that the same debate is happening around AGI.
In reality, society must approach the development of AGI with extreme caution. The ethical problems surrounding AI now, such as the difficulty of controlling biases within its knowledge base, point to similar issues with AGI, but at a potentially more harmful level.
If an AGI system can essentially think for itself and no longer needs to be guided by humans, there is a danger that Stephen Hawking's vision might become a reality.
Featured Image: Ideogram
Here is the original post: