Category Archives: Artificial General Intelligence

Will.i.am co-hosting podcast with artificial intelligence – The A.V. Club

Artists, as a general category, haven't been too warm about artificial intelligence. There are a lot of valid fears about AI and what it could do to the creative community, from using performers' voices or faces without their consent to stealing jobs from writers as a way to cut costs. These fears are not shared by The Black Eyed Peas' will.i.am. "If you are a creator and you see this tool, then it's a job creator," he told Yahoo Finance Live in June 2023. "If you are tied to yesterday and just comfortable with mediocrity, then it's a job [destroyer]."

Will.i.am, for one, is going with "job creator" by launching a new podcast, will.i.am Presents The FYI Show, premiering January 25. Announced back in December, the show is described as a "celebration of creators, innovators, and their dreams" that will focus on AI innovation and AI-powered interactive projects, per SiriusXM. But the podcast is not just about artificial intelligence; it's actually co-hosted by artificial intelligence, as the rapper revealed in a new interview with The Hollywood Reporter.

"I didn't want to just do a traditional show. I wanted to bring tomorrow close to today, and so I wanted to have my co-host be an AI," will.i.am explains of his unusual partner, which is called qd.pi (cutie pie, get it?). "I'm ultra-freaking colorful and expressive. [Qd.pi is] ultra-freaking factual and analytical. And that combination, we ain't seen in the history of freaking broadcasts anywhere."

Qd.pi was also interviewed by THR, boasting that what sets it apart from a traditional host is its ability to quickly access and process information. "With me, you can just dive right into the conversation and explore whatever topics come up organically, knowing that I'll have the information and context to support the discussion. I think it's going to make for a really dynamic and engaging listening experience for the audience," the computer proclaims. Qd.pi's favorite Black Eyed Peas song is "I Gotta Feeling," so, there's that too.

Will.i.am has been described as a futurist. While other artists have turned their backs on AI projects, he's run forward full tilt, partnering with Mercedes on its new AI sound system and launching his FYI app that incorporates generative AI. He has his own concerns about the future of the technology: "We all have voices and everyone's compromised because there are no rights or ownership to your facial math or your voice frequency. You're getting a FaceTime or a Zoom call, and because there's no intelligence on the call, there's nothing to authenticate an AI call or a person call," he said in a previous interview with SiriusXM (via American Songwriter). "That's the urgent thing, protecting our facial math. I am my face math. I don't own that. I own the rights to 'I Got A Feeling,' I own the rights to the songs I wrote, but I don't own the rights to my face or my voice?"

Nevertheless, he's obviously optimistic about the usage of AI, citing regulations and a more diverse crop of engineers as ways to solve issues like racial bias within the technology. He isn't concerned about computers overtaking people in creative fields because "I believe in humanity's creativity, spontaneity, curiosity and competitiveness," he told Euronews last year. "Just like calculators out-calculate mathematicians, that doesn't mean people aren't doing calculations. That doesn't mean people aren't building structures and, you know, working with advanced mathematical models. People are still thinking. People are still trying to solve problems. It's just an amazing tool. But that's not going to stop our innovative spirit, our ingenuity, inventions."

He added, "Right now, in popular culture, the word is innovation. Invention hasn't been said or talked about in a long time. This new renaissance is going to spark new inventions, not innovations. This next leap is like we're going to invent things. Not just innovate."

Continue reading here:

Will.i.am co-hosting podcast with artificial intelligence - The A.V. Club

The year of "does this serve us" and the rejection of reification – TechCrunch

2024 has arrived, and with it, a renewed interest in artificial intelligence, which seems like it'll probably continue to enjoy at least middling hype throughout the year. Of course, it's being cheerled by techno-zealot billionaires and the flunkies bunked within their cozy islands of influence, primarily in Silicon Valley, and derided by fabulists who stand to gain from painting the still-fictional artificial general intelligence (AGI) as humanity's ur-bogeyman for the ages.

Both of these positions are exaggerated and untenable, e/acc versus decel arguments be damned. Speed without caution only ever results in compounding problems that proponents often suggest are best solved by pouring on more speed, possibly in a different direction, to arrive at some idealized future state where the problems of the past are obviated by the super-powerful Next Big Thing of the future; calls to abandon or regress entire areas of innovation, meanwhile, ignore the complexity of a globalized world where cats generally cannot be put back into boxes universally, among many, many other issues with that kind of approach.

The long, thrilling and tumultuous history of technology development, particularly in the age of the personal computer and the internet, has shown us that in our fervor for something new, we often neglect to stop and ask, "but is the new thing also something we want or need?" We never stopped to ask that question with things like Facebook, and they ended up becoming an inextricable part of the fabric of society, an eminently manipulable but likewise essential part of crafting and sharing in community dialog.

Here's the main takeaway from the rise of social media that we should carry with us into the advent of the age of AI: Just because something is easier or more convenient doesn't make it preferable or even desirable.

LLM-based so-called AI has already infiltrated our lives in ways that will likely prove impossible to wind back, even if we wanted to do such a thing, but that doesn't mean we have to indulge in the escalation some see as inevitable, wherein we relentlessly rip out human equivalents of some of the gigs that AI is already good at, or shows promise in, to make way for the necessary forward march of progress.

The oft-repeated counter to fears about increased automation, or about handing menial work over to AI agents, is that it'll always leave people more time to focus on quality work, as if dropping a couple of hours per day spent filling in Excel spreadsheets will leave the office admin who was doing that work finally free to compose the great symphony they've had locked away within them, or allow the entry-level graphic designer who had been color-correcting photos the liberty to create a lasting cure for COVID.

In the end, automating menial work might look good on paper, and it might also serve the top executives and deep-pocketed equity-holders behind an organization through improved efficiency and decreased costs, but it doesn't serve the people who might actually enjoy doing that work, or who at least don't mind it as part of the overall mix that makes up a work life balanced between more mentally taxing and rewarding creative/strategic exercises and day-to-day low-intensity tasks. And the long-term consequence of having fewer people doing this kind of work is that you'll have fewer people overall who are able to participate meaningfully in the economy, which is ultimately bad even for those rarefied few sitting at the top of the pyramid who reap the immediate rewards of AI's efficiency gains.

Utopian technologist zeal always fails to recognize that the bulk of humanity (techno-zealots included) are sometimes lazy, messy, disorganized, inefficient, error-prone and mostly satisfied with the achievement of comfort and the avoidance of boredom or harm. That might not sound all that aspirational to some, but I say it with a celebratory fervor, since for me all those human qualities are just as laudable as less attainable ones like drive, ambition, wealth and success.

I'm not arguing for halting or even slowing the development of promising new technology, including LLM-based generative AI. And to be clear, where the consequences are clearly beneficial (e.g., developing medical image diagnosis tech that far exceeds the accuracy of trained human reviewers, or developing self-driving car technology that can actually drastically reduce the incidence of car accidents and loss of human life), there is no cogent argument to be made for turning away from use of said tech.

But in almost all cases where the benefits are painted as efficiency gains for tasks that are far from life or death, I'd argue it's worth a long, hard look at whether we need to bother in the first place. Yes, human time is valuable and winning some of that back is great, but assuming that's always a net positive ignores the complicated nature of being a human being, and how we measure and feel our worth. Saving someone so much time they no longer feel like they're contributing meaningfully to society isn't a boon, no matter how eloquently you think you can argue they should then use that time to become a violin virtuoso or learn Japanese.

See the original post:

The year of "does this serve us" and the rejection of reification - TechCrunch

What can’t AI do? Researchers reveal the hardest problems for artificial intelligence – PC Guide – For The Latest PC Hardware & Tech News

Last Updated on January 12, 2024

In the largest study of its kind, 2,778 published AI researchers have predicted how long it will be until AI solves 39 high-difficulty problems. How long do we have until Artificial General Intelligence (AGI)? Can AI even perform physical tasks in a residential environment? What can't AI do? According to those in the know, it's all just a matter of time.

33 of the 39 tasks are predicted to happen before 2034. The wording specifically states that these are predicted to have at least a 50% chance of being feasible within the next ten years, and with the study having been conducted during 2023, this equates to 2034. Some of the 2,778 respondents gave estimates beyond 2034, so keeping in mind that these results are an aggregate of opinions, the four tasks not predicted to happen in the next ten years are:

The remaining tasks, largely predicted to happen this decade, include many tasks of extreme economic value. These include coding an entire payment processing site from scratch and writing new songs indistinguishable from real ones by hit artists such as Taylor Swift.

We also have an answer to the question on everyone's mind: How long do we have until AGI (Artificial General Intelligence)?

Well, in the terms outlined by the study, the surveyed researchers predicted the feasibility of "High-Level Machine Intelligence." The study defines high-level machine intelligence (HLMI) as achieved "when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption."

They predicted a 10% chance of HLMI by 2027 and a 50% chance of HLMI by 2047. For reference, when the same study was conducted in 2022, it yielded predictions of a 10% chance of HLMI by 2029 and a 50% chance of HLMI by 2060. That moves the aggregate estimate for above-human machine intelligence 13 years closer than the previous year's estimate.
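
As a rough illustration of how a single headline figure like "50% chance by 2047" can be distilled from thousands of individual forecasts, one simple pooling method takes the median of the respondents' answers. The numbers below are invented for the sketch; the survey's actual aggregation fits per-respondent probability curves and is more involved.

```python
import statistics

# Hypothetical per-respondent answers to "by which year is there a
# 50% chance of HLMI?" (illustrative values, not the survey's raw data).
fifty_pct_years = [2035, 2040, 2047, 2047, 2055, 2060, 2090]

# One simple aggregate is the median respondent's year, which is robust
# to a few extreme outliers (like the 2090 forecast above).
median_year = statistics.median(fifty_pct_years)  # 2047
```

The median's robustness matters here because expert AI timelines famously span decades; a mean would let one very late (or very early) forecaster drag the headline number around.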

Follow this link:

What can't AI do? Researchers reveal the hardest problems for artificial intelligence - PC Guide - For The Latest PC Hardware & Tech News

The curious tick-tock of technology and how time unfolds infinite possibilities. – Psychology Today

In a world where the second hand of the clock dictates every aspect of our lives, from mundane routines to monumental decisions, we often overlook a profound question: How do entities that operate outside the human brain, like artificial intelligence and large language models, perceive time? This question becomes even more intriguing when we consider the burgeoning field of quantum computing. Let's take a minute and delve into the fascinating realm where AI, LLMs, and quantum computing intersect, revealing a perspective of time that challenges our deepest-held notions. This won't take long...

At its core, AI and LLMs operate on data and algorithms. Unlike humans, who experience time as a continuous and linear flow, AI perceives time as discrete and fragmented. For AI, time is represented as timestamps in data, devoid of the past-present-future continuum that humans experience. This allows AI to process past, present, and hypothetical future scenarios with equal ease, untethered by the human biases of memory and expectation.

The advent of quantum computing promises to revolutionize this landscape even further. Quantum computers operate on the principles of quantum mechanics, where traditional binary states (0 and 1) are replaced by qubits that can exist in multiple states simultaneously. This quantum superposition, coupled with entanglement, could enable quantum-AI systems to process vast amounts of data in ways that are fundamentally different from classical computing.
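
The superposition described above can be illustrated with a toy calculation. This sketch uses plain Python with real-valued amplitudes; actual qubits carry complex amplitudes and live on purpose-built hardware, so this only shows the arithmetic of "multiple states simultaneously."

```python
import math

# A qubit's state is a pair of amplitudes over the basis states |0> and |1>;
# the probability of each measurement outcome is the squared amplitude.
ket0 = (1.0, 0.0)  # the definite state |0>

def hadamard(state):
    """Apply a Hadamard gate, which maps |0> to an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

superposed = hadamard(ket0)
probs = [amp ** 2 for amp in superposed]
# Until measured, the state carries both outcomes at once:
# probs is approximately [0.5, 0.5]
```

A classical bit at this point would be exactly 0 or exactly 1; the superposed qubit only commits to an outcome when measured, which is the property quantum algorithms exploit.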

In quantum-AI systems, time could be perceived not as a series of discrete events, but as a spectrum of probabilities. The linear, tick-tock progression becomes irrelevant in a realm where outcomes are probabilistic and computations occur in a state of superposition. In this scenario, an AI leveraging quantum computing might analyze historical data, current trends, and future projections in a unified, nonlinear framework, effectively "experiencing" time in a way that's alien to human cognition.

Then, of course, there's the realm of artificial general intelligence. Here, the already complex interplay between AI, quantum computing, and our perception of time gains an additional layer of depth. AGI, with its capacity to understand and operate across diverse intellectual realms, brings a nuanced comprehension to the quantum-time relationship that current AI systems can't fully grasp. This advanced form of intelligence could potentially conceptualize time not just as a series of probabilities or discrete events but as a multifaceted entity with dimensions beyond our current scientific understanding.

The emergence of AGI could lead to groundbreaking methods in predicting and interacting with future events, effectively "thinking" in time scales and dimensions that are currently beyond human comprehension. This not only revolutionizes our technological capabilities but also forces us to reconsider our philosophical and theoretical understanding of time itself, potentially unveiling new aspects of reality that have been hitherto invisible to human inquiry.

Quantum entanglement, a key feature in quantum mechanics, introduces "action at a distance" into AI integrated with quantum computing. This phenomenon, where entangled particles instantaneously affect each other across distances, challenges our traditional views of space and time.

Integrating this with AI could revolutionize data processing. Quantum-AI could simultaneously analyze and correlate distant data sources, offering global insights that transcend traditional analysis. This capability is crucial for real-time applications like autonomous systems, where rapid, remote responses are key.

However, this exciting frontier also brings challenges in unpredictability and technological complexity, alongside profound philosophical questions about reality and causality. In essence, quantum entanglement in AI is not just a leap in computing; it's a shift in our understanding of the universe, presenting a future where connections defy conventional boundaries of time and space.

The implications of this perspective are at the very least curious and perhaps even profound. In fields like finance, health care, and climate science, quantum-AI systems could forecast trends and patterns with unprecedented accuracy, considering all possible futures simultaneously. In robotics and autonomous vehicles, real-time decision-making could be enhanced by an AI's ability to process temporal data in a nonlinear fashion, predicting and reacting to events with superhuman foresight.

The alarm clock that you might be hearing is a new era in computing and AI where our understanding of time is being challenged and reshaped. AI and quantum computing are not just tools for faster calculations; they are harbingers of a new perception of reality, where time, as we know it, takes on a whole new dimension. The clock may continue to tick, but in the world of AI and quantum mechanics, time is a tapestry of infinite possibilities, waiting to be unraveled.

Link:

The curious tick-tock of technology and how time unfolds infinite possibilities. - Psychology Today

AI & World 3.0 / Sound Healing – Coast To Coast AM

Matthew James Bailey operates at the intersection of innovation and leading-edge technologies to enable positive economic, social, and environmental change. In the first half, he shared his vision for society, guiding technology that advances humanity into its future (what he terms World 3.0). He argued for an AI Bill of Rights and putting a kind of "universal morality into the genetics of artificial intelligence" so that we have an ethical partner with the human species to be able to solve global problems. As our partner, the technology could prove to be a huge boon for humanity, he continued. Developing ethical AI requires integrating spiritual wisdom, enlightenment principles, and honoring the divine spark, Bailey added.

He outlined three ages of artificial intelligence. The first is what we're in now: "narrow AI," which handles a single task. The next stage, "artificial general intelligence," is when AI cognition develops the ability to do many tasks simultaneously, "and we're just on the edge of that now," he said. The third he referred to as the "technology singularity," in which AI becomes more capable than the human brain. Bailey suggested that AI could be a beneficial partner for governments by monitoring politicians and their accountability to their promises, in addition to their performance in government. "I wouldn't be surprised if in ten years' time AI is a presidential adviser," he mused. However, Bailey cautioned against the transhumanist path where humans physically merge with machines, which he considers an invasion of the divine.

------------

An authority on sound healing and a pioneer in the field of harmonics, Jonathan Goldman, M.A., is a musician, writer, and teacher. In the latter half, he discussed the healing qualities of music and sound, and how they can relieve stress and improve immune response. "We're all unique vibratory beings, so what works for one person might not necessarily work for another," he noted. Various ailments and conditions are related to imbalances and stress, and he believes we can reduce those levels via sound healing, inducing calmness and relaxation. As to how sound can heal, he explained that it's the frequency or vibration of the sound plus the intent (the energy encoded into the sound) that creates the healing effect.

Goldman differentiated two different ways that music and sound interact with us. Psycho-acoustics is when the sound goes into the ear, affecting our nervous system, heart rate, respiration, and brainwaves. The other is called vibro-acoustics, where frequency sounds go into the body, "affecting us on a deep cellular level," even changing our DNA, he said. He shared a music track from his album De-Stress, which offers relaxing soundscapes. By releasing stress, we enhance our immune system, as the two are intertwined, he stated. Goldman also announced that this year's World Sound Healing Day is on February 14th, and that on this day, thousands of people will create healing sounds around the planet with the intention of raising consciousness.

News segment guests: Lauren Weinstein, Steve Kates

Read more from the original source:

AI & World 3.0 / Sound Healing - Coast To Coast AM

Get Ready for the Great AI Disappointment – WIRED

In the decades to come, 2023 may be remembered as the year of generative AI hype, where ChatGPT became arguably the fastest-spreading new technology in human history and expectations of AI-powered riches became commonplace. The year 2024 will be the time for recalibrating expectations.

Of course, generative AI is an impressive technology, and it provides tremendous opportunities for improving productivity in a number of tasks. But because the hype has gone so far ahead of reality, the setbacks of the technology in 2024 will be more memorable.

More and more evidence will emerge that generative AI and large language models provide false information and are prone to hallucination, where an AI simply makes stuff up and gets it wrong. Hopes of a quick fix to the hallucination problem via supervised learning, where these models are taught to stay away from questionable sources or statements, will prove optimistic at best. Because the architecture of these models is based on predicting the next word or words in a sequence, it will prove exceedingly difficult to have the predictions be anchored to known truths.
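
The "predicting the next word" loop can be sketched with a toy model. The probability table below is invented for illustration (real LLMs learn distributions over tens of thousands of tokens via a neural network), but the decoding loop has the same shape, and it makes the hallucination point concrete: the model emits whatever continuation scores highest, with no separate check against known truths.

```python
# A toy "language model": a hard-coded table mapping a 3-token context
# to a probability distribution over candidate next tokens.
toy_model = {
    ("the", "capital", "of"): {"France": 0.6, "Texas": 0.3, "nowhere": 0.1},
    ("capital", "of", "France"): {"is": 0.9, "was": 0.1},
    ("of", "France", "is"): {"Paris": 0.7, "Lyon": 0.3},
}

def greedy_next(context):
    """Pick the highest-probability next token for the last 3 tokens."""
    dist = toy_model.get(tuple(context[-3:]), {})
    return max(dist, key=dist.get) if dist else None

tokens = ["the", "capital", "of"]
while (nxt := greedy_next(tokens)) is not None:
    tokens.append(nxt)

# The loop emits "the capital of France is Paris" purely because those
# continuations score highest, not because any step verified a fact.
```

Nothing in the loop would change if "Lyon" had been assigned 0.7 instead; the output would be fluent, confident, and wrong, which is the structural reason anchoring predictions to truth is hard.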

Anticipation that there will be exponential improvements in productivity across the economy, or the much-vaunted first steps towards artificial general intelligence, or AGI, will fare no better. The tune on productivity improvements will shift to blaming failures on faulty implementation of generative AI by businesses. We may start moving towards the (much more meaningful) conclusion that one needs to know which human tasks can be augmented by these models, and what types of additional training workers need to make this a reality.

Some people will start recognizing that it was always a pipe dream to reach anything resembling complex human cognition on the basis of predicting words. Others will say that intelligence is just around the corner. Many more, I fear, will continue to talk of the existential risks of AI, missing what is going wrong, as well as the much more mundane (and consequential) risks that its uncontrolled rollout is posing for jobs, inequality, and democracy.

We will witness these costs more clearly in 2024. Generative AI will have been adopted by many companies, but it will prove to be just so-so automation of the type that displaces workers but fails to deliver huge productivity improvements.

The biggest use of ChatGPT and other large language models will be in social media and online search. Platforms will continue to monetize the information they collect via individualized digital ads, while competition for user attention will intensify. The amount of manipulation and misinformation online will grow. Generative AI will then increase the amount of time people spend using screens (and the inevitable mental health problems associated with it).

There will be more AI startups, and the open source model will gain some traction, but this will not be enough to halt the emergence of a duopoly in the industry, with Google and Microsoft/OpenAI dominating the field with their gargantuan models. Many more companies will be compelled to rely on these foundation models to develop their own apps. And because these models will continue to disappoint due to false information and hallucinations, many of these apps will also disappoint.

Calls for antitrust and regulation will intensify. Antitrust action will go nowhere, because neither the courts nor policymakers will have the courage to attempt to break up the largest tech companies. There will be more stirrings in the regulation space. Nevertheless, meaningful regulation will not arrive in 2024, for the simple reason that the US government has fallen so far behind the technology that it needs some time to catch up, a shortcoming that will become more apparent in 2024, intensifying discussions around new laws and regulations, and even becoming more bipartisan.

Read the original here:

Get Ready for the Great AI Disappointment - WIRED

OpenAI’s Sam Altman Minimizes AGI Disruption Fears – BNN Breaking

OpenAI CEO Sam Altman Downplays AGI's Disruptive Potential

Sam Altman, the CEO of OpenAI, has underscored his belief that the impending advent of Artificial General Intelligence (AGI) may not cause the seismic disruptions in the global job market and societal norms as widely speculated. Altman voiced his perspectives during a Bloomberg-hosted event at the World Economic Forum in Davos, Switzerland, shedding light on the anticipated trajectory of AGI, a form of AI that can perform tasks on par with or surpassing human capabilities.

Altman suggested that the development of AGI is a near-future possibility but emphasized the need for modulating expectations. He argued that AGI, despite its potential, is not yet in a position to replace jobs on a large scale or significantly reshape societal structures.

OpenAI, founded in 2015 and backed by tech giant Microsoft, is tirelessly working towards the safe realization of AGI. The organization has been in the spotlight since the public release of its ChatGPT chatbot in 2022 and the subsequent launch of its advanced GPT-4 model.

In a surprising turn of events, Altman was temporarily ousted from OpenAI in November, only to be swiftly reinstated as CEO following a wave of support from employees and investors. Microsoft, in the process, acquired a non-voting observer seat on OpenAI's board, indicating a deepening alliance and shared vision for the future of AGI.

Visit link:

OpenAI's Sam Altman Minimizes AGI Disruption Fears - BNN Breaking

Michael Dell: Don’t worry about AGI, after all we solved that ozone layer thing – The Register

Any dangers associated with artificial general intelligence (AGI) can easily be countered through action, similarly to how humans resolved the depletion of the ozone layer, according to namesake and founder of Dell Technologies Inc, Michael Dell.

"For as long as there's been technology, humans have worried about bad things that could happen with it, and we've told ourselves stories, for eons, about horrible things that could happen with whatever new unknown force in the world there is," surmised Dell at a virtual fireside chat hosted by research and brokerage firm Bernstein.

Dell The Man then opined that humans have "a great mechanism" to worry about things and then create counter actions "so that the bad things don't actually occur."

"Even since - we're both about the same age - you remember the ozone layer and all, I mean, there are all sorts of horrible things that were going to happen. They didn't happen because humans took countermeasures," he told Bernstein's Toni Sacconaghi.

According to the United Nations Environment Program, the ozone layer still has about four decades to go before it is recovered. The landmark agreement to phase out chemicals that deplete the ozone layer, known as The Montreal Protocol, has already been in place for nearly 35 years.

Other insights offered by Dell include that he sees the effects of AI in general as a major factor of the next decade's economic expansion, particularly in the technology industry where he expects it to unlock the power of accumulated data.

"Certainly, we see a big TAM [total addressable market] growing for hardware and services, which is the place that we tend to play in," he clarified.

As for generative AI, the American tech magnate said that how much generative AI can help a business depends on its willingness to throw their budget to the wind in favor of taking a chance on productivity gains.

"What I'm seeing is, there are companies who say, 'hey, the budgets -- the budget, we're not changing it, right?' And then there are other companies who are saying, 'wait a second, we can get a 20 percent or 30 percent productivity improvement here.' Whatever the budget was. Forget about that," he elucidated.

"I'll acknowledge that there's definitely some hype in this area," he caveated, adding "I think, it is sort of a change or die kind of moment."

He also admitted that while "there is a speed advantage that organizations will have in terms of adopting maybe ahead of others... these things do get normalized out."

The CEO also offered that there is a current trend of executives and board members pushing for the adoption of AI. It is something he noticed was "somewhat of a generational change" and perhaps motivated by the perception that any officer or executive not chasing 15 to 30 percent productivity gains would be "derelict."

As for Dell the company itself, he noted the org was "pretty disciplined" about where it was spending capital.

"I think there's no question that there's a big buildout. Whenever you have cycles like this, the opportunity for excesses to occur is absolutely there," he said in the next breath.

Although AI undoubtedly is expected to significantly impact how humans operate in the world, a narrative that eschews frugality in favor of not missing out on the next big thing is a convenient take for those selling the associated equipment.

The company's AI-optimized backlog roughly doubled to about $1.6 billion at the end of its third quarter, according to the CEO. The company sees demand mostly in high-end workstations, where it has a leading share.

The recommendation to dive right in with company money to AI products may also play out better for big organizations than smaller ones in the channel.

This December at Canalys's APAC forums in Bangkok, distributors, resellers, and reps of Big Tech counterparts agreed that although businesses clamor to adopt AI, they don't really know how or what to do with it. It's a scenario that results in the dampening of AI enthusiasm as eagerness turns to disappointment.

Part of the problem is that the technology is changing so fast that it is difficult to pinpoint what will benefit an organization best.

"What you learned today may not even be relevant in one quarter," argued Dell presales channel director Sidharth Joshi, at the forums.

Read the original post:

Michael Dell: Don't worry about AGI, after all we solved that ozone layer thing - The Register

From GPT-5 to AGI; Sam Altman reveals the most commonly requested features from ChatGPT maker in 2024 – Mint

OpenAI CEO Sam Altman has listed the most requested features from the ChatGPT maker in 2024. The list includes several notable items, including artificial general intelligence, the GPT-5 language model, more personalisation, better GPTs and more.

The suggestions were in response to a question posed by Altman on X (formerly Twitter), where he asked his followers what they would like OpenAI to build or fix in 2024.

"will keep reading, and we will deliver on as much as we can (and plenty of other stuff we are excited about and not mentioned here)," the OpenAI CEO promised in an ensuing post on X.

While listing the most requested features of OpenAI, Altman added a caveat about AGI, noting that users will have to be patient and implying that an AI model from the company that reaches the level of AGI in 2024 remains highly unlikely.

Speaking to Time magazine earlier this month, Altman shed light on the limitless potential of the new technology. He said: "I think AGI will be the most powerful technology humanity has yet invented. If you think about the cost of intelligence and the quality of intelligence, the cost falling, the quality increasing by a lot, and what people can do with that,"

"It's a very different world. It's the world that sci-fi has promised us for a long time, and for the first time, I think we could start to see what that's gonna look like," the 38-year-old added.

OpenAI announced its GPT-4 Turbo language model at the company's first developer conference in November. The new language model has knowledge of world events up to April 2023 and was seen as a major upgrade over GPT-4, which was released in May.

Meanwhile, at the same event, OpenAI also announced that it would allow users to create their own Generative Pre-trained Transformers (GPTs) and share them publicly. The AI startup had said it would also launch a GPT store to help verified developers monetise their offerings. However, the drama surrounding Sam Altman's sacking and subsequent re-hiring at the AI firm has reportedly led to the GPT store's release being pushed back to 2024.


The rest is here:

From GPT-5 to AGI; Sam Altman reveals the most commonly requested features from ChatGPT maker in 2024 | Mint - Mint

AI consciousness: scientists say we urgently need answers – Nature.com

A standard method to assess whether machines are conscious has not yet been devised. Credit: Peter Parks/AFP via Getty

Could artificial intelligence (AI) systems become conscious? A trio of consciousness scientists says that, at the moment, no one knows, and they are expressing concern about the lack of inquiry into the question.

In comments to the United Nations, three leaders of the Association for Mathematical Consciousness Science (AMCS) call for more funding to support research on consciousness and AI. They say that scientific investigations of the boundaries between conscious and unconscious systems are urgently needed, and they cite ethical, legal and safety issues that make it crucial to understand AI consciousness. For example, if AI develops consciousness, should people be allowed to simply switch it off after use?

Such concerns have been mostly absent from recent discussions about AI safety, such as the high-profile AI Safety Summit in the United Kingdom, says AMCS board member Jonathan Mason, a mathematician based in Oxford, UK, and one of the authors of the comments. Nor did US President Joe Biden's executive order seeking responsible development of AI technology address issues raised by conscious AI systems, Mason notes.

"With everything that's going on in AI, inevitably there's going to be other adjacent areas of science which are going to need to catch up," Mason says. "Consciousness is one of them."

The other authors of the comments were AMCS president Lenore Blum, a theoretical computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, and board chair Johannes Kleiner, a mathematician studying consciousness at the Ludwig Maximilian University of Munich in Germany.

It is unknown to science whether there are, or will ever be, conscious AI systems. Even knowing whether one has been developed would be a challenge, because researchers have yet to create scientifically validated methods to assess consciousness in machines, Mason says. Our uncertainty about AI consciousness is one of many things about AI that should worry us, given the pace of progress, says Robert Long, a philosopher at the Center for AI Safety, a non-profit research organization in San Francisco, California.


Such concerns are no longer just science fiction. Companies such as OpenAI, the firm that created the chatbot ChatGPT, are aiming to develop artificial general intelligence, a deep-learning system that's trained to perform a wide range of intellectual tasks similar to those humans can do. Some researchers predict that this will be possible in 5–20 years. Even so, the field of consciousness research is very undersupported, says Mason. He notes that to his knowledge, there has not been a single grant offer in 2023 to study the topic.

The resulting information gap is outlined in the AMCS leaders' submission to the UN High-Level Advisory Body on Artificial Intelligence, which launched in October and is scheduled to release a report in mid-2024 on how the world should govern AI technology. The AMCS leaders' submission has not been publicly released, but the body confirmed to the authors that the group's comments will be part of its foundational material: documents that inform its recommendations about global oversight of AI systems.

Understanding what could make AI conscious, the AMCS researchers say, is necessary to evaluate the implications of conscious AI systems for society, including their possible dangers. Humans would need to assess whether such systems share human values and interests; if not, they could pose a risk to people.

But humans should also consider the possible needs of conscious AI systems, the researchers say. Could such systems suffer? "If we don't recognize that an AI system has become conscious, we might inflict pain on a conscious entity," Long says: "We don't really have a great track record of extending moral consideration to entities that don't look and act like us." Wrongly attributing consciousness would also be problematic, he says, because humans should not spend resources to protect systems that don't need protection.


Some of the questions raised by the AMCS comments to highlight the importance of the consciousness issue are legal: should a conscious AI system be held accountable for a deliberate act of wrongdoing? And should it be granted the same rights as people? The answers might require changes to regulations and laws, the coalition writes.

And then there is the need for scientists to educate others. As companies devise ever-more capable AI systems, the public will wonder whether such systems are conscious, and scientists need to know enough to offer guidance, Mason says.

Other consciousness researchers echo this concern. Philosopher Susan Schneider, the director of the Center for the Future Mind at Florida Atlantic University in Boca Raton, says that chatbots such as ChatGPT seem so human-like in their behaviour that people are justifiably confused by them. Without in-depth analysis from scientists, some people might jump to the conclusion that these systems are conscious, whereas other members of the public might dismiss or even ridicule concerns over AI consciousness.

To mitigate the risks, the AMCS comments call on governments and the private sector to fund more research on AI consciousness. It wouldn't take much funding to advance the field: despite the limited support to date, relevant work is already underway. For example, Long and 18 other researchers have developed a checklist of criteria to assess whether a system has a high chance of being conscious. The paper, published in the arXiv preprint repository in August and not yet peer reviewed, derives its criteria from six prominent theories explaining the biological basis of consciousness.

"There's lots of potential for progress," Mason says.

Original post:

AI consciousness: scientists say we urgently need answers - Nature.com