Category Archives: Artificial General Intelligence

Nvidia CEO Jensen Huang says AI will be ‘fairly competitive’ with humans in 5 years – CNBC

Nvidia CEO Jensen Huang said Wednesday that artificial intelligence is gaining on humans.

Speaking at The New York Times' annual DealBook Summit, Huang said that if artificial general intelligence (AGI) is defined as a computer that can complete tests in a "fairly competitive" way to human intelligence, then "within the next five years, you're going to see, obviously, AIs that can achieve those tests."

Nvidia's business is booming because of the surge in demand for high-powered graphics processing units (GPUs) that are needed to train AI models and run hefty workloads across industries like automotive, architecture, electronics, engineering and scientific research as well as for OpenAI's ChatGPT.

Revenue in Nvidia's fiscal third quarter tripled, while net income climbed to $9.24 billion from $680 million a year earlier.

In the interview Wednesday, Huang recalled delivering "the world's first AI supercomputer" to OpenAI, after Elon Musk, who co-founded the AI project before departing it in 2018, heard Huang speak about the device at a conference.

"Elon saw it, and he goes, 'I want one of those' he told me about OpenAI," Huang said. "I delivered the world's first AI supercomputer to OpenAI on that day."

Regarding the recent chaos surrounding OpenAI, its board structure, and the ousting and subsequent reinstatement of CEO Sam Altman, Huang said he hoped things were calming down.

"I'm happy that they're settled, and I hope they're settled it's a really great team," Huang said. "It also brings to mind the importance of corporate governance. Nvidia is here 30 years after our founding, we've gone through a lot of adversity. If we didn't set up our company properly, who knows what would have been."

Huang predicted that competition in the AI space will lead to the emergence of off-the-shelf AI tools that companies in different industries will tune according to their needs, from chip design and software creation to drug discovery and radiology.

Huang was asked onstage to rank the success of various companies in the AI market.

"I'm not going to rank my friends," he said. "I'll admit it, I want to, but I'm not going to do it."

One reason the tech industry is still years away from AGI, Huang said, is that although machine learning is currently skilled at tasks like recognition and perception, it can't yet perform multistep reasoning, which is a top priority for companies and researchers.

"Everybody's working on it," Huang said.

And the technology is moving forward very quickly.

"There's no question that the rate of progress is high," Huang said. "What we realize today is that of course, what we can do today with these models and intelligence are related, but not the same."


Originally posted here:

Nvidia CEO Jensen Huang says AI will be 'fairly competitive' with humans in 5 years - CNBC

Navigating The AI Landscape: Part 1 – Forbes

Photo caption: The Pilot ACE, completed in 1950, is one of Britain's earliest stored-program computers and the oldest complete general-purpose electronic computer in Britain. Designed and built at the National Physical Laboratory, Middlesex, in 1949-1950, it was based on plans for a larger computer (the ACE) designed by the mathematician Alan Turing (1912-1954) at NPL between 1945 and 1947. Turing had previously worked on the Colossus codebreaking computer at Bletchley Park during World War II. Pilot ACE was estimated to have cost £50,000 to design and build, but by 1954 it had earned over £240,000 from advanced scientific and engineering work in fields including crystallography, aeronautics, and computing bomb trajectories. (Photo: SSPL via Getty Images)

For the past year, I have been working on a technology company focused on the AI (artificial intelligence) space. The only constant about AI is its rapid pace of change. ChatGPT, which was most consumers' introduction to AI, just turned one year old, and since then the feverish pace at which AI has moved, board drama and all, has been incredible.

But what about the basics? What should you know as a businessperson or a casual follower of AI? I've compiled a primer aimed at guiding you back to the foundational elements of AI, so you can be conversant and knowledgeable on the fundamentals. In future parts of this series, I'll be addressing practical use cases of AI and companies you should follow.

A previous Forbes contributor put a similar guide together 6 years ago. However, considering the dramatic advancements since then, an updated version seemed not just appropriate, but necessary.

Glossary Of Terms

I want to start with a key set of terms that you should know in order to understand what artificial intelligence is, including the term AI itself. Of course, many people define these terms differently, so I recommend you do your own research and deeper dives on the subject. But to start with, I recommend you know the following:

What Are The Core Technologies Powering AI?

So you want to start an AI company, and don't know the different areas that need support? Or are you seeking investment ideas for potential companies to target? There are different areas of AI that present opportunities for investment, pathways for employment, or avenues for further education and understanding.

Finally, large language models (LLMs) like GPT-4 have emerged as a key component powering conversational AI. LLMs provide the underlying language capabilities leveraged by chatbots and voice assistants.

Further Reading

If you really want to get into the guts of how AI functions, consider exploring these detailed technical papers. These selections not only serve as excellent entry points but are also widely recognized as pivotal contributions to the field. However, bear in mind, this list is not exhaustive:

For a frequently updated, high-level perspective on AI, I recommend Rowan Cheung's newsletter, The Rundown.

I will be adding to this series frequently, so please stay tuned.

I am the Founder of Hamlet - a technology company focused on using artificial intelligence to summarize public information. Hamlet's goal is to make local government easy to understand.

Prior to Hamlet, I co-founded Scripted.com - a marketplace for businesses to hire freelance writers. Scripted was backed by Redpoint Ventures and Crosslink Capital (it raised ~$20M of capital and was eventually sold). I ran a local publication in San Francisco called The Bold Italic, which I sold to Medium in 2019.

I've been a marketing executive at Metromile, where I was CMO, and GoodRx, where I was VP of Marketing. I ran one of the top 200 podcasts in the US (called This Is Your Life in Silicon Valley), and my writing has generated tens of millions of pageviews. I live in the Bay Area, where I mostly grew up, but went to high school in Cleveland (diehard Cleveland sports fan).

Read this article:

Navigating The AI Landscape: Part 1 - Forbes

Nvidia CEO Jensen Huang says artificial general intelligence will be achieved in five years – Business Insider India

Jensen Huang, the CEO of Nvidia, one of the companies fueling the AI revolution, predicts that we may see artificial general intelligence, or AGI, within the next five years.

During the 2023 New York Times DealBook Summit, the outlet's Andrew Ross Sorkin asked Huang if he expected to see AGI in the next 10 years.

"By depending on how you define it, I think the answer is yes," Huang replied.

At the summit, Huang defined AGI as a piece of software or a computer that can complete tests which reflect basic intelligence that's "fairly competitive" to that of a normal human.

"I would say that within the next five years, you're gonna see, obviously, AIs that can achieve those tests," Huang said.

While the CEO didn't specify what exactly he thinks AGI would look like, Ross Sorkin asked if AGI would refer to AI that can design the chips Nvidia is currently making, to which Huang agreed.

"Will you need to have the same staff that designs them?" Sorkin asked as a follow-up, referring to the development of Nvidia's chips.

"In fact, none of our chips are possible today without AI," Huang said.

He specified that the H100 chips he said Nvidia is shipping today were designed with help from a number of AIs.

"Software can't be written without AI, chips can't be designed without AI, nothing's possible," he concluded on the point of AI's potential.

Even though Huang said that AI is developing faster than he expected, he said the technology hasn't shown signs it can exhibit or surpass complex human intelligence just yet.

"There's no question that the rate of progress is high," he said. "But there's a whole bunch of things that we can't do yet."

"This multi-step reasoning that humans are very good at, AI can't do," he said.

The CEO's thoughts on AGI come as some business leaders sound the alarm about what they personally consider to be AGI.

Ilya Sutskever, cofounder of OpenAI, the company behind ChatGPT, said that AI in its most advanced form will create new problems such as a surge in fake news and cyberattacks, automated AI weapons, and even "infinitely stable dictatorships."

Ian Hogarth, who has invested in more than 50 AI companies, said that a future "God-like AI" would lead to the "obsolescence or destruction of the human race" if the rapid development of the technology isn't regulated.

Huang isn't the only tech leader who believes that AGI will be achieved in the near future.

In February, ex-Meta executive John Carmack said that AGI will be achieved by the 2030s and be worth trillions of dollars.

A few months later, Demis Hassabis, CEO and cofounder of DeepMind, Google's AI division, predicted that AI that is as powerful as the human brain would arrive within the next few years.

Nvidia didn't immediately respond to Business Insider's request for comment.

View original post here:

Nvidia CEO Jensen Huang says artificial general intelligence will be achieved in five years - Business Insider India

Meta’s AI chief doesn’t think AI super intelligence is coming anytime soon, and is skeptical on quantum computing – OODA Loop

Meta's chief scientist and deep learning pioneer Yann LeCun said he believes that current AI systems are decades away from reaching some semblance of sentience, equipped with common sense that can push their abilities beyond merely summarizing mountains of text in creative ways. His point of view stands in contrast to that of Nvidia CEO Jensen Huang, who recently said AI will be "fairly competitive" with humans in less than five years, besting people at a multitude of mentally intensive tasks.

"I know Jensen," LeCun said at a recent event highlighting the Facebook parent company's 10-year anniversary of its Fundamental AI Research team. LeCun said the Nvidia CEO has much to gain from the AI craze. "There is an AI war, and he's supplying the weapons."

"[If] you think AGI is in, the more GPUs you have to buy," LeCun said of technologists attempting to develop artificial general intelligence, the kind of AI on par with human-level intelligence. As long as researchers at firms such as OpenAI continue their pursuit of AGI, they will need more of Nvidia's computer chips.

Society is more likely to get cat-level or dog-level AI years before human-level AI, LeCun said. And the technology industry's current focus on language models and text data will not be enough to create the kinds of advanced human-like AI systems that researchers have been dreaming about for decades.

"Text is a very poor source of information," LeCun said, explaining that it would likely take 20,000 years for a human to read the amount of text that has been used to train modern language models. "Train a system on the equivalent of 20,000 years of reading material, and they still don't understand that if A is the same as B, then B is the same as A."

"There's a lot of really basic things about the world that they just don't get through this kind of training," LeCun said.

Full opinion: Meta's AI chief doesn't think AI super intelligence is coming anytime soon, and is skeptical on quantum computing.

More here:

Meta's AI chief doesn't think AI super intelligence is coming anytime soon, and is skeptical on quantum computing - OODA Loop

A Different AI Scenario: AI and Justice in a Brave New World Part 1 – DataScienceCentral.com – Data Science Central

The recent upheavals at OpenAI and OpenAI's Chief Scientist's apprehensions regarding the safety of AI have ignited a fresh wave of concerns and fears about the march towards Artificial General Intelligence (AGI) and Super Intelligence.

AI safety concerns the development of AI systems that are aligned with human values and do not cause harm to humans. Some of the AI safety concerns that Ilya Sutskever, OpenAI's Chief Scientist and one of the leading AI scientists behind ChatGPT, has expressed are:

Sutskever reportedly led the board coup at OpenAI that ousted CEO Sam Altman (Altman has since been reinstated at OpenAI) over these AI safety concerns, arguing that Altman was pushing for commercialization and growth too fast and too far without adequate attention to the safety and social impact of OpenAI's AI technologies.

The agitation about AI becoming misaligned with human goals, or developing its own goals that conflict with ours, concerns an overwhelmingly essential and critical topic. If you want a refresher on how AI can go wrong, watch the movie Eagle Eye. In the movie, an AI model (ARIIA) was designed to protect American lives, but that desired outcome wasn't counter-balanced against other desired outcomes to prevent unintended consequences.

With these concerns about AI models going rogue in mind, I'm going to present a multi-part series on what would be required to create a counter-scenario, "AI and Justice in a Brave New World," that not only addresses the AI concerns and fears but establishes a more impartial and transparent government, legal, and judicial system that promotes a fair playing field in which everyone can participate (have a voice), benefit, and prosper.

The Challenge. In this AI scenario, elected officials would still make society's laws and regulations. Unfortunately, these laws and regulations are not enforced fairly, consistently, or unbiasedly in today's legal and judicial systems. Some people and organizations use their money and influence to bend the laws and regulations in their favor, while others face discrimination and injustice because of their identity, background, or position in society. This creates a system of inequality and distrust that leads to social disenfranchisement and political extremism. We need to ensure that the laws and regulations are enforced with fairness, equity, consistency, and transparency for everyone.

The Solution. To address the unfair, inconsistent, and biased enforcement issue, we could build AI models that would be responsible for enforcing the laws and regulations legislated by elected officials. With AI in charge of enforcement, there will be no room for prejudice or biased decision-making when enforcing these laws and regulations. Instead, AI systems would ensure that the laws and regulations are applied equitably, fairly, and transparently to everyone. Finally, justice would truly be blind.

This requires that our legislators define the laws and regulations and collaborate across a diverse set of stakeholders to define the desired outcomes, the importance or value of these desired outcomes, and the variables and metrics against which adherence to those outcomes could be measured. This would necessitate substantial forethought in defining the laws and regulations and identifying the potential unintended consequences of these laws and regulations to ensure that the variables and metrics to flag, avoid, or mitigate those unintended consequences are clearly articulated. Yes, our legislators would need to think more carefully and holistically about including variables and metrics across a more robust set of measurement criteria that can help ensure that the AI models deliver more relevant, meaningful, responsible, and ethical outcomes (Figure 1).

Figure 1: Economic Value Definition Wheel

Here is the good news: those desired outcomes, their weights, and the metrics and variables that the AI model will use to make its decisions and take its actions could be embedded into the AI Utility Function, the beating heart of the AI model.

The AI Utility Function is a mathematical function that evaluates different actions or decisions that the AI system can take and chooses the one that maximizes the expected utility or value of the desired outcomes.

Figure 2: The AI Utility Function
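To make the idea more concrete, here is a minimal sketch in Python of a weighted utility function that scores candidate actions and picks the one with the highest expected value. The outcome names, weights, probabilities, and candidate actions are invented for illustration; this is not the article's actual model, just the general pattern it describes.

```python
# Minimal sketch of a weighted AI Utility Function (hypothetical outcomes, weights, and actions).
# Each action is scored as the sum over outcomes of weight * estimated value * probability,
# and the system picks the action with the highest expected utility.
from typing import Dict, Tuple

# Hypothetical desired outcomes and the importance (weight) stakeholders assigned to each.
WEIGHTS: Dict[str, float] = {"fairness": 0.5, "transparency": 0.3, "cost_savings": 0.2}

def expected_utility(action: Dict[str, Tuple[float, float]]) -> float:
    """action maps each outcome name to (estimated value, probability of achieving it)."""
    return sum(WEIGHTS[name] * value * prob for name, (value, prob) in action.items())

def choose_action(actions: Dict[str, Dict[str, Tuple[float, float]]]) -> str:
    """Return the name of the candidate action with the maximum expected utility."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

candidates = {
    "enforce_strictly": {"fairness": (0.9, 0.8), "transparency": (0.7, 0.9), "cost_savings": (0.4, 0.6)},
    "issue_warning":    {"fairness": (0.6, 0.9), "transparency": (0.8, 0.9), "cost_savings": (0.8, 0.8)},
}
print(choose_action(candidates))  # prints whichever hypothetical action scores higher
```

The weights encode the relative importance of each desired outcome, which is exactly the kind of value judgment the article argues legislators and stakeholders would need to make explicit up front.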

Analyzing the performance of the AI Utility Function provides a means to audit the AI model's performance and provide the ultimate transparency into why the AI model made the decisions or took the actions that it did. However, several deficiencies exist today that limit the AI Utility Function's ability to enforce society's laws and regulations.

The table below outlines some AI Utility Function challenges and what AI engineers can do to address these challenges in designing and developing a healthy AI Utility Function.

Table 1: Addressing AI Utility Function Challenges

Maybe I'm too Pollyannaish. But I refuse to turn over control of this conversation to folks who are only focused on what could go wrong. Instead, let's consider those concerns and create something that addresses them and benefits everyone.

Part 2 will examine a real-world case study about using AI to deliver consistent, unbiased, and fair outcomes: robot umpires. We will also discuss how we can create AI models that act more human. Part 3 will pull it all together by highlighting the potential of AI-powered governance to ensure AI's responsible and ethical application to society's laws and regulations.

Read more:

A Different AI Scenario: AI and Justice in a Brave New World Part 1 - DataScienceCentral.com - Data Science Central

Unveiling OpenAI’s Project Q*: Navigating the AGI Superintelligence Landscape – Medium

A Closer Look at OpenAI's Pursuit of Artificial General Intelligence Amidst Leadership Turmoil

In the aftermath of OpenAI's recent upheaval, revelations about Project Q*, the clandestine AGI project, have surfaced, shedding light on the concerns that led to the ousting of former CEO Sam Altman.

Let's delve into the details of Project Q* and its potential implications for the future of artificial intelligence.

Project Q* AGI: A Breakthrough in AI Reasoning

Project Q*, pronounced 'Q-Star,' represents OpenAI's ambitious venture into developing a new AI model. Insider sources suggest that Q* could revolutionize artificial intelligence by significantly enhancing AI reasoning, bringing OpenAI closer to achieving a breakthrough in the development of AGI.

Unlike current generative AI models, AGI is envisioned as an autonomous system with human-level problem-solving capabilities, capable of cumulative learning. Q* has reportedly demonstrated such attributes by outperforming grade-school students in solving mathematical problems, showcasing reasoning skills beyond the scope of current AI technology.

The duration of Project Q*'s development and its potential applications remain unclear, but staff members and board members were briefed about the project before the recent leadership turmoil at OpenAI.

Ethical Concerns and Leadership Turmoil

However, concerns about the ethical implications of Project Q* were raised by several OpenAI researchers. A letter addressed to the board of directors highlighted potential dangers associated with the powerful algorithm, leading to Sam Altman's removal as CEO. The specific ethical concerns were not disclosed, but the move triggered a massive shake-up within the company.

Altman's subsequent reinstatement, prompted by the threat of a mass staff exodus, has left OpenAI with a revamped leadership structure. As Altman returns to the helm and Project Q* looms on the horizon, questions arise about the impact on those within the AI ecosystem.

Should We Be Concerned?

While Project Q* represents a significant stride toward realizing AGI, it raises questions about the ethical considerations and potential risks associated with rapid AI development. Altman's comparisons of AGI to a "median human" have stirred controversy, especially as concerns about AI's impact on job security escalate.

As OpenAI presses forward with AGI development, some fear that the company may prioritize commercial success over user and societal well-being. The balance between the positive outcomes of AGI and the potential risks it poses remains a topic of ongoing debate.

In this ever-evolving landscape of artificial intelligence, Project Q* stands as a testament to the delicate balance between innovation and responsibility. As OpenAI navigates these uncharted waters, the broader implications for the AI industry and society as a whole are yet to unfold.

Follow this link:

Unveiling OpenAI's Project Q*: Navigating the AGI Superintelligence Landscape - Medium

OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say – Reuters

Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman's four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

Photo caption: Sam Altman, CEO of ChatGPT maker OpenAI, arrives for a bipartisan Artificial Intelligence (AI) Insight Forum for all U.S. senators hosted by Senate Majority Leader Chuck Schumer (D-NY) at the U.S. Capitol in Washington, U.S., September 13, 2023. REUTERS/Julia Nikhinson/File Photo

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math where there is only one right answer implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.

In their letter to the board, researchers flagged AI's prowess and potential danger, the sources said, without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed. The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

Altman led efforts to make ChatGPT one of the fastest-growing software applications in history and drew the investment, and computing resources, necessary from Microsoft to get closer to AGI.

In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.

"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman.

Anna Tong and Jeffrey Dastin in San Francisco and Krystal Hu in New York; Editing by Kenneth Li and Lisa Shumaker


The rest is here:

OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say - Reuters

About That Mysterious AI Breakthrough Known As Q* By OpenAI That Allegedly Attains True AI Or Is On The Path Toward Artificial General Intelligence…

Image caption: Is artificial general intelligence (AGI) nearly in hand? (Getty)

In today's column, I am going to walk you through a prominent AI mystery that has caused quite a stir, leading to an incessant buzz across much of social media and garnering outsized headlines in the mass media. This is going to be quite a Sherlock Holmes adventure, a detective's journey that I will be taking you on.

Please put on your thinking cap and get yourself a soothing glass of wine.

The roots of the circumstance involve the recent organizational gyrations and business crisis drama at the AI maker OpenAI, including the abrupt firing and then rehiring of CEO Sam Altman, along with a plethora of related goings-on. My focus will not particularly be the comings and goings of the parties involved. I instead seek to leverage those reported facts primarily as telltale clues associated with the AI mystery that some believe sits at the core of the organizational earthquake.

We shall start with the vaunted goal of arriving at the topmost AI.

The Background Of The AI Mystery

So, here's the deal.

Some suggest that OpenAI has landed upon a new approach to AI that either has attained true AI, which is nowadays said to be Artificial General Intelligence (AGI), or that demonstrably resides on, or at least shows, the path toward AGI. As a fast backgrounder for you, today's AI is considered not yet on par with human intelligence. The aspirational goal for much of the AI field is to arrive at something that fully exhibits human intelligence, which would broadly be considered AGI, or possibly goes even further into superintelligence (for my analysis of what those superhuman aspects might consist of, see the link here).

Nobody has yet been able to find out and report specifically on what this mysterious AI breakthrough consists of (if indeed such an AI breakthrough was devised at all). This could be one of those circumstances where the actual occurrence is a far cry from the rumors that have reverberated in the media. Maybe the reality is that something of modest AI advancement was discovered but doesn't deserve the hoopla that has ensued. Right now, the rumor mill is filled with tall tales that this is the real deal and will supposedly open the door to reaching AGI.

Time will tell.

On the matter of whether the AI has already achieved AGI per se, let's noodle on that postulation. It seems hard to imagine that, if the AI became true AGI, we wouldn't already be regaled with what it is and what it can do. That would be a chronicle of immense magnitude. Could the AI developers involved really keep a lid on such an attainment, as if they had miraculously found the source of the Nile or essentially turned stone into gold?

Seems hard to believe that the number of people likely knowing this fantastical outcome would be utterly secretive and mum for any considerable length of time.

The seemingly more plausible notion is that they arrived at a kind of AI that shows promise toward someday arriving at AGI. You could likely keep that a private secret for a while. The grand question looming over this would be the claimed basis for asserting that the AI is in fact on the path to AGI. Such a basis should conceivably be rooted in substantive, ironclad logic, one so hopes. On the other hand, perhaps the believed assertion of being on the path to AGI is nothing more than a techie hunch.

Those kinds of hunches are at times hit-and-miss.

You see, this is the way that those ad hoc hunches frequently go. You think you've landed on the right trail, but you are actually once again back in the woods. Or you are on the correct trail, but the top of the mountain is still miles upon miles in the distance. Simply saying or believing that you are on the path to AGI is not necessarily the same as being on said path. Even if you are on the AGI path, perhaps the advancement is a mere inch while the distance ahead is still far away. One can certainly rejoice in advancing an inch, don't get me wrong on that. The issue is how much the inch is parlayed into being portrayed, intentionally or inadvertently, as getting us to the immediate doorstep of AGI.

The Clues That Have Been Hinted At

Now that you know the overarching context of the AI mystery, we are ready to dive into the hints or clues that so far have been reported on the matter. We will closely explore those clues. This will require some savvy, Sherlock Holmes-style insight applied to AI.

A few caveats are worth mentioning at the get-go.

A shrewd detective realizes that some clues are potentially solid inklings, while others are wishy-washy or outright misleading. When you are in the fog of war about solving a mystery, there is always a chance that you are bereft of sufficient clues. Later on, once the mystery is completely solved and revealed, only then can you look back and discern which clues were on target and which were of little use. Alluringly, clues can also be a distraction and take you in a direction that doesn't solve the mystery. And so on.

Given those complications, let's go ahead and endeavor to do the best we can with the clues that seem to be available at this time (more clues will undoubtedly leak out in the next few days and weeks; I'll provide further coverage in my column postings as that unfolds).

I am going to draw upon these three foremost, albeit relatively unsubstantiated, clues:

You can find lots of rampant speculation online that uses only the first of those above clues, namely the name of Q*. Some believe that the mystery can be unraveled on that one clue alone. They might not know about the other two above clues. Or they might not believe that the other two clues are pertinent.

I am going to choose to use all three clues and piece them together in a kind of mosaic that may provide a different perspective than others have espoused online about the mystery. Just wanted to let you know that my detective work might differ somewhat from other narratives you might read about elsewhere online.

The First Clue Is The Alleged Name Of The AI

It has been reported widely that the AI maker has allegedly opted to name the AI software as being referred to by the notation of a capital letter Q that is followed by an asterisk.

The name or notation is this: Q*.

Believe it or not, by this claimed name alone, you can go into a far-reaching abyss of speculation about what the AI is.

I will gladly do so.

I suppose it is somewhat akin to the word "Rosebud" in the famous classic film Citizen Kane. I won't spoil the movie other than to emphasize that the entire film is about trying to make sense of that seemingly innocuous word. If you have time to do so, I highly recommend watching the movie, since it is considered one of the best films of all time. There isn't any AI in it, so realize you would be watching it for its incredible plot, splendid acting, eye-popping cinematography, etc., and relishing the deep mystery ardently pursued throughout.

Back to the mystery at hand.

What can we divine from the Q* name?

Those of you who are faintly familiar with everyday mathematical formulations are likely to realize that the asterisk is typically said to represent a so-called star symbol. Thus, the seemingly Q-asterisk name would conventionally be pronounced aloud as Q-star rather than as Q-asterisk. There is nothing especially out of the ordinary in mathematical notations to opt to make use of the asterisk as a star notation. It is done quite frequently, and I will shortly explain why this is the case.

Overall, the use specifically of the letter Q coupled with the star representation does not notably denote anything already popularized in the AI field. Ergo, I am saying that Q* doesn't jump out as meaning this particular AI technique or that particular AI technology. It is simply the letter Q followed by an asterisk (which we naturally assume, by convention, represents a star symbol).

Aha, our thinking caps now come into play.

We will separate the letter Q from its accompanying asterisk. Doing so is seemingly productive. Here's why. The capital letter Q does have significance in the AI field. Furthermore, the use of an asterisk as a star symbol does have significance in the mathematics and computer science arena. By looking at the significance of each distinctly, we can subsequently make a reasonable leap of logic about the meaning associated with them when they are combined.

I will start by unpacking the use of the asterisk.

What The Asterisk Or Star Symbol Signifies

One of the most historically well-known uses of the asterisk in a potentially similar context was the use by the mathematician Stephen Kleene when he defined something known as V*. You might cleverly observe that this notation consists of the capital letter V that is followed by the asterisk. It is pronounced as V-star.

In a paper published in the 1950s, he described the following: suppose you have a set of items named by the capital letter V, and you then decide to make a different set consisting of the various combinations of the items in set V. This new set will by definition contain all the elements of set V, along with every concatenation of them we can come up with. The resulting new set is denoted V* (there are other arcane rules to this formulation, but I am only seeking to give a brief tasting herein).

As an example about this matter, suppose that I had a set consisting of the first three lowercase letters of the alphabet: {a, b, c}. I will go ahead and refer to that set as the set V. We have a set V that consists of {a, b, c}.

You are then to come up with V* by making lots of combinations of the elements in V. You are allowed to repeat the elements as much as you wish. Thus, V* will contain elements like this: {a, b, c, ab, ac, ba, bc, aa, bb, cc, aaa, aab, aac, ...}.

I trust that you see that V* is a combination of the elements of V. This V* is kind of amazing in that it contains all kinds of nifty combinations. I am not going to get into the details of why this is useful; I will merely bring your attention to the fact that the asterisk or star symbol suggests that whatever set V you have, there is another set V* that is much richer and fuller. For those of you keenly interested in mathematics and computer science, see the classic article by Stephen Kleene entitled "Representation of Events in Nerve Nets and Finite Automata," published by Princeton University Press in 1956. You can also readily find lots of explanations of V* online.
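As an illustrative aside (not from Kleene's paper), a few lines of Python can enumerate V* over {a, b, c} up to a chosen length, which makes the supersizing effect easy to see:

```python
# Sketch: enumerate the elements of V* (all finite concatenations of symbols from V) up to a length cap.
from itertools import product

V = ["a", "b", "c"]

def v_star(max_len):
    """Yield the strings of V* with length <= max_len (V* itself is infinite)."""
    yield ""  # strictly speaking, the empty string is also a member of V*
    for n in range(1, max_len + 1):
        for combo in product(V, repeat=n):
            yield "".join(combo)

print(list(v_star(2)))
# ['', 'a', 'b', 'c', 'aa', 'ab', 'ac', 'ba', 'bb', 'bc', 'ca', 'cb', 'cc']
```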

Your overall takeaway here is that when you use a capital letter and join it with an asterisk, the conventional implication in mathematics and computer science is that the capital letter is essentially supersized. You are magnifying whatever the original thing is, in a sense maximizing it to the nth degree.

Are you with me on this so far?

I hope so.

Let's move on and keep this asterisk and star symbol stuff in mind.

The Use Of Asterisk Or Star In The Case Of Capital A

You are going to love this next bit of detective work.

I've brought you up to speed about the asterisk and showed you an easy example involving the capital letter V. Well, in the AI field, there is a famous instance that involves the capital letter A. We have hit a potential jackpot regarding the underlying mystery being solved, some believe.

Allow me to explain.

The famous instance of the capital letter A which is accompanied by an asterisk in the field of AI is shown this way: A*. It is pronounced as A-star.

As an aside, when I was a university professor, I always taught A* in my university classes on AI for undergraduates and graduates. Any budding computer science student learning about AI should be at least aware of the A* and what it portends. This is a foundational keystone for AI.

In brief, a research paper in the 1960s proposed an AI foundational approach to a difficult mathematical problem such as trying to find the shortest path to get from one city to another city. If you are driving from Los Angeles to New York and you have, let's assume, thirty cities that you might go through to get to your destination, which cities would you pick to minimize the time or distance of your planned trip?

You certainly would want to use a mathematical algorithm that can aid in calculating the best or at least a really good path to take. This also relates to the use of computers. If you are going to use a computer to figure out the path, you want a mathematical algorithm that can be programmed to do so. You want that mathematical algorithm to be implementable on a computer and run as fast as possible or use the least amount of computing resources as you can.

The classic paper that formulated A* is entitled A Formal Basis for the Heuristic Determination of Minimum Cost Paths by Peter Hart, Nils Nilsson, and Bertram Raphael, published in IEEE Transactions on Systems Science and Cybernetics, 1968. The researchers said this:

The paper proceeds to define the algorithm that they named as A*. You can readily find online lots and lots of descriptions about how A* works. It is a step-by-step procedure or technique. Besides being useful for solving travel-related problems, the A* is used for all manner of search-related issues. For example, when playing chess, you can think of finding the next chess move as a search-related problem. You might use A* and code it into part of a chess-playing program.
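For readers who want to see the mechanics, here is a compact, self-contained A* sketch in Python. The small city graph, the distances, and the straight-line heuristic values are made up for illustration; the point is the standard A* pattern of always expanding the node with the lowest g + h (cost so far plus estimated cost remaining).

```python
# Minimal A* search sketch over a hypothetical weighted graph of cities.
# f(n) = g(n) + h(n): cost accumulated so far plus a heuristic estimate of the remaining cost.
import heapq

graph = {  # hypothetical road distances between cities
    "LA": {"Phoenix": 370, "Vegas": 270},
    "Phoenix": {"Denver": 820},
    "Vegas": {"Denver": 750},
    "Denver": {"NY": 1780},
    "NY": {},
}
heuristic = {"LA": 2450, "Phoenix": 2140, "Vegas": 2230, "Denver": 1780, "NY": 0}  # straight-line guesses

def a_star(start, goal):
    frontier = [(heuristic[start], 0, start, [start])]  # priority queue of (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, cost in graph[node].items():
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(frontier, (new_g + heuristic[neighbor], new_g, neighbor, path + [neighbor]))
    return None, float("inf")

print(a_star("LA", "NY"))  # (['LA', 'Vegas', 'Denver', 'NY'], 2800) for this made-up graph
```

Because the heuristic never overestimates the true remaining distance in this toy graph, the first time the goal is popped from the priority queue the returned path is optimal, which is the key property that made A* foundational.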

You might be wondering whether the A* has a counterpart possibly known as simply A. In other words, I mentioned earlier that we have V*, which is a variant or supersizing of V. You'll be happy to know that some believe that A* is somewhat based on an algorithm which is at times known as A.

Do tell, you might be thinking.

In the 1950s, the famous mathematician and computer scientist Edsger Dijkstra came up with an algorithm that is considered one of the first articulated techniques to figure out the shortest paths between various nodes in a weighted graph (once again, akin to the city traveling problem and more).

Interestingly, he figured out the algorithm in 1956 while sitting in a café in Amsterdam, and according to his telling of how things arose, the devised technique took only about twenty minutes for him to come up with. The technique became a core part of his lifelong legacy in the field of mathematics and computer science. He took his time to write it up. He published a paper about it three years later, and it is a highly readable and mesmerizing read; see E. W. Dijkstra, "A Note on Two Problems in Connexion with Graphs", published in Numerische Mathematik, 1959.

Some have suggested that the later-devised A* is essentially based on the A of his work. There is historical debate about that. What can be said with relative sensibility is that A* is a much more extensive and robust algorithm for doing similar kinds of searches. I'll leave things there and not get mired in the historical disputes.

I'd like to add two more quick comments about the use of the asterisk symbol in the computer field.

First, those of you who happen to know coding or programming, or the use of computer commands, are perhaps aware that a longstanding use of the asterisk has been as a wildcard character. This is pretty common. Suppose I want you to identify all the words that can be derived from the root word, or letters, "dog". For example, you might come up with the word "doggie" or the word "dogmatic". I could succinctly tell you what to do by putting an asterisk at the end of the root word, like this: dog*. The asterisk is once again considered a star symbol, and it implies that you can put whatever letters you want after the first fixed set of three letters, "dog".
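As a tiny, hypothetical example using Python's standard fnmatch module (the word list is invented), this is the same wildcard idea in practice:

```python
# Sketch: the asterisk as a wildcard, matching anything after the fixed prefix "dog".
import fnmatch

words = ["dog", "doggie", "dogmatic", "cat", "hotdog"]
print(fnmatch.filter(words, "dog*"))  # ['dog', 'doggie', 'dogmatic']
```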

Secondly, another perspective on the asterisk when used with a capital letter is that it is the last or furthest possible iteration or version of something. Let's explore this. Suppose I make a piece of software and I decide to refer to it via the capital letter B. My first version might be referred to as B1. My second version might be referred to as B2. On and on this goes. I might later on have B26, the twenty-sixth version, and much later maybe B8245, which is presumably the eight thousand two hundred forty-fifth version.

A catchy or cutesy way to refer to the end of all of the versions might be to say B*. The asterisk or star symbol in this case tells us that whatever is named as B* is the highest or final of all of the versions that we could ever come up with.

I will soon revisit these points and show you why they are part of the detective work.

The Capital Letter Q Is Considered A Hefty Clue

You are now aware of the asterisk or star symbol. Congratulations!

We need to delve into the capital letter Q.

The seemingly most likely reference to the capital letter Q in the field of AI would indubitably be something known as Q-learning. Some have speculated that the Q might instead be a reference to the work of the famous mathematician Richard Bellman and the optimal value function in the Bellman equation. Sure, I get that. We don't know if that's the reference being made. I'm going to make a detective's instinctive choice and steer toward the Q that is in Q-learning.

I'm using my Ouija board to help out.

Sometimes it is right, sometimes it is wrong.

Q-learning is an important AI technique. Once again, it is a topic that I always covered in my AI classes and that I expected my students to know by heart. The technique makes use of reinforcement learning. You are already generally aware of reinforcement learning by your likely life experiences.

Let's make sure you are comfortable with the intimidatingly fancy phrase "reinforcement learning."

Suppose you are training a dog to perform a handshake or shall we say paw shake. You give the dog a verbal command such as telling the cute puppy to do a handshake. The dog lifts its tiny paw to touch your outreached hand. To reward this behavior, you give the dog a delicious canine treat.

You continue doing this repeatedly. The dog is rewarded with a treat each time it performs the heartwarming trick. If the dog doesn't do the trick when commanded, you don't provide the treat. In a sense, the denial of a treat is almost a penalty too. You could have a more explicit penalty such as scowling at the dog, but usually the more advisable course of action is to focus on rewards rather than also including explicit penalties.

All in all, the dog is being taught by reinforcement learning. You are reinforcing the behavior you desire by providing rewards. The hope is that the dog is somehow within its adorable canine brain getting the idea that doing a handshake is a good thing. The internal mental rules that the dog is perhaps devising are that when the command to do a handshake is spoken, the best bet is to lift its handy paw since doing so is amply rewarded.

Q-learning is an AI technique that seeks to leverage reinforcement learning in a computer or is said to be implemented computationally.

The algorithm consists of mathematically and computationally examining a current state or step and trying to figure out which next state or step would be the best to undertake. Part of this consists of anticipating the potential future states or steps. The idea is to see if the rewards associated with those future states can be added up and provide the maximum attainable reward.

You presumably do something like this in real life.

Consider this. If I choose to go to college, I might get a better-paying job than if I dont go to college. I might also be able to buy a better house than if I didnt go to college. There are lots of possible rewards so I might add them all up to see how much that might be. That is one course or sequence of steps and maybe it is good for me or maybe there is something better.

If I don't go to college, I can start working in my chosen field of endeavor right away. I will have four years of additional work experience compared to those who went to college. It could be that those four years of experience will give me a long-lasting advantage over having used those years to go to college. I consider the down-the-road rewards associated with that path.

Upon adding up the rewards for each of those two respective paths, I might decide that whichever path has the maximum calculated reward is the better one for me to pick. You might say that I am adding up the expected values. To make things more powerful, I might decide to weight the rewards. For example, I mentioned that I am considering how much money I will make. It could be that I also am considering the type of lifestyle and work that I will do. I could give greater weight to the type of lifestyle and work while giving a bit less weight to the money side of things.

The formalized way to express all of this is that an agent, which in the example is me, will be undertaking a series of steps, which we will denote as states, and taking actions that transition the agent from one state to the next state. The goal of the agent entails maximizing a total reward. Upon each state or step taken, a reevaluation will occur to recalculate which next step or state seems to be the best to take.

Notice that I did not beforehand know for sure which would be the best or right steps to take. I am going to make an estimate at each state or step. I will figure things out as I go along. I will use each reward that I encounter as a further means to ascertain the next state or step to take.
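To ground this, here is a minimal tabular Q-learning sketch in Python. The toy five-cell corridor environment, the reward of 1 for reaching the last cell, and the hyperparameter values are all invented for illustration; this is the textbook technique, not OpenAI's Q* or any production system.

```python
# Minimal tabular Q-learning sketch on a toy five-cell corridor (hypothetical environment).
# States are cells 0..4; actions are 0 = move left, 1 = move right; reaching cell 4 pays reward 1.
import random

N_STATES, ALPHA, GAMMA, EPSILON = 5, 0.1, 0.9, 0.3
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # the Q-table: Q[state][action]

def step(state, action):
    """Move left or right within the corridor; reward 1.0 only when the goal cell is reached."""
    next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

for _ in range(500):                      # training episodes
    state = 0
    for _ in range(100):                  # cap the steps per episode
        # epsilon-greedy: mostly exploit the current Q-table, sometimes explore at random
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            action = Q[state].index(max(Q[state]))
        next_state, reward = step(state, action)
        # The Q-learning update: nudge Q(s, a) toward r + gamma * max over a' of Q(s', a')
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state
        if state == N_STATES - 1:         # episode ends at the goal cell
            break

for s, (left, right) in enumerate(Q):
    print(f"state {s}: left={left:.2f} right={right:.2f}")  # 'right' should end up higher in every non-goal cell
```

The single update line is the heart of Q-learning: the value of the state-action pair just taken is nudged toward the observed reward plus the discounted value of the best action available from the next state, with no model of the environment required.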

Given that description, I hope you can recognize that perhaps the dog learning to do a handshake is doing something similar (we can't know for sure). The dog has to decide at each repeated trial whether to do the handshake. It is reacting in the moment, but also perhaps anticipating the potential for future rewards. We do not yet have a means for the dog to tell us what it is thinking, so we don't know for sure what is happening in that mischievous canine mind.

I want to proffer a few more insights about Q-learning and then we will bring together everything that I have so far covered. We need to steadfastly keep in mind that we are on a quest. The quest involves solving the mystery of the alleged AI that might be heading us toward AGI.

Q-learning is often depicted as making use of a model-free and off-policy approach to reinforcement learning. That's a mouthful. We can unpack it.

Here are some of my off-the-cuff definitions that are admittedly loosey-goosey but I believe are reasonably expressive of the model and policy facets associated with Q-learning (I ask for forgiveness from the strict formalists that might view this as somewhat watered down):

Take a look at those definitions. I have noted in italics the model-free and the off-policy. I also gave you the opposites, namely model-based and the on-policy approaches since those are each respectively potentially contrasting ways of doing things. Q-learning goes the model-free and off-policy route.

The significance is that Q-learning proceeds on a trial-and-error basis (considered model-free) and tries to devise rules while proceeding ahead (considered off-policy). This is a huge plus for us. You can use Q-learning without having to come up with a pre-stipulated model of how it is supposed to do things in advance. Likewise, you don't have to come up with a bunch of rules beforehand. The overall algorithm essentially gets things done on the fly as the activity proceeds and self-derives the rules. Of related noteworthiness is that the Q-learning approach makes use of data tables and data values known as Q-tables and Q-values (i.e., the capital letter Q gets a lot of usage in Q-learning).

Okay, I appreciate that you have slogged through this perhaps obtuse or complex topic.

Your payoff is next.

The Mystery Of Q* In Light Of Q And Asterisks

You now have a semblance of what an asterisk means when used with a capital letter. Furthermore, I am leaning you toward assuming that the capital letter Q is a reference to Q-learning.

Let's jam together the Q and the asterisk and see what happens, namely this: Q*.

The combination might mean this: the potential AI breakthrough is labeled Q because it has to do with the Q-learning technique, and maybe the asterisk or star symbol is a clue that Q-learning has somehow been advanced to a notably better version or variant. The asterisk might suggest that this is the highest or most far-out capability of Q-learning that anyone has ever seen or envisioned.

Wow, what an exciting possibility.

This would imply that reinforcement learning as an AI-based approach, one that is model-free and off-policy, can (metaphorically) leap tall buildings and outrun a speeding train in pushing AI closer to being AGI. If you place this into the context of generative AI such as OpenAI's ChatGPT and GPT-4, perhaps those generative AI apps could be much more fluent and seem to convey reasoning if they had this Q* included in them (or it might be included in the GPT-5 that is rumored to be under development).

If only OpenAI has this Q* breakthrough (if there is such a thing), and if Q* does indeed provide a blockbuster advantage, this presumably gives OpenAI a substantial edge over its competition. That takes us to an intriguing and ongoing AI ethics question. For my ongoing and extensive coverage of AI ethics and AI law, see the link here and the link here, just to name a few.

More:

About That Mysterious AI Breakthrough Known As Q* By OpenAI That Allegedly Attains True AI Or Is On The Path Toward Artificial General Intelligence...

The Open AI Drama: What Is AGI And Why Should You Care? – Forbes

Image caption: Evolution of humans and intelligence. (Pixabay)

Artificial general intelligence is something everyone should know and think about. This was true even before the recent OpenAI drama brought the issue to the limelight, with speculation that the leadership shakeup may have been due to disagreements about safety concerns regarding a breakthrough in AGI. Whether that is true or not (and we may never know), AGI is still serious. All of which begs the questions: what exactly is AGI, what does it mean to all of us, and what, if anything, can the average person do about it?

As expected for such a complex and impactful topic, definitions vary:

Given the recent OpenAI news, it is particularly opportune that OpenAI's chief scientist, Ilya Sutskever, actually presented his perspective on AGI just a few weeks ago at TED AI. You can find his full presentation here, but some takeaways:

As we can see, AGI spans many dimensions. The ability to perform generalized tasks implies that AGI will affect the job market far more than the AIs that preceded it. For example, an AI that can read an X-ray and detect disease can assist doctors in their work. However, an AGI that can read the X-ray, understand the patient's personal history, make a recommendation, and explain that recommendation to the patient with a kind bedside manner could conceivably replace the doctor entirely. The potential benefits and risks to world economies and jobs are massive. Add to those the ability for AGIs to learn and produce new AGIs, and the risk becomes existential. It is not clear how humanity would control such an AGI or what decisions it would make for itself.

Hard to say. Experts differ on whether AGI is never likely to happen or is merely a few years away. For example, Geoff Hinton, winner of the Turing Award (the highest prize in computer science), believes AGI is less than 20 years away but that it will not present an existential threat. Meanwhile, his fellow winner of the same award, Yoshua Bengio, states that we do not know how many decades it will take to reach AGI. Much of this discrepancy also has to do with the lack of a broadly agreed-upon definition, as the examples above show.

Yes, I believe so. If nothing else, this week's drama at OpenAI shows how little we know about the technology development that is so fundamental to humanity's future, and how unstructured our global conversation on the topic is. Fundamental questions exist, such as:

Who will decide if AGI has been reached?

Would we even know that it has happened or is imminent?

What measures will be in place to manage it?

How will countries around the world collaborate or fight over it?

And so on.

For those not following The Terminator franchise, Skynet is a fictional, human-created machine network that becomes self-aware and decides to destroy humanity. I don't think this is cause for major concern. While certain parts of the AGI definition (particularly the idea of AGIs creating future AGIs) are heading in this direction, and while movies like The Terminator show a certain view of the future, history has shown us that harm from technology usually comes from intentional or accidental human misuse of it. AGI may eventually reach some form of consciousness that is independent of humans, but it seems far more likely that human-directed AI-powered weapons, misinformation, job displacement, environmental disruption, etc. will threaten our well-being before that.

I believe the only thing each of us can do is to be informed, be AI-literate, and exercise our rights, voice our opinions and apply our best judgement. The technology is transformative. What is not clear is who will decide how it will transform.

Along these lines, less than a month ago, U.S. President Joe Biden issued an executive order on AI, addressing a wide range of near-term AI concerns, from individual privacy to responsible AI development to job displacement and the necessary upskilling. While not targeted directly at AGI, these orders and similar legislation can direct responsible AI development in the short term, prior to AGI, and hopefully continue to do so through to AGI.

It is also worth noting that AGI is unlikely to be a binary event, absent one day and present the next. ChatGPT appeared to many people as if it came from nowhere, but it did not. It was preceded in 2019 and 2020 by GPT-2 and GPT-3. Both were very powerful but harder to use and far less well known. While ChatGPT (GPT-3.5 and beyond) represented a major advance, the trend was already in place.

Similarly, we will see AGI coming. For example, a Microsoft research team recently reported that GPT-4 has shown signs of human reasoning, a step toward AGI. As expected, these reports are often disputed, with others claiming that such observations are more indicative of imperfect testing methodologies than of actual AGI.

The real question is: what will we do about AGI before it arrives?

That decision should be made by everyone. The OpenAI drama continues, with new developments daily. However, no matter what happens with OpenAI, the AGI debate and its issues are here to stay, and we will need to deal with them, ideally sooner rather than later.

I am an entrepreneur and technologist in the AI space and the CEO of AIClub and AIClubPro - pioneering AI Literacy for K-12 students and individuals worldwide (https://corp.aiclub.world and https://aiclubpro.world). I am also the author of Fundamentals of Artificial Intelligence - the first AI Textbook for Middle School and High School students.

Previously, I co-founded ParallelM and defined MLOps (Production Machine Learning and Deep Learning). MLOps is the practice of full-lifecycle management of Machine Learning and AI in production. My background is in software development for distributed systems, focusing on machine learning, analytics, storage, I/O, file systems, and persistent memory. Prior to ParallelM, I was Lead Architect/Fellow at Fusion-io (acquired by SanDisk), developing new technologies and software stacks for persistent memory, the Non-Volatile Memory File System (NVMFS) and application acceleration. Before Fusion-io, I was the technology lead for server flash at Intel, heading up server platform non-volatile memory technology development and partnerships, as well as foundational work on NVM Express.

Before that, I was Chief Technology Officer at Gear6, where we built clustered computing caches for high performance I/O environments. I got my PhD at UC Berkeley doing research on clusters and distributed storage. I hold 63 patents in distributed systems, networking, storage, performance, key-value stores, persistent memory and memory hierarchy optimization. I enjoy speaking at industry and academic conferences and serving on conference program committees. I am currently co-chairing USENIX OpML 2019 - the first industry conference on Operational Machine Learning. I also serve on the steering committees of both OpML and HotStorage.

See more here:

The Open AI Drama: What Is AGI And Why Should You Care? - Forbes

AI doesn't cause harm by itself. We should worry about the people who control it – The Guardian

Opinion

The chaos at OpenAI reveals contradictions in the way we think about the technology

Sun 26 Nov 2023 02.30 EST

At times it felt less like Succession than Fawlty Towers, not so much Shakespearean tragedy as Laurel and Hardy farce. OpenAI is the hottest tech company today thanks to the success of its most famous product, the chatbot ChatGPT. It was inevitable that the mayhem surrounding the sacking, and subsequent rehiring, of Sam Altman as its CEO would play out across global media last week, accompanied by astonishment and bemusement in equal measure.

For some, the farce spoke to the incompetence of the board; for others, to a clash of monstrous egos. In a deeper sense, the turmoil also reflected many of the contradictions at the heart of the tech industry. The contradiction between the self-serving myth of tech entrepreneurs as rebel disruptors, and their control of a multibillion-dollar monster of an industry through which they shape all our lives. The tension, too, between the view of AI as a mechanism for transforming human life and the fear that it may be an existential threat to humanity.

Few organisations embody these contradictions more than OpenAI. The galaxy of Silicon Valley heavyweights, including Elon Musk and Peter Thiel, who founded the organisation in 2015, saw themselves both as evangelists for AI and as heralds warning of the threat it posed. "With artificial intelligence we are summoning the demon," Musk portentously claimed.

The combination of unrestrained self-regard for themselves as exceptional individuals conquering the future, and profound pessimism about other people and society, has made fear of the apocalypse being around the corner almost mandatory for the titans of tech. Many are preppers, survivalists prepared for the possibility of a Mad Max world. "I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to," Altman told the New Yorker shortly after OpenAI was created. The best entrepreneurs, he claimed, are "very paranoid, very full of existential crises". Including, inevitably, about AI.

OpenAI was created as a non-profit-making charitable trust, the purpose of which was to develop artificial general intelligence, or AGI, which, roughly speaking, is a machine that can accomplish, or surpass, any intellectual task humans can perform. It would do so, however, in an ethical fashion to benefit humanity as a whole.

Then, in 2019, the charity set up a for-profit subsidiary to help raise more investment, eventually pulling in more than $11bn (£8.7bn) from Microsoft. The non-profit parent organisation, nevertheless, retained full control, institutionalising the tension between the desire to make a profit and doomsday concerns about the products making the profit. The extraordinary success of ChatGPT only exacerbated that tension.

Two years ago, a group of OpenAI researchers left to start a new organisation, Anthropic, fearful of the pace of AI development at their old company. One later told a reporter that there was a 20% chance that a rogue AI would destroy humanity within the next decade. That same dread seems to have driven the attempt to defenestrate Altman and the boardroom chaos of the past week.

One may wonder about the psychology of continuing to create machines that one believes may extinguish human life. The irony, though, is that while fear of AI is exaggerated, the fear itself poses its own dangers. Exaggerated alarm about AI stems from an inflated sense of its capabilities. ChatGPT is superlatively good at predicting what the next word in a sequence should be; so good, in fact, that we imagine we can converse with it as with another human. But it cannot grasp, as humans do, the meanings of those words, and it has negligible understanding of the real world. We remain far from the dream of artificial general intelligence. "AGI will not happen," Grady Booch, chief scientist for software engineering at IBM, has suggested, "even in the lifetime of your children's children."
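
To make that point concrete, here is a minimal, purely illustrative Python sketch of what "predicting the next word" means: score every candidate word, turn the scores into probabilities, and emit a likely continuation. The candidate words and scores are invented for the example; this is not OpenAI's code, vocabulary or model, just the general recipe scaled down to a toy.

```python
import math

# Toy illustration of next-word prediction. A language model assigns a raw
# score (logit) to every candidate token, converts the scores into a
# probability distribution with a softmax, and emits a likely continuation.
# The candidate words and scores below are invented for this example.

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(logits.values())
    exps = {tok: math.exp(score - m) for tok, score in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Hypothetical scores for the word following "The cat sat on the"
logits = {"mat": 4.1, "sofa": 2.3, "moon": -0.5, "equation": -2.0}

probs = softmax(logits)
next_word = max(probs, key=probs.get)

print(probs)       # roughly {'mat': 0.85, 'sofa': 0.14, 'moon': 0.007, ...}
print(next_word)   # 'mat' -- a statistically likely continuation, chosen
                   # without any grasp of what a mat actually is
```

Scaled up to a vast vocabulary and a network trained on much of the internet, that recipe produces strikingly fluent text, yet nothing in the procedure involves knowing what a "mat" is.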

For those in Silicon Valley who disagree, believing AGI to be imminent, humans need to be protected through "alignment": ensuring that AI is aligned with human values and follows human intent. That may seem a rational way of countervailing any harm AI might cause. Until, that is, you start asking what exactly are human values, who defines them, and what happens when they clash?

Social values are always contested, and particularly so today, in an age of widespread disaffection driven often by the breakdown of consensual standards. Our relationship to technology is itself a matter for debate. For some, the need to curtail hatred or to protect people from online harm outweighs any rights to free speech or privacy. This is the sentiment underlying Britain's new Online Safety Act. It's also why many worry about the consequences of the law.

Then there is the question of disinformation. Few people would deny that disinformation is a problem and will become even more so, raising difficult questions about democracy and trust. The question of how we deal with it remains, though, highly contentious, especially as many attempts to regulate disinformation result in even greater powers being bestowed on tech companies to police the public.

Meanwhile, another area of concern, algorithmic bias, highlights the weaknesses of arguments for alignment. The reason algorithms are prone to bias, especially against minorities, is precisely because they are aligned to human values. AI programmes are trained on data from the human world, one suffused with discriminatory practices and ideas. These become embedded into AI software, too, whether in the criminal justice system or healthcare, facial recognition or recruitment.
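
A minimal sketch of that mechanism, with invented data and a deliberately simplistic Python "model" (no real system works on four records, and the group labels here are hypothetical): trained only on historical outcomes, it learns whatever bias those outcomes contain and serves it back as an apparently objective score.

```python
from collections import defaultdict

# Toy illustration with invented data (not any real hiring system): a "model"
# fitted to historical decisions can only learn the pattern in those
# decisions, including any discriminatory pattern, and then repeats it as a
# neutral-looking prediction.

historical_decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# "Training": estimate the historical hire rate for each group.
totals, hires = defaultdict(int), defaultdict(int)
for group, hired in historical_decisions:
    totals[group] += 1
    hires[group] += int(hired)

def predicted_score(group):
    """Score a new candidate using nothing but the learned historical rate."""
    return hires[group] / totals[group]

print(predicted_score("group_a"))  # 0.75 -- the past preference becomes a prediction
print(predicted_score("group_b"))  # 0.25 -- the past disadvantage is reproduced
```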

The problem we face is not that machines may one day exercise power over humans. That is speculation unwarranted by current developments. It is rather that we already live in societies in which power is exercised by a few to the detriment of the majority, and that technology provides a means of consolidating that power. For those who hold social, political and economic power, it makes sense to project problems as technological rather than social and as lying in the future rather than in the present.

There are few tools useful to humans that cannot also cause harm. But they rarely cause harm by themselves; they do so, rather, through the ways in which they are exploited by humans, especially those with power. That, and not fantasy fears of extinction, should be the starting point for any discussion about AI.

Kenan Malik is an Observer columnist


See the original post here:

AI doesn't cause harm by itself. We should worry about the people who control it - The Guardian