
Figuring Out What Artificial General Intelligence (AGI) Consists Of Is Enormously Vital And Mindfully On The Minds Of AI Researchers At Google DeepMind – Forbes

Read the original:

Figuring Out What Artificial General Intelligence (AGI) Consists Of Is Enormously Vital And Mindfully On The Minds Of AI Researchers At Google...

Read More..

Transformers probably won’t make AI as smart as humans. Others might. – Business Insider

The transformer technology powering tools like ChatGPT may have limitations. FLORENCE LO/Reuters

The groundbreaking work of a bunch of Googlers in 2017 introduced the world to transformers, the neural networks that power popular AI products today.

They power the large language model, or LLM, beneath OpenAI's ChatGPT, the chatbot whose explosion onto the scene last year prompted Bill Gates to declare "the age of AI has begun."

The mission for some AI entrepreneurs now is to realize a sci-fi vision and create artificial general intelligence (AGI): AI that appears as intelligent as a human.

But while transformers can power ChatGPT, a preprint paper published by Google researchers last month suggests they might not be able to make the human-like abstractions, extrapolations, and predictions that would imply we're at AGI.

ChatGPT merely responds to users' prompts with text using the data a human has trained it on. In its earliest public form, the chatbot had no knowledge of events beyond September 2021, which it had to acknowledge every time someone asked about more recent topics.

Testing transformers' ability to move beyond the data, the Google researchers described "degradation" of their "generalization for even simple extrapolation tasks."

This has raised the question of whether human-like AI is even possible. Another question is whether different technologies might get us there.

Some researchers are testing alternatives to figure that out, with another new paper suggesting that there might be a better model waiting in the wings.

Research submitted to the open-access repository arXiv on December 1 by Albert Gu, an assistant professor in the machine-learning department at Carnegie Mellon, and Tri Dao, chief scientist at Together AI, introduces a model called Mamba.


Mamba is a state-space model, or SSM, and, according to Gu and Dao, it seems capable of beating transformers on performance in a bunch of tasks.

A caveat: Research submitted to arXiv is moderated but not necessarily peer-reviewed. This means the public gets to see research faster, but it isn't necessarily reliable.

Like LLMs, SSMs are capable of language modeling, the process through which chatbots like ChatGPT function. But SSMs do this with mathematical models of different "states" that users' prompts can take.
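To make the idea of modeling "states" concrete, here is a minimal, hypothetical sketch of the discrete linear state-space recurrence that underlies SSMs. It is an illustration only, not Mamba's actual architecture (Gu and Dao's model adds input-dependent "selective" parameters and a hardware-aware scan); the matrices A, B, and C and the toy dimensions below are assumptions chosen purely for the example.

```python
import numpy as np

# Toy discrete state-space recurrence (illustrative only, not Mamba itself):
#   h_t = A @ h_{t-1} + B * x_t   -- update a fixed-size hidden state from the input
#   y_t = C @ h_t                 -- read an output off the state
state_dim = 4                      # arbitrary toy size

A = 0.9 * np.eye(state_dim)        # how the previous state carries over
B = 0.1 * np.ones(state_dim)       # how the current input enters the state
C = np.ones(state_dim)             # how the state is read out

def ssm_scan(inputs):
    """Run the recurrence over a scalar input sequence, one step per token."""
    h = np.zeros(state_dim)
    outputs = []
    for x_t in inputs:
        h = A @ h + B * x_t              # constant-cost update of a fixed-size state
        outputs.append(float(C @ h))     # output depends on the compressed state, not the full history
    return outputs

print(ssm_scan([1.0, 0.0, 0.5]))
```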

Gu and Dao's research states: "Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics."

On language modeling, Mamba "outperforms transformers of the same size and matches transformers twice its size, both in pretraining and downstream evaluation," Gu and Dao noted.

Writing on X, Dao also noted how a feature particular to SSMs means Mamba is able to generate language responses five times faster than a transformer.
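A plausible mechanical reason for that speed gap, sketched in general terms rather than taken from Dao's post: when a transformer generates each new token, it attends over, and caches, every token that came before it, so per-token cost and memory grow as the response gets longer. A recurrence like the toy SSM sketched above carries only a fixed-size state, so each additional token costs roughly the same no matter how long the sequence has become.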


In response, Dr Jim Fan, a research scientist at software company Nvidia, wrote on X that he's "always excited by new attempts to dethrone transformers. We need more of these."

He gave "kudos" to Dao and Gu "for pushing on alternative sequence architectures for many years now."

ChatGPT was a landmark cultural event that sparked an AI boom. But its technology looks unlikely to lead the industry to its promised land of human-like intelligence.

If repeated testing confirms that Mamba does consistently outperform transformers, though, it could inch the industry closer.


Continue reading here:

Transformers probably won't make AI as smart as humans. Others might. - Business Insider

Read More..

AGI: What is Artificial General Intelligence, the next (and possibly final) step in AI – EL PAÍS USA

Before Sam Altman was ousted as OpenAI CEO for a brief span of four days, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence that they claimed could threaten humanity, according to a report by Reuters, whose sources cited the letter as one factor that led to Altman's temporary firing.

After the report was made public, OpenAI revealed a project called Q* (pronounced Q-Star) to staffers. Subsequent reports claim that some employees of the company believe that Q* could be a breakthrough in the search for what's known as Artificial General Intelligence (AGI), which the company defines as autonomous systems that surpass humans in most economically valuable tasks.

The term has been credited to Mark Gubrud, a physicist and current adjunct professor in the Peace, War and Defense curriculum at the University of North Carolina. He used it as early as 1997 in a discussion about the implications of completely automated military production and operations. Around 2002, the term was reintroduced and popularized by Shane Legg and Ben Goertzel, two entrepreneurs involved in AI research.

Goertzel is the founder and CEO of SingularityNET, a project that seeks to democratize access to artificial intelligence, and has worked with several organizations linked to AI research. He is also the chairman of the OpenCog foundation, which seeks to build an open-source artificial intelligence network, and managed OpenCog Prime, a system architecture that sought to achieve AGI at the human level and ultimately beyond.

AGI is a hypothetical type of artificial intelligence that would be capable of processing information at a human level or even exceeding human capabilities. It would be a machine or a network capable of doing the same range of tasks humans are capable of, and it would be able to learn to do anything a human can do: for example, engaging in nuanced interactions, understanding context and emotions, transferring learning between tasks, and adapting to new tasks and environments without explicit programming. This type of system doesn't exist, and complete forms of AGI are still speculative. Several researchers are working on developing an AGI; to that end, many of them are interested in open-ended learning, which would allow AI systems to continuously learn the way humans do.

In 2023, after OpenAI released GPT-4, Microsoft said that the system could be viewed as an early and incomplete version of an AGI system. However, currently, no system has been demonstrated to meet the criteria for AGI, and there are questions about its feasibility. While some experts believe that an AGI system could be achieved in the next few months or years, others think it will take decades, and that it could be the biggest technological advance of the century.

Q* is an OpenAI project that allegedly led to Sam Altman's firing as CEO after some employees raised concerns suggesting that the system might be an AGI. So far, there have only been reports about Q* performing mathematical reasoning, and there is no evidence that the system is a development toward AGI. Several other researchers were dismissive of the claims.

Most AGI research projects focus on whole brain simulation, in which a cerebral model simulates a biological brain in detail. The goal is to make the simulation faithful enough to the biological original that it can mimic its behavior. For this to be achieved, research in neuroscience and computer science, including animal brain mapping and simulation and the development of faster machines, as well as other areas, is necessary.

Several public figures, from Bill Gates to Stephen Hawking, have raised concerns about the potential risks of AI for humans, concerns that have been echoed by AI researchers like Stuart J. Russell, who is known for his contributions to AI. A 2021 review of the risks associated with AGI found the following: AGI removing itself from the control of its human owners or managers; being given or developing unsafe goals; the development of unsafe AGI; AGIs with poor ethics, morals, and values; inadequate management of AGI; and existential risks. In 2023, the CEOs of several AI research labs, along with other industry leaders and researchers, issued a statement that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Another risk posed by AI systems is mass unemployment. Since ChatGPT became popular, several workplaces have cut their workforce and started relying on AI. The arrival of AGI could result in millions of people losing their jobs, with office workers being most exposed.


Read more:

AGI: What is Artificial General Intelligence, the next (and possibly final) step in AI - EL PAÍS USA

Read More..

The Simple Reason Why AGI (Artificial General Intelligence) Is Not Achievable – Medium


We're living in an era where the line between science fiction and reality is blurring faster than ever. Everywhere you look, there's talk about Artificial General Intelligence (AGI), a form of AI that can understand, learn, and apply knowledge across a broad range of tasks, much like a human. It's a hot topic, a cool conversation piece, and a tantalizing technological dream.

But here's the kicker: it's not going to happen. And the reason is simple yet profound.

First off, let's get one thing straight: I'm not a cynic. I'm not the guy who says, "That's impossible!" just for kicks. But when it comes to AGI, there's a fundamental issue that most tech prophets conveniently overlook. It's about understanding human intelligence itself.

Think about it. We, as a species, are still grappling with the complexities of our own minds. Neuroscience, psychology, philosophy: they've all been chipping away at the enigma of human consciousness for centuries, yet we're nowhere close to fully understanding it. How, then, can we expect to create a generalized form of intelligence that mimics our own?

The advocates of AGI often talk about the exponential growth of technology, Moore's Law, and all that jazz. Sure, we've made leaps and bounds in computational power and machine learning. But AGI isn't just a fancier algorithm or a more powerful processor. It's about replicating the nuanced, often irrational, and deeply complex nature of human thought and reasoning. And that's where the overzealous optimism falls flat.

Let's dive deeper. Human intelligence isn't just about processing information. It's about emotion, intuition, morality, creativity, and a myriad of other intangibles that machines, as of now, can't even begin to comprehend. You can't code empathy. You can't quantify the soul-stirring depth of a poem. How do you program a machine to understand the nuanced ethics of a complicated situation, or to appreciate the beauty of a sunset?

But wait, there's more. There's an inherent arrogance in assuming we can create an AGI. It's like saying, "We can play God." But can we? We're part of nature, not above it. Our attempts to

Read this article:

The Simple Reason Why AGI (Artificial General Intelligence) Is Not Achievable - Medium

Read More..

Nvidia CEO Jensen Huang says AGI will be achieved in 5 years – Business Insider

Nvidia CEO Jensen Huang said AGI will be achieved in five years during the 2023 NYT DealBook Summit. Sam Yeh / Contributor

Jensen Huang, the CEO of Nvidia, one of the companies that is fueling the AI revolution, predicts that we may be able to see artificial general intelligence, or AGI, within the next five years.

During the 2023 New York Times DealBook Summit, the outlet's Andrew Ross Sorkin asked Huang if he expected to see AGI in the next 10 years.

"By depending on how you define it, I think the answer is yes," Huang replied.

At the summit, Huang defined AGI as a piece of software or a computer that can complete tests which reflect basic intelligence that's "fairly competitive" to that of a normal human.

"I would say that within the next five years, you're gonna see, obviously, AIs that can achieve those tests," Huang said.

While the CEO didn't specify what exactly he thinks AGI would look like, Ross Sorkin asked if AGI would refer to AI that can design the chips Nvidia is currently making, to which Huang agreed.

"Will you need to have the same staff that designs them?" Sorkin asked as a follow-up, referring to the development of Nvidia's chips.

"In fact, none of our chips are possible today without AI," Huang said.

He specified that the H100 chips he said Nvidia is shipping today were designed with help from a number of AIs.

"Software can't be written without AI, chips can't be designed without AI, nothing's possible," he concluded on the point of AI's potential.

Even though Huang said that AI is developing faster than he expected, he said the technology hasn't shown signs it can exhibit or surpass complex human intelligence just yet.

"There's no question that the rate of progress is high," he said. "But there's a whole bunch of things that we can't do yet."

"This multi-step reasoning that humans are very good at, AI can't do," he said.

The CEO's thoughts on AGI come as some business leaders sound the alarm about what they personally consider to be AGI.

Ilya Sutskever, cofounder of OpenAI, the company behind ChatGPT, said that AI in its most advanced form will create new problems such as a surge in fake news and cyberattacks, automated AI weapons, and even "infinitely stable dictatorships."

Ian Hogarth, who has invested in more than 50 AI companies, said that a future "God-like AI" would lead to the "obsolescence or destruction of the human race" if the rapid development of the technology isn't regulated.

Huang isn't the only tech leader who believes that AGI will be achieved in the near future.

In February, ex-Meta executive John Carmack said that AGI will be achieved by the 2030s and be worth trillions of dollars.

A few months later, Demis Hassabis, CEO and cofounder of DeepMind, Google's AI division, predicted that AI that is as powerful as the human brain would arrive within the next few years.

Nvidia didn't immediately respond to Business Insider's request for comment.


Read the original here:

Nvidia CEO Jensen Huang says AGI will be achieved in 5 years - Business Insider

Read More..

That OpenAI Mind-Boggler Whether Mysterious Q* Led To The Unexplained Firing Of The CEO And How Artificial General Intelligence (AGI) Might Have…

What transpired at OpenAI. Getty

In today's column, I am going to do a follow-up to my recent analysis, which went viral, of the mysterious Q* that OpenAI has apparently devised (see the link here), and I will be pressing forward to explore yet another presumably allied conundrum, namely what led to or brought about the firing and then rehiring of the CEO of OpenAI.

That pressing question about what really went down regarding the executive-level twists and turns at OpenAI seems to be a top-notch best-kept secret of a Fort Knox quality. OpenAI and the parties involved in the situation are amazingly tightlipped. The world at large seems to only know broadly what transpired, but not why it happened. Meanwhile, all manner of wildly concocted speculation has entered the vacuum created by not having anyone on the inside opt to spill the beans.

Get yourself mentally ready for a sharp bit of puzzle-piece arranging and a slew of reasoned conjecture.

Please join me in a Sherlock Holmes-style examination of how a scarce set of clues might be pieced together to make an educated guess at how the matter arose. We will wander through a range of topics such as AI, Artificial General Intelligence (AGI), Responsible AI and AI ethics, business organizational dynamics and marketplace signaling, the mysterious Q*, governing board dynamics, C-suite positioning, etc. I aim to proceed in a sensible and reasoned manner, seeking to connect the sparse dots, and aspire to arrive at something of a satisfying or at least informative result.

Some readers might recognize that I am once again invoking the investigative prowess of Sherlock Holmes, as I had done in my prior analysis, and believe that once again putting on the daunting detective cap and lugging around the vaunted clue-inspecting magnifying glass is a notably fruitful endeavor.

As Sherlock was known to have stated, we need to proceed on each mystery by abiding by this crucial rule: To begin at the beginning.

Let's therefore begin at the beginning.

Essential Facts Of The Mysterious Case

You undoubtedly know from the massive media coverage of the last several weeks that the CEO of OpenAI, Sam Altman, was let go by the board of OpenAI and subsequently, after much handwringing and machinations, he has rejoined OpenAI. The board has been recomposed and will purportedly be undergoing additional changes. OpenAI has stated that an independent review of the varied circumstances will be undertaken, though no timeline has been stated nor whether or to what degree the review will be made publicly available.

The basis for this seemingly earth-shattering firing-rehiring circumstance still remains elusive and ostensibly unknown (well, a small cohort of insiders must know).

I say earth-shattering for several cogent reasons. First, OpenAI has become a household name due to being the company that makes ChatGPT. ChatGPT was released to the public a year ago and reportedly has 100 million active weekly users at this time. The use of generative AI has skyrocketed and become an ongoing focus in our daily lives. Sam Altman became a ubiquitous figurehead for the AI field and has been the constant go-to for quotes and remarks about where AI is heading.

From all outward appearances, there hasn't seemed to be anything that the CEO has said or done on the public stage that would warrant the rather serious action of suddenly and unexpectedly firing him. We might understand such an abrupt action if there were some ongoing gaffes or outlandish steps that precipitated the harsh disengagement. None seems to be on the docket. The firing appears to have come totally out of the blue.

Another consideration is that a straying CEO might be taken down a peg or two if they are somehow misrepresenting a firm or otherwise going beyond an acceptable range of behavior. Perhaps a board might give the CEO a forewarned wake-up call, and this often leaks to the outside world. Everyone at that juncture kind of realizes that the CEO is on thin ice. This didn't happen in this case.

The bottom line here is that this was someone who is a widely known spokesperson and luminary in the AI arena who without any apparent provocation was tossed out of the company that he co-founded. Naturally, an expectation all told would be that an ironclad reason and equally solid explanation would go hand in hand with the severity of this startling turn of events. None has been stipulated per se, other than some vagaries, which I will address next.

We need to see what clues to this mystery might exist and try to piece them together.

The Blog That Shocked The World

First, as per the OpenAI official blog site and a posting on the fateful date of November 17, 2023, entitled "OpenAI Announces Leadership Transition," we have this stated narrative (excerpted):

I shall delicately parse the above official communique excerpts.

According to the narrative, the stated basis for the undertaken firing is that the CEO was "not consistently candid in his communications with the board."

Mark that on your bingo card as "not consistently candid."

An additional takeaway, though somewhat more speculative, involves the line that the firm was structured to "ensure that artificial general intelligence benefits all humanity." Some have suggested that perhaps the lack of candidness refers to the notion of ensuring that artificial general intelligence benefits all humanity.

These are our two potential clues at this juncture of this analysis: first, that the CEO was said to be "not consistently candid" with the board, and second, that the firm was structured to ensure that artificial general intelligence benefits all humanity.

Okay, with those seemingly independent clues, let's leverage the prevailing scuttlebutt amidst social media chatter and opt to tie those two elements directly together.

Before making that leap, I think it wise to mention that it could be that those two aspects have nothing to do with each other. Maybe we are combining two clues that are not in the same boat. Down the road, if the mystery is ever truly revealed, we will in hindsight presumably learn whether they are mates or not. Just keep that caveat in mind, thanks.

One other thing to note is that the blog makes a rather stark reference to artificial general intelligence, which is commonly referred to as AGI, and potentially has great importance here. In case you don't already know, AGI is the type of AI that we believe someday will be somehow attained and will be on par with human intelligence (possibly even surpassing humans and becoming superintelligent). We aren't there yet, despite those blaring headlines that suggest otherwise. There are grave concerns that AGI is going to be an existential risk, potentially enslaving or wiping out humankind, see my discussion at the link here. Another perspective of a more happy-face nature is that maybe AGI will enable us to cure cancer and aid in ensuring the survival and thriving of humanity, see my analysis at the link here.

My reason for emphasizing that we are discussing AGI is that you could assert that AGI is extremely serious stuff. Given that AGI is supposedly going to either destroy us all or perhaps lift us to greater heights than we ever imagined, we are dealing with something more so than the everyday kind of AI that we have today. Our conventional daily encounters with AI-based systems are extremely tame in comparison to what is presumably going to happen once we arrive at AGI (assuming we ultimately do).

The stakes with AGI are sky-high.

Let's brazenly suggest that the issue of candidness concerns AGI. If so, this is a big deal because AGI is a big deal. I trust that you can clearly see why tensions might mount. Anything to do with the destruction of humanity or the heralded uplifting of humanity is undoubtedly going to get some hefty attention. This is the whole can of worms on the table.

Perhaps the CEO was perceived by the board or some portion of the board, as not being entirely candid about AGI. It could be that the perception was that the CEO was less than fully candid about a presumed AGI that might be in hand or an AI breakthrough that was on the path to AGI. Those board members might have heard about the alleged AGI or path to AGI from other sources within the firm and been surprised and dismayed that the CEO had not apprised them of the vital matter.

What nuance or consideration about AGI would likely be at issue for the OpenAI board in terms of their CEO?

One possible answer sits at the feet of the mysterious Q*. As I discussed in my prior column that covered Q*, see the link here, some have speculated that a kind of AI breakthrough is exhibited in an AI app called Q* at OpenAI. We don't know yet what it is, nor if the mysterious Q* even exists. Nonetheless, let's suppose that within OpenAI there is an AI app known as Q* and that it was believed at the time to be either AGI or on the path to AGI.

Thus, we might indeed have the aura of AGI in the middle of this as showcased by the Q*. Keep in mind that there doesn't have to be an actual AGI or even a path-to-AGI involved. The perception that Q* is or might be an AGI or on the path to AGI is sufficient in this instance. Perceptions are key. I will say more about this shortly.

An initial marketplace reaction to the firing of the CEO was that there must have been some kind of major financial or akin impropriety for taking such a radical step by the board. It seems hard to imagine that merely being less than candid about some piece of AI software could rise to an astoundingly dramatic and public firing.

According to reporting in the media by Axios, we can apparently take malfeasance out of this picture:

You might be wondering what the norm is for CEOs getting booted. CEOs are usually bounced out due to malfeasance of one kind or another, or they are steadfastly shoved out because they either exhibited poor leadership or failed to suitably communicate with the board. In this instance, the clues appear to aim primarily toward the communications factor and perhaps edge slightly into the leadership category.

What Goes On With Boards

I'd like to briefly bring you up to speed about boards in general. Doing so is essential to the mystery at hand.

In my many years of serving in the C-suite as a top-level tech executive, I've had lots of experience interacting with boards. A few insightful tidbits might be pertinent to bring up here. I'll for now speak in general terms.

A board of directors is supposed to oversee and advise a company, including being kept informed by the CEO and also gauging whether the CEO is doing a dutiful job in the vaunted role. The board serves as a check and balance regarding what the CEO is doing. This is an important body, and its members are legally bound to perform their duties.

The composition of a board varies from firm to firm. Sometimes the board members see everything on an eye-to-eye basis and wholeheartedly agree with each other. Other times, the board members are split as to what they perceive is occurring at the firm. You might think of this as similar to the U.S. Supreme Court, namely, we all realize that some of the justices will perceive things one way while others of the court will see things another way. Votes on particular issues can swing from everyone being in agreement to having some vote for something and others voting against it.

A typical board is set up to deal with splintered voting. For example, a board might have, say, seven members, and if they don't see eye to eye on a proposed action, the majority will prevail in a vote. Suppose a vote is taken and three members are in favor of some stipulated action, while three other members are opposed to the proposed action. The swing vote of the seventh member will then decide which way the matter goes.

In that sense, there is often behind-the-scenes lobbying that takes place. If the board already realizes that a contested three versus three tie is coming up, the odds are that the seventh tiebreaker will get an earful from both sides of the issue. There can be tremendous pressure on the seventh member. Compelling and convincing arguments are bound to be conveyed by both sides of the contentious topic.

It is possible that in the heat of battle, so to speak, a board member in that tie-breaking predicament will base their vote on what they believe to be right at the time of the vote. Later on, perhaps hours or days hence, it is conceivable that in hindsight the tiebreaker might realize that they inadvertently voted in a manner they now regret. They may wish to recant their vote, but it is usually water under the bridge and there isn't a means to remake history. The vote was cast when it was cast. They will have to live with the decision they made at the time of the fracas.

This is going to be helpful food for thought and will be worth remembering later on during this puzzle-solving process.

Responsible AI And AI Ethics

We are going to take a seemingly offshoot path here for a little bit and will come back around to the topic of the board and the CEO. I pledge that this path into the trees of the forest will serve a useful purpose.

Sherlock Holmes was a keen observer of clues that seemed outside the purview of a mystery and yet turned out to be quite vital to solving the mystery. His famous line was this: It has long been an axiom of mine that the little things are infinitely the most important.

Time to invoke that principle.

Hang in there as I lay the groundwork for what will come up next.

I would like to bring into this matter the significance of what is known as Responsible AI and the rising interest in AI ethics and AI law. I've covered the importance of AI ethics and AI law extensively, including the link here and the link here, just to name a few. The tsunami of AI that is being rushed out into society and becoming pervasive in our lives has a lot of good to be had but also has a lot of rottenness to be had too. Today's AI can make our lives easier and more fulfilling. AI can also contain undue biases, algorithmically make discriminatory choices, be toxic, and be used for evil purposes.

That's the dual-use precept of AI.

Responsible AI refers to the notion that the makers of AI and also those firms making use of AI are asked to build and deploy AI in responsible ways. We are to hold their feet to the fire if they devise or adopt AI that has untoward outcomes. They cannot just wave their arms and proclaim that the AI did it. Many try this as a means of escaping their responsibility and liability. Various codes of ethics associated with AI are supposed to be used by firms as guidance toward producing and using AI in suitable ways. Likewise, new laws regarding AI are intended to similarly keep the development and adoption of AI on the up and up, see my analysis at the link here.

As an example of AI ethics, you might find of interest that the United Nations entity UNESCO passed a set of ethical AI principles that encompassed numerous precepts and was approved by nearly 200 countries (see my coverage details at the link here). A typical set of AI ethics includes these pronouncements:

Not all AI makers are embracing AI ethics.

Some AI makers will say that they earnestly believe in AI ethics, and yet act in ways that suggest the claim is merely a wink-wink.

Right now, the AI field is a mixed bag when it comes to AI ethics. A firm might decide to get fully engaged in and immersed in AI ethics. This hopefully becomes a permanent intent. That being said, the chances are that the commitment will potentially wane. If something shocks the firm into realizing that they have perhaps dropped the ball on AI ethics, a resurgence of interest often subsequently occurs. I have described this as the roller coaster ride of AI ethics in firms.

The adoption of AI ethics by AI makers is like a box of chocolates. You never know what they will pick and choose, nor how long it will last. There are specialists nowadays who are versed in AI ethics, and they fervently try to get AI makers and firms that adopt AI to be mindful of abiding by ethical AI principles. It is a tough job. For my discussion of the role of AI ethics committees in companies and the ins and outs of being an AI ethicist, see my coverage at the link here and the link here.

The emergence of AI ethics and Responsible AI will be instrumental to possibly solving this mystery surrounding the OpenAI board and the CEO.

Let's keep pushing ahead.

Transparency Is A Key AI Ethics Principle

You might have noticed in the above list of AI ethics principles that AI should be devised to be transparent.

Here's what that means.

When an AI maker builds and releases an AI app, they are supposed to be transparent about what the AI does. They should identify the limitations of the AI. There should be stated indications about the right ways to use AI. Guidelines should be provided that express what will happen if the AI is misused. Some of this can be very technical in its depictions, while some of it is more of a narrative and a wordy exposition about the AI.

An AI maker might decide that they are going to be fully transparent and showcase everything they can about their AI app. A difficulty though is that if the AI consists of proprietary aspects, the AI maker is going to want to protect their Intellectual Property (IP) rights and ergo be cautious in what they reveal. Another concern is that perhaps revealing too much will enable evildoers to readily shift or modify the AI into doing bad things. This is a conundrum in its own right.

Research on AI has been exploring the range and depth of materials and portions of an AI app that might be viably disclosed as part of the desire to achieve transparency. An ongoing debate is taking place on what makes sense to do. Some favor tremendous transparency, others balk at this and insist that there should be reasonable boundaries established.

As an example of research on AI-related transparency, consider this research paper that proposes six levels of access to generative AI systems (excerpts shown):

I trust you can discern that transparency is a valuable way of trying to safeguard society.

If AI apps are wantonly thrown into the hands of the public in a cloaked or undisclosed manner, there is a danger for those who use the AI. They might use the AI in ways that were not intended, yet they didn't know what the proper use consisted of, to begin with. The hope is that transparency will allow all eyes to scrutinize the AI and be ready to either use the AI in appropriate ways or be alerted that the AI might have rough edges or be turned into adverse uses. The wisdom of the crowd might aid in mitigating the potential downsides of newly released AI.

Make sure to keep the importance of AI transparency in mind as I proceed further in this elucidation.

Timeline Of OpenAI Releases

I'd like to share with you a quick history tracing of the generative AI products of OpenAI, which will handily impart more noteworthy clues.

You certainly already know about ChatGPT, the generative AI flagship of OpenAI. You might also be aware that OpenAI has a more advanced generative AI app known as GPT-4. Those of you who were deep into the AI field before the release of ChatGPT might further know that before ChatGPT there was GPT-3, GPT-2, and GPT-1. ChatGPT is often referred to as GPT-3.5.

Here's a recap of the chronology of the GPT series (I am using the years to indicate roughly when each version was made available): GPT-1 (2018), GPT-2 (2019), GPT-3 (2020), ChatGPT, also known as GPT-3.5 (2022), and GPT-4 (2023).

I realize the above chronology might not seem significant.

Maybe we can pull a rabbit out of a hat with it.

Let's move on and see.

Race To The Bottom Is A Bad Thing

Shift gears and consider again the importance of transparency when it comes to releasing AI.

If an AI maker opts to stridently abide by transparency, this might motivate other AI makers to do likewise. An upward trend of savoring transparency will especially be the case if an AI maker is a big-time AI maker and not just one of the zillions of one-offs. In that way of thinking, the big-time AI makers could be construed as leading role models. They tend to set the baseline for what is considered marketplace-suitable transparency.

Suppose though that a prominent AI maker decides not to be quite so transparent. The chances are that other AI makers will decide they might as well slide downward too. No sense in staying at the top if the signaling by some comparable AI maker suggests that transparency can be shirked, or corners can be cut.

Imagine that this happens repeatedly. Inch by inch, each AI maker is responding to the others by also reducing the transparency they are providing. Regrettably, this is going to become one of those classic and dubious races to the bottom. The odds are that the downward slippery slope is ultimately going to hit rock bottom. Perhaps little or no transparency will end up prevailing.

A sad face outcome, for sure.

The AI makers are essentially sending signals to the marketplace by how much they each embrace transparency. Transparency is a combination of what an AI maker says they intend to do and also what they in reality do. Once an AI app is released, the reality becomes evident quite quickly. The materials and elements can be judged according to their level of transparency, ranging from marginally transparent to robustly transparent.

Based on the signaling and the actual release of an AI app, the rest of the AI makers will then likely react accordingly when they do their next respective AI releases. Each will opt to adjust based on what their peers opt to do. This doesn't necessarily have to go to the bottom. It is possible that a turn might occur, and the race proceeds upward again. Or maybe some decide to go down while others are going up, and vice versa.

By and large, the rule of thumb though is that they tend to act in the proverbial birds-of-a-feather-flock-together mode.

I assume that you readily grasp the overall gist of this signaling and market movement phenomena. Thus, lets now look at a particularly interesting and relevant AI research paper that describes the signaling that often takes place by AI makers.

I will be providing excerpts from a paper entitled "Decoding Intentions: Artificial Intelligence and Costly Signals," by Andrew Imbrie, Owen J. Daniels, and Helen Toner, Center for Security and Emerging Technology (CSET), October 2023. The co-authors provide keen insights and have impressive credentials as stated in the research paper at the time of its publication in October 2023:

The paper has a lot to say about signals and AI and provides several insightful case studies.

First, the research paper mentions that AI-related signals to the marketplace are worthy of attention and should be closely studied and considered:

An in-depth discussion in the paper about the veritable race-to-the-bottom exemplifies my earlier points and covers another AI ethics principle underlying reliability:

Among the case studies presented in the paper, one case study was focused on OpenAI. This is handy since one of the co-authors, as noted above, was on the board of OpenAI at the time and likely was able to provide especially valuable insights for the case study depiction.

According to the paper, GPT-2 was a hallmark in establishing an inspiring baseline for transparency:

Furthermore, the paper indicates that GPT-4 was also a stellar baseline for generative AI releases:

The paper indicates that the release of ChatGPT was not in the same baseline league and notes that perhaps the release of the later-on GPT-4 was in a sense tainted or less heralded as a result of what occurred with the ChatGPT release that preceded GPT-4's release:

Based on the case study in the paper, one might suggest that the chronology for selected instances of the GPT releases has this intonation:

That's the last of the clues, and we can start to assemble the confounding puzzle.

The Final Straw Rather Than The Big Bang

You now have in your hands a set of circuitous clues for a potential puzzle piece assembling theory that explains the mystery of why the CEO of OpenAI was fired by the board. Whether this theory is what actually occurred is a toss-up. Other theories are possible and this particular one might not hold water. Time will tell.

I shall preface the elicitation with another Sherlock Holmes notable quote: As a rule, the more bizarre a thing is, the less mysterious it proves to be.

Read more:

That OpenAI Mind-Boggler Whether Mysterious Q* Led To The Unexplained Firing Of The CEO And How Artificial General Intelligence (AGI) Might Have...

Read More..

The Double-Edged Sword of Advanced Artificial Intelligence – Medium

The Pros and Cons of Developing Mega-Intelligent Self-Thinking AI

Artificial General Intelligence (AGI) stands at the forefront of technological innovation, promising a revolution that could reshape our world. However, as we explore the potential of AGI, it becomes imperative to weigh its pros and cons, understanding both its promise and the dangers it presents.

AGI boasts unparalleled problem-solving abilities, potentially unlocking solutions to humanity's most pressing challenges. Its advanced cognitive functions could lead to breakthroughs in healthcare and environmental conservation.

The fusion of Quantum Computing and AGI, as explored in "A Future Beyond Limits," opens the door to revolutionary discoveries. From cleaner energy sources to advancements in materials science and space exploration, the potential benefits are vast and transformative.

AGI has the potential to spur economic development, especially in regions with limited access to energy resources. Fostering global equity could contribute to a more inclusive and interconnected world.

One of the significant concerns with AGI is the unpredictability of its decision-making processes. As discussed in "The Dangers of AI Singularity," it is difficult to predict and control its actions because we do not fully understand how it learns and makes decisions.

Go here to see the original:

The Double-Edged Sword of Advanced Artificial Intelligence - Medium

Read More..

Artificial General Intelligence (AGI): The Tightrope Walk Between Mastery and Catastrophe – Medium

In the accelerating race to develop Artificial General Intelligence (AGI), humanity stands at a crossroads that could lead to unprecedented global prosperity or the brink of disaster. The desire to create a superintelligence and the ambition to maintain absolute control over it are increasingly recognized as one of the most dangerous ventures in human history.

The Allure of AGI: A Double-Edged Sword

The allure of AGI lies in its potential to surpass human intelligence, offering solutions to the world's most intricate problems. From climate change to complex medical research, the promise of AGI heralds a new era of global prosperity. However, if harnessed and controlled by a select few, this power poses a grave risk. The possibility of governments or corporations weaponizing AGI could lead to imbalances and conflicts far beyond our current geopolitical struggles.

The Dangers of Concentrated Control

The central concern revolves around the control of such a superintelligent entity. If AGI falls into the hands of a limited group of people, the power imbalance could be extreme. This small group could wield unprecedented influence, manipulating economies, swaying political landscapes, and controlling military power. Concentrating such capabilities in the hands of the few is a recipe for global instability and conflict.

The Ethical Quandary

Moreover, the ethical implications of controlling a superintelligent entity raise profound questions. How do we ensure that the goals of AGI align with the broader interests of humanity? The fear is that in the quest for control, the core values and ethical considerations that should guide AGI development might be overshadowed by military, strategic, and economic objectives.

Collaboration Instead of Control: A Path to Prosperity

The potential of AGI to contribute positively to humanity is likely to be realized not through control but through collaboration. The key to unlocking its full potential is to envision a future where AGI works alongside humans and solves global challenges. This approach necessitates a paradigm shift from mastery over AGI to

Read more from the original source:

Artificial General Intelligence (AGI): The Tightrope Walk Between Mastery and Catastrophe - Medium

Read More..

Understanding AI, ASI, and AGI: Shaping the Future of Humanity – Medium

Artificial Intelligence (AI) has become a transformative force in our world, but its evolution extends beyond the boundaries of conventional understanding. Within this realm, we encounter the concepts of Artificial Superintelligence (ASI) and Artificial General Intelligence (AGI), each holding profound implications for the future of humanity.

AI refers to the simulation of human intelligence processes by machines, enabling them to learn, reason, and perform tasks that typically require human intelligence. From voice assistants to recommendation algorithms and autonomous vehicles, AI has permeated various aspects of our lives, reshaping industries and augmenting human capabilities.

AGI represents the next leap in AI evolution. Unlike narrow AI, which excels in specific tasks, AGI aims for human-like general intelligence. It would possess the ability to understand, learn, adapt, and apply knowledge across diverse domains, a level of cognition akin to that of human beings. Achieving AGI involves creating systems capable of abstract thinking, problem-solving, and learning in varied contexts.

ASI takes the trajectory of intelligence further. It surpasses human intelligence across all domains and tasks, a level of intellect that could potentially far exceed the collective cognitive abilities of humanity. ASI would not only be astoundingly intelligent but also possess the capacity for recursive self-improvement, allowing it to continually enhance its own capabilities at an exponential rate.

The advent of AGI and ASI poses unprecedented opportunities and challenges. The proliferation of AGI could revolutionize industries, unlocking innovations in healthcare, education, science, and beyond

Excerpt from:

Understanding AI, ASI, and AGI: Shaping the Future of Humanity - Medium

Read More..

Replacing Humanity: The Impact of Artificial Intelligence | by It’s Jack | Dec, 2023 – Medium

Hey, it's Jack. I know. I'm late to the party. By now, most of you have probably heard about ChatGPT, Dall-E, and deepfakes. If you haven't, congratulations on waking up from your year-long coma. The year is 2023, and everyone thinks the AI apocalypse is just around the corner.

Now, for those of you who have been keeping up with the news, you might have heard that Sam Altman was removed as the CEO of OpenAI (and promptly reinstated with help from Microsoft). The rumor is that Sam was removed due to the board's concern over major advances toward AGI (artificial general intelligence) and Sam's uncandid communication with the board. The source of the rumor is a letter sent by Mira Murati (OpenAI's CTO) citing her concerns over OpenAI's progress on project Q* (pronounced Q-Star).

OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say | Reuters

However, despite all this drama, I think there are a few questions that you and I should be able to answer.

If you do a quick search, you will find thousands of articles and videos from various media and influencers discussing the impact of AI. However, most of the impact can be summarized into three categories: content creation, jobs, and R&D (research and development).

Content Creation: The barriers to entry have never been lower for new content creators, such as me! My background is in STEM (science, technology, engineering, and mathematics). I do not have any degree or experience in content creation. However, I do have the interest and knowledge to leverage AI for my benefit. In fact, all the images used for this article were generated by DALL-E, OpenAI's image generator. I also used ChatGPT-4, OpenAI's leading LLM (large language model), to help brainstorm and edit this article. However, there are limits and concerns with this method. I address some of them down

Read more here:

Replacing Humanity: The Impact of Artificial Intelligence | by It's Jack | Dec, 2023 - Medium

Read More..