That OpenAI Mind-Boggler Whether Mysterious Q* Led To The Unexplained Firing Of The CEO And How Artificial General Intelligence (AGI) Might Have…


In today's column, I am doing a follow-up to my recent analysis, which went viral, of the mysterious Q* that OpenAI has apparently devised (see the link here), and I will be pressing forward to explore yet another presumably allied conundrum, namely what led to or brought about the firing and then rehiring of the CEO of OpenAI.

That pressing question about what really went down regarding the executive-level twists and turns at OpenAI seems to be a top-notch best-kept secret of Fort Knox quality. OpenAI and the parties involved in the situation are amazingly tight-lipped. The world at large seems to know only broadly what transpired, but not why it happened. Meanwhile, all manner of wildly concocted speculation has entered the vacuum created by not having anyone on the inside opt to spill the beans.

Get yourself mentally ready for a sharp bit of puzzle-piece arranging and a slew of reasoned conjecture.

Please join me in a Sherlock Holmes-style examination of how a scarce set of clues might be pieced together to make an educated guess at how the matter arose. We will wander through a range of topics such as AI, Artificial General Intelligence (AGI), Responsible AI and AI ethics, business organizational dynamics and marketplace signaling, the mysterious Q*, governing board dynamics, C-suite positioning, etc. I aim to proceed in a sensible and reasoned manner, seeking to connect the sparse dots, and aspire to arrive at something of a satisfying or at least informative result.

Some readers might recognize that I am once again invoking the investigative prowess of Sherlock Holmes, as I did in my prior analysis. I believe that once again putting on the daunting detective cap and lugging around the vaunted clue-inspecting magnifying glass is a notably fruitful endeavor.

As Sherlock was known to have stated, we need to proceed on each mystery by abiding by this crucial rule: "To begin at the beginning."

Let's therefore begin at the beginning.

Essential Facts Of The Mysterious Case

You undoubtedly know from the massive media coverage of the last several weeks that the CEO of OpenAI, Sam Altman, was let go by the board of OpenAI and subsequently, after much handwringing and machinations, he has rejoined OpenAI. The board has been recomposed and will purportedly be undergoing additional changes. OpenAI has stated that an independent review of the varied circumstances will be undertaken, though no timeline has been given, nor any indication of whether or to what degree the review will be made publicly available.

The basis for this seemingly earth-shattering firing-rehiring circumstance remains elusive and ostensibly unknown (well, a small cohort of insiders must know).

I say earth-shattering for several cogent reasons. First, OpenAI has become a household name due to being the company that makes ChatGPT. ChatGPT was released to the public a year ago and reportedly has 100 million active weekly users at this time. The use of generative AI has skyrocketed and become an ongoing focus in our daily lives. Sam Altman became a ubiquitous figurehead for the AI field and has been the constant go-to for quotes and remarks about where AI is heading.

From all outward appearances, there hasn't seemed to be anything that the CEO said or did on the public stage that would warrant the rather serious action of suddenly and unexpectedly firing him. We might understand such an abrupt action if there were some ongoing gaffes or outlandish steps that precipitated the harsh disengagement. None seems to be on the docket. The firing appears to have come totally out of the blue.

Another consideration is that a straying CEO might be taken down a peg or two if they are somehow misrepresenting a firm or otherwise going beyond an acceptable range of behavior. Perhaps a board might give the CEO a forewarning wake-up call, and word of this often leaks to the outside world. Everyone at that juncture kind of realizes that the CEO is on thin ice. That didn't happen in this case.

The bottom line here is that a widely known spokesperson and luminary in the AI arena was, without any apparent provocation, tossed out of the company that he co-founded. Naturally, the expectation all told would be that an ironclad reason and an equally solid explanation would go hand in hand with the severity of this startling turn of events. None has been stipulated per se, other than some vague statements, which I will address next.

We need to see what clues to this mystery might exist and try to piece them together.

The Blog That Shocked The World

First, per the official OpenAI blog site and a posting on the fateful date of November 17, 2023, entitled "OpenAI Announces Leadership Transition," we have this stated narrative (excerpted):
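
"Mr. Altman's departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI."

"OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity."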

I shall delicately parse the above official communique excerpts.

According to the narrative, the stated basis for the undertaken firing is that the CEO was "not consistently candid in his communications with the board."

Mark that on your bingo card as "not consistently candid."

An additional takeaway, though somewhat more speculative, involves the line that the firm was structured to "ensure that artificial general intelligence benefits all humanity." Some have suggested that perhaps the lack of candidness refers to the notion of ensuring that artificial general intelligence benefits all humanity.

These are our two potential clues at this juncture of this analysis:
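
- The CEO was said to be "not consistently candid in his communications with the board."
- The firm is structured to "ensure that artificial general intelligence benefits all humanity."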

Okay, with those seemingly independent clues, let's leverage the prevailing scuttlebutt amidst social media chatter and opt to tie those two elements directly together.

Before making that leap, I think it wise to mention that it could be that those two aspects have nothing to do with each other. Maybe we are combining two clues that are not in the same boat. Down the road, if the mystery is ever truly revealed, we will in hindsight presumably learn whether they are mates or not. Just keep that caveat in mind, thanks.

One other thing to note is that the blog makes a rather stark reference to artificial general intelligence, which is commonly referred to as AGI, and potentially has great importance here. In case you don't already know, AGI is the type of AI that we believe someday will be somehow attained and will be on par with human intelligence (possibly even surpassing humans and becoming superintelligent). We aren't there yet, despite those blaring headlines that suggest otherwise. There are grave concerns that AGI is going to be an existential risk, potentially enslaving or wiping out humankind, see my discussion at the link here. Another perspective of a more happy-face nature is that maybe AGI will enable us to cure cancer and aid in ensuring the survival and thriving of humanity, see my analysis at the link here.

My reason for emphasizing that we are discussing AGI is that you could assert that AGI is extremely serious stuff. Given that AGI is supposedly going to either destroy us all or perhaps lift us to greater heights than we ever imagined, we are dealing with something more so than the everyday kind of AI that we have today. Our conventional daily encounters with AI-based systems are extremely tame in comparison to what is presumably going to happen once we arrive at AGI (assuming we ultimately do).

The stakes with AGI are sky-high.

Let's brazenly suggest that the issue of candidness concerns AGI. If so, this is a big deal because AGI is a big deal. I trust that you can clearly see why tensions might mount. Anything to do with the destruction of humanity or the heralded uplifting of humanity is undoubtedly going to get some hefty attention. This is the whole can of worms on the table.

Perhaps the CEO was perceived by the board, or some portion of the board, as not being entirely candid about AGI. It could be that the perception was that the CEO was less than fully candid about a presumed AGI that might be in hand or an AI breakthrough that was on the path to AGI. Those board members might have heard about the alleged AGI or path to AGI from other sources within the firm and been surprised and dismayed that the CEO had not apprised them of the vital matter.

What nuance or consideration about AGI would likely be at issue for the OpenAI board in terms of their CEO?

One possible answer sits at the feet of the mysterious Q*. As I discussed in my prior column that covered Q*, see the link here, some have speculated that a kind of AI breakthrough is exhibited in an AI app called Q* at OpenAI. We don't know yet what it is, nor if the mysterious Q* even exists. Nonetheless, let's suppose that within OpenAI there is an AI app known as Q* and that it was believed at the time to be either AGI or on the path to AGI.

Thus, we might indeed have the aura of AGI in the middle of this, as showcased by Q*. Keep in mind that there doesn't have to be an actual AGI or even a path-to-AGI involved. The perception that Q* is or might be an AGI or on the path to AGI is sufficient in this instance. Perceptions are key. I will say more about this shortly.

An initial marketplace reaction to the firing of the CEO was that there must have been some kind of major financial or akin impropriety for the board to take such a radical step. It seems hard to imagine that merely being less than candid about some piece of AI software could rise to an astoundingly dramatic and public firing.

According to reporting in the media by Axios, we can apparently take malfeasance out of this picture:
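
"We can say definitively that the board's decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board."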

You might be wondering what the norm is for CEOs getting booted. CEOs are usually bounced out due to malfeasance of one kind or another, or they are steadfastly shoved out because they either exhibited poor leadership or failed to suitably communicate with the board. In this instance, the clues appear to aim primarily toward the communications factor and perhaps edge slightly into the leadership category.

What Goes On With Boards

I'd like to briefly bring you up to speed about boards in general. Doing so is essential to the mystery at hand.

In my many years of serving in the C-suite as a top-level tech executive, I've had lots of experience interacting with boards. A few insightful tidbits might be pertinent to bring up here. I'll for now speak in general terms.

A board of directors is supposed to oversee and advise a company, including being kept informed by the CEO and gauging whether the CEO is doing a dutiful job in that vaunted role. The board serves as a check and balance regarding what the CEO is doing. This is an important body, and its members are legally bound to perform their fiduciary duties.

The composition of a board varies from firm to firm. Sometimes the board members see everything eye-to-eye and wholeheartedly agree with each other. Other times, the board members are split as to what they perceive is occurring at the firm. You might think of this as similar to the U.S. Supreme Court, namely, we all realize that some of the justices will perceive things one way while others of the court will see things another way. Votes on particular issues can swing from everyone being in agreement to having some vote for something and others voting against it.

A typical board is set up to deal with splintered voting. For example, a board might have, say, seven members, and if they don't see eye-to-eye on a proposed action, the majority will prevail in a vote. Suppose a vote is taken and three members are in favor of some stipulated action, while three other members are opposed to the proposed action. The swing vote of the seventh member will then decide which way the matter goes.
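
To make the arithmetic of that seven-member example concrete, here is a minimal sketch in Python of a simple majority vote with a swing member; the member names, the motion, and the tally logic are purely hypothetical illustrations rather than anything specific to the OpenAI board.

```python
# A minimal sketch of the majority-rule arithmetic described above.
# The member names and the motion are hypothetical illustrations,
# not anything specific to the OpenAI board.

def tally_board_vote(votes: dict[str, bool]) -> str:
    """Return the outcome of a simple majority vote.

    `votes` maps each board member to True (in favor) or False (opposed).
    """
    in_favor = sum(votes.values())
    opposed = len(votes) - in_favor
    return "motion passes" if in_favor > opposed else "motion fails"

# Six members are deadlocked three-to-three on a proposed action.
votes = {
    "member_1": True, "member_2": True, "member_3": True,
    "member_4": False, "member_5": False, "member_6": False,
}

# The seventh member is the swing vote and single-handedly decides the matter.
votes["member_7"] = True
print(tally_board_vote(votes))  # -> motion passes
```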

In that sense, there is often behind-the-scenes lobbying that takes place. If the board already realizes that a contested three versus three tie is coming up, the odds are that the seventh tiebreaker will get an earful from both sides of the issue. There can be tremendous pressure on the seventh member. Compelling and convincing arguments are bound to be conveyed by both sides of the contentious topic.

It is possible that in the heat of battle, so to speak, a board member in that tie-breaking predicament will base their vote on what they believe to be right at the time of the vote. Later on, perhaps hours or days hence, it is conceivable that in hindsight the tiebreaker might realize that they inadvertently voted in a manner they now regret. They wish to recant their vote, but it is usually water already under the bridge, and there isn't a means to remake history. The vote was cast when it was cast. They will have to live with the decision they made at the time of the fracas.

This is going to be helpful food for thought and will be worth remembering later on during this puzzle-solving process.

Responsible AI And AI Ethics

We are going to take a seemingly offshoot path here for a little bit and will come back around to the topic of the board and the CEO. I pledge that this path into the trees of the forest will serve a useful purpose.

Sherlock Holmes was a keen observer of clues that seemed outside the purview of a mystery and yet turned out to be quite vital to solving the mystery. His famous line was this: "It has long been an axiom of mine that the little things are infinitely the most important."

Time to invoke that principle.

Hang in there as I lay the groundwork for what will come up next.

I would like to bring into this matter the significance of what is known as Responsible AI and the rising interest in AI ethics and AI law. I've covered the importance of AI ethics and AI law extensively, including at the link here and the link here, just to name a few. The tsunami of AI being rushed out into society and becoming pervasive in our lives has a lot of good to offer but also a lot of rottenness. Today's AI can make our lives easier and more fulfilling. AI can also contain undue biases, algorithmically make discriminatory choices, be toxic, and be used for evil purposes.

That's the dual-use precept of AI.

Responsible AI refers to the notion that the makers of AI and also those firms making use of AI are asked to build and deploy AI in responsible ways. We are to hold their feet to the fire if they devise or adopt AI that has untoward outcomes. They cannot just wave their arms and proclaim that the AI did it. Many try this as a means of escaping their responsibility and liability. Various codes of ethics associated with AI are supposed to be used by firms as guidance toward producing and using AI in suitable ways. Likewise, new laws regarding AI are intended to similarly keep the development and adoption of AI on the up and up, see my analysis at the link here.

As an example of AI ethics, you might find it of interest that the United Nations entity UNESCO passed a set of ethical AI principles that encompassed numerous precepts and was approved by nearly 200 countries (see my coverage details at the link here). A typical set of AI ethics includes these pronouncements:
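
- Transparency
- Justice and fairness
- Non-maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom and autonomy
- Trust
- Sustainability
- Dignity
- Solidarity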

Not all AI makers are embracing AI ethics.

Some AI makers will say that they earnestly believe in AI ethics, and yet act in ways that suggest the claim is merely a wink-wink.

Right now, the AI field is a mixed bag when it comes to AI ethics. A firm might decide to get fully engaged in and immersed in AI ethics. This hopefully becomes a permanent intent. That being said, the chances are that the commitment will wane. If something shocks the firm into realizing that they have perhaps dropped the ball on AI ethics, a resurgence of interest often subsequently occurs. I have described this as the roller coaster ride of AI ethics in firms.

The adoption of AI ethics by AI makers is like a box of chocolates. You never know what they will pick and choose, nor how long it will last. There are specialists nowadays who are versed in AI ethics, and they fervently try to get AI makers and firms that adopt AI to be mindful of abiding by ethical AI principles. It is a tough job. For my discussion of the role of AI ethics committees in companies and the ins and outs of being an AI ethicist, see my coverage at the link here and the link here.

The emergence of AI ethics and Responsible AI will be instrumental to possibly solving this mystery surrounding the OpenAI board and the CEO.

Let's keep pushing ahead.

Transparency Is A Key AI Ethics Principle

You might have noticed in the above list of AI ethics principles that AI should be devised to be transparent.

Here's what that means.

When an AI maker builds and releases an AI app, they are supposed to be transparent about what the AI does. They should identify the limitations of the AI. There should be stated indications about the right ways to use AI. Guidelines should be provided that express what will happen if the AI is misused. Some of this can be very technical in its depictions, while some of it is more of a narrative and a wordy exposition about the AI.
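
To make this concrete, here is a minimal sketch of how such a transparency disclosure might be captured in structured form, loosely in the spirit of the well-known model-card idea; every field name and value is an illustrative assumption rather than any actual AI maker's format.

```python
# A hypothetical transparency disclosure in structured form, loosely in the
# spirit of the well-known "model card" idea. Every field name and value is
# an illustrative assumption, not any actual AI maker's format.
disclosure = {
    "model_name": "ExampleGPT",  # hypothetical AI app
    "intended_uses": ["drafting text", "summarizing documents"],
    "out_of_scope_uses": ["medical advice", "legal advice"],
    "known_limitations": ["can produce plausible-sounding errors"],
    "misuse_guidance": "Generated output should be reviewed by a human "
                       "before being relied upon or published.",
    "technical_notes": "Evaluation results and red-teaming findings "
                       "would be summarized here.",
}

for field, value in disclosure.items():
    print(f"{field}: {value}")
```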

An AI maker might decide that they are going to be fully transparent and showcase everything they can about their AI app. A difficulty, though, is that if the AI consists of proprietary aspects, the AI maker is going to want to protect their Intellectual Property (IP) rights and ergo be cautious in what they reveal. Another concern is that perhaps revealing too much will enable evildoers to readily shift or modify the AI into doing bad things. This is a conundrum in its own right.

Research on AI has been exploring the range and depth of materials and portions of an AI app that might be viably disclosed as part of the desire to achieve transparency. An ongoing debate is taking place on what makes sense to do. Some favor tremendous transparency, others balk at this and insist that there should be reasonable boundaries established.

As an example of research on AI-related transparency, consider this research paper that proposes six levels of access to generative AI systems (excerpts shown):

I trust you can discern that transparency is a valuable way of trying to safeguard society.

If AI apps are wantonly thrown into the hands of the public in a cloaked or undisclosed manner, there is a danger for those who use the AI. They might use the AI in ways that were not intended, not knowing to begin with what proper use consisted of. The hope is that transparency will allow all eyes to scrutinize the AI and be ready to either use the AI in appropriate ways or be alerted that the AI might have rough edges or be turned toward adverse uses. The wisdom of the crowd might aid in mitigating the potential downsides of newly released AI.

Make sure to keep the importance of AI transparency in mind as I proceed further in this elucidation.

Timeline Of OpenAI Releases

I'd like to share with you a quick history tracing of the generative AI products of OpenAI, which will handily impart more noteworthy clues.

You certainly already know about ChatGPT, the generative AI flagship of OpenAI. You might also be aware that OpenAI has a more advanced generative AI app known as GPT-4. Those of you who were deep into the AI field before the release of ChatGPT might further know that before ChatGPT there was GPT-3, GPT-2, and GPT-1. ChatGPT is often referred to as GPT-3.5.

Here's a recap of the chronology of the GPT series (I am using the years to indicate roughly when each version was made available):
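
- GPT-1 (2018)
- GPT-2 (2019)
- GPT-3 (2020)
- GPT-3.5, the basis for ChatGPT (2022)
- GPT-4 (2023)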

I realize the above chronology might not seem significant.

Maybe we can pull a rabbit out of a hat with it.

Let's move on and see.

Race To The Bottom Is A Bad Thing

Shift gears and consider again the importance of transparency when it comes to releasing AI.

If an AI maker opts to stridently abide by transparency, this might motivate other AI makers to do likewise. An upward trend of savoring transparency is especially likely if the AI maker is a big-time AI maker and not just one of the zillions of one-offs. In that way of thinking, the big-time AI makers can be construed as leading role models. They tend to set the baseline for what is considered marketplace-suitable transparency.

Suppose though that a prominent AI maker decides not to be quite so transparent. The chances are that other AI makers will decide they might as well slide downward too. No sense in staying at the top if the signaling by some comparable AI maker suggests that transparency can be shirked, or corners can be cut.

Imagine that this happens repeatedly. Inch by inch, each AI maker responds to the others by also reducing the transparency they provide. Regrettably, this becomes one of those classic and dubious races to the bottom. The odds are that the downward slippery slope will ultimately hit rock bottom. Perhaps little or no transparency will end up prevailing.

A sad face outcome, for sure.

The AI makers are essentially sending signals to the marketplace by how much they each embrace transparency. Transparency is a combination of what an AI maker says they intend to do and also what they in reality do. Once an AI app is released, the reality becomes evident quite quickly. The materials and elements can be judged according to their level of transparency, ranging from marginally transparent to robustly transparent.

Based on the signaling and the actual release of an AI app, the rest of the AI makers will then likely react accordingly when they do their next respective AI releases. Each will opt to adjust based on what their peers opt to do. This doesn't necessarily have to go to the bottom. It is possible that a turn might occur, and the race proceeds upward again. Or maybe some decide to go down while others are going up.

By and large, the rule of thumb though is that they tend to act in the proverbial birds-of-a-feather-flock-together mode.
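
As an illustrative toy model of that flocking behavior, here is a minimal sketch that simulates AI makers nudging their transparency levels toward the peer average while shaving a little off each round; all of the numbers (starting levels, imitation weight, corner-cutting bias) are assumptions for demonstration only, not measured data.

```python
# A toy simulation of the peer-following dynamic described above. Each AI
# maker drifts toward the average transparency of its peers and then shaves
# a little off. All numbers are illustrative assumptions, not measured data.
makers = {"maker_a": 0.9, "maker_b": 0.7, "maker_c": 0.5}  # transparency, 0 to 1
IMITATION = 0.5     # how strongly each maker drifts toward the peer average
CUT_CORNERS = 0.05  # per-round temptation to be a bit less transparent

for round_num in range(1, 6):
    updated = {}
    for name, level in makers.items():
        peer_avg = sum(v for k, v in makers.items() if k != name) / (len(makers) - 1)
        # Flock toward the peers, then cut a corner: the race to the bottom.
        updated[name] = max(0.0, level + IMITATION * (peer_avg - level) - CUT_CORNERS)
    makers = updated
    print(f"round {round_num}:", {k: round(v, 2) for k, v in makers.items()})
```

Flip the corner-cutting bias to a small bonus and the very same flocking produces a race upward instead, mirroring the point that the slide is not inevitable.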

I assume that you readily grasp the overall gist of this signaling and market-movement phenomenon. Thus, let's now look at a particularly interesting and relevant AI research paper that describes the signaling that often takes place among AI makers.

I will be providing excerpts from a paper entitled "Decoding Intentions: Artificial Intelligence And Costly Signals," by Andrew Imbrie, Owen J. Daniels, and Helen Toner, Center for Security and Emerging Technology (CSET), October 2023. The co-authors provide keen insights and had impressive credentials as stated in the research paper at the time of its publication:
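
Among those stated credentials, the one of particular note here is that co-author Helen Toner was, at the time of publication, a member of the OpenAI board of directors in addition to her role at CSET.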

The paper has a lot to say about signals and AI and provides several insightful case studies.

First, the research paper mentions that AI-related signals to the marketplace are worthy of attention and should be closely studied and considered:

An in-depth discussion in the paper about the veritable race-to-the-bottom exemplifies my earlier points and covers another AI ethics principle, namely reliability:

Among the case studies presented in the paper, one case study was focused on OpenAI. This is handy since one of the co-authors as noted above was on the board of OpenAI at the time and likely was able to provide especially valuable insights for the case study depiction.

According to the paper, GPT-2 was a hallmark in establishing an inspiring baseline for transparency:

Furthermore, the paper indicates that GPT-4 was also a stellar baseline for generative AI releases:

The paper indicates that the release of ChatGPT was not in the same baseline league and notes that perhaps the release of the later GPT-4 was in a sense tainted or less heralded as a result of the ChatGPT release that preceded it:

Based on the case study in the paper, one might suggest that the chronology for selected instances of the GPT releases carries this connotation:
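
- GPT-2 (2019): a heralded, staged release that set an inspiring baseline for transparency.
- ChatGPT (2022): a release widely seen as not being in that same baseline league.
- GPT-4 (2023): a return to a stellar release posture, though arguably overshadowed by the ChatGPT release that preceded it.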

That's the last of the clues, and we can start to assemble the confounding puzzle.

The Final Straw Rather Than The Big Bang

You now have in your hands a set of circuitous clues for a potential puzzle-piece-assembling theory that explains the mystery of why the CEO of OpenAI was fired by the board. Whether this theory is what actually occurred is a toss-up. Other theories are possible, and this particular one might not hold water. Time will tell.

I shall preface the explication with another notable Sherlock Holmes quote: "As a rule, the more bizarre a thing is, the less mysterious it proves to be."
