Category Archives: Artificial General Intelligence

China takes the lead in AI research, prompting concerns over global competition – Ynetnews

Between 2018 and 2022, Chinese researchers published approximately 850 studies in the field of artificial general intelligence (AGI), putting the Asian superpower at the forefront of the field's research and development (R&D) compared with the Western world, and specifically the United States.

These numbers were gathered by researchers from the Center for Security and Emerging Technology (CSET) at Georgetown University in the U.S. The researchers explain that China is on a path that challenges emerging norms worldwide, under which most countries are attempting to monitor and regulate research to prevent potential misuse in the future.

ChatGPT against the background of the Chinese flag (Photo: daily_creativity / Shutterstock.com)

The term "artificial general intelligence" refers to an AI system with human-level capabilities. The CSET researchers found that around 500 of the Chinese studies covered topics such as "practical AI applications" and "advanced technologies for AGI." In other words, China is actively engaging in the development of machines that can think like humans.

The American researchers say that there is no way to know what research is being conducted in China by companies or research institutions outside the capital, which may indicate that the Chinese are pursuing more advanced programs in other locations within the country. One of these locations may be Wuhan, the same city where the coronavirus outbreak originated, according to the researchers.

In any case, the main red flag is that China appears to be going well beyond general AI research, attempting to develop additional technologies such as human-machine interfaces that would enable direct control of computers and machines through the human brain, similar to the research conducted by Neuralink, founded by Elon Musk.

It is further explained that the Chinese use researchers outside of China, although it is unclear what exactly their contribution to the research is. Nevertheless, the bulk of the research is indisputably conducted in China. Additionally, since the Chinese government may conceal its less successful studies, the investigators may be led to believe that the scope of research is much smaller than it actually is. This is a problem because it could impact competition between the West (with the U.S. at the forefront) and China, creating a "missile gap" phenomenon: an overestimation of the adversary's capabilities in a certain field, which may lead to uncontrolled development in the West, even though China may be far from true capability or technological leadership.

The researchers conclude that attempts to broadly assess Chinese capabilities would drive those efforts into hiding, turning all research worldwide into a kind of secretive strategic asset. It would then be much more difficult to supervise research publicly, which could lead to unethical or flawed research. The researchers further recommend that American policymakers understand the strategic importance of AGI, as well as its limitations, while striving to be the first to obtain the technology. In other words, the U.S. should not arbitrarily restrict research in the West, but rather allow for some flexibility in the ethical, legal, societal, and moral rules governing those engaged in the field.

Link:

China takes the lead in AI research, prompting concerns over global competition - Ynetnews

Q-Star AI’s Astonishing Capabilities and CEO’s Dismissal: When Man Meets Machine – Medium

Illustration by the author with components created via Midjourney

The unveiling of Q-Star AI's astonishing capabilities and the unexpected departure of CEO Sam Altman have sent shockwaves through the AI community.

Q-Star AI has showcased remarkable mathematical prowess, rivaling the problem-solving abilities of elementary school students. Its ability to navigate complex mathematical challenges with definitive answers has sparked speculation about its potential as a significant stride towards achieving Artificial General Intelligence (AGI).

Furthermore, Q-Star AI's logical reasoning abilities have implications beyond the realm of mathematics, with potential applications in scientific research and complex decision-making processes.

However, the recent dismissal of CEO Sam Altman by the board of directors has added a layer of suspense to the unfolding saga. Altman, a key figure in OpenAI's journey, had hinted at breakthroughs in AGI and unveiled new tools.

As the drama surrounding Q-Star AI continues to captivate the AI community, the balance between its astonishing capabilities and the undisclosed risks it may pose to humanity remains a topic of intense interest and debate.

The pursuit of AGI stands at the forefront of innovation and risk assessment.

Q-Star AI's exceptional mathematical prowess surpasses that of elementary school students, showcasing its remarkable capabilities in solving complex mathematical challenges. With its unrivaled ability to navigate mathematical problems, Q-Star AI takes a significant stride towards achieving Artificial General Intelligence (AGI).

This astonishing feat holds profound implications for real-world problems, particularly in scientific research and complex decision-making processes. Q-Star AI's application of logical reasoning to decipher complex equations opens doors to new possibilities and advancements.

However, it is important to address the veiled risks associated with Q-Star AI. While it exhibits unprecedented cognitive abilities, concerns about potential risks to humanity cannot be ignored. This dichotomy of capabilities and risks adds complexity to the narrative surrounding Q-Star AI, making it a subject of ongoing fascination and anticipation within the AI community.

The recent unexpected departure of CEO Sam Altman further intensifies the drama, leaving the future of Q-Star AI and its groundbreaking breakthrough uncertain.

The application of Q-Star AI's logical reasoning and problem-solving abilities to real-world problems holds significant implications for various fields. Its remarkable capabilities in applying logical reasoning could revolutionize scientific research, enabling the deciphering of complex equations and the exploration of intricate phenomena.

Furthermore, Q-Star AI's reasoning abilities have the potential to aid in complex decision-making processes, providing valuable insights and optimizing outcomes in industries such as finance, healthcare, and logistics. By harnessing Q-Star AI's cognitive prowess, researchers and professionals can unlock new possibilities and achieve unprecedented levels of efficiency and accuracy.

However, it is crucial to approach these implications with caution, as the undisclosed risks associated with Q-Star AI's advancement must be carefully assessed and mitigated. Balancing the immense potential against the possible risks will be paramount in harnessing the power of Q-Star AI for the betterment of humanity.

A letter from OpenAI researchers to the board of directors hinted at both significant capabilities and potential risks of the groundbreaking Q-Star AI.

The veiled risks associated with Q-Star AI demand careful consideration, as they pose considerable threats to humanity's well-being and progress.

It is crucial to approach the development and deployment of Q-Star AI with a cautious and responsible mindset, ensuring that the benefits outweigh the potential risks.

Amidst the unfolding narrative of groundbreaking breakthroughs, the unexpected departure of CEO Sam Altman has added a layer of intrigue and uncertainty to the Q-Star AI saga.

Altman, a key figure in OpenAI's journey, was dismissed from his position by the board, leaving the AI community in anticipation. Altman's tenure was marked by the unveiling of new tools and hints at breakthroughs in Artificial General Intelligence (AGI).

His sudden exit raises questions about the future direction of Q-Star AI and the potential impact on its progress towards AGI. The departure of a prominent leader such as Altman introduces an element of suspense and casts a shadow of doubt over the path that Q-Star AI will now take.

Continuing the unfolding narrative of groundbreaking breakthroughs, the ongoing drama surrounding Q-Star AI continues to captivate the AI community. As anticipation builds, the power-hungry audience is eagerly awaiting the next chapter in this gripping saga.

Several elements contribute to the drama and anticipation surrounding Q-Star AI.

As the story unfolds, the AI community anxiously awaits further revelations, eager to witness the culmination of Q-Star AI's astonishing capabilities and the resolution of the ongoing drama.

The pursuit of Artificial General Intelligence (AGI) has become a paramount objective in the field of AI, as researchers and innovators strive to develop a machine that can mimic the cognitive capabilities of a human being.

The quest for AGI represents an unparalleled opportunity to wield immense power and influence over technological advancements. AGI holds the potential to revolutionize industries, solve complex problems, and unlock unprecedented levels of productivity. It is the ultimate manifestation of human intelligence, encapsulating the ability to reason, learn, and adapt in any given situation.

The pursuit of AGI is not only a testament to human ambition but also a testament to the desire for power and control over the future of technology. As researchers push the boundaries of AI, the race for AGI intensifies, with each breakthrough bringing us closer to a future where machines possess the cognitive prowess of human beings.

In conclusion, Q-Star AI's remarkable mathematical abilities and logical reasoning have generated significant excitement and anticipation in the AI community.

The unexpected departure of CEO Sam Altman adds a layer of intrigue to the unfolding story.

As the pursuit of Artificial General Intelligence continues, it is crucial to carefully consider the potential risks and implications of such advancements.

The balance between astonishing capabilities and undisclosed dangers remains a pertinent topic of debate and research.

Here is the original post:

Q-Star AI's Astonishing Capabilities and CEO's Dismissal: When Man Meets Machine - Medium

OpenAI’s Quest for AGI: Controversy, Leaks, and Q*’s Math Milestone – Medium

Illustration by the author with components created via Midjourney

OpenAI, a prominent player in the field of artificial intelligence research, is making waves in the tech industry with its purported breakthrough in achieving Artificial General Intelligence (AGI).

This claim initially surfaced when OpenAI's CEO, Sam Altman, hinted at the achievement in a Reddit post, only to later dismiss it as a misinterpreted joke. However, recent events, including Altman's departure, have cast doubt on his denial.

Internal documents leaked to the public shed light on a new model called Q*, which is believed to be a significant step towards AGI realization. Notably, Q* has demonstrated remarkable proficiency in basic math, capturing the attention of experts in the field.

As the world grapples with the implications and challenges of AGI, the developments surrounding Q* have sparked a sense of excitement and ignited discussions within OpenAI and the broader AI community.

The story begins when Sam Altman, CEO of OpenAI, makes a Reddit post suggesting the company has achieved AGI internally. However, he soon claims it was merely a joke, stirring controversy within OpenAI.

But then Altman suddenly departs from OpenAI under mysterious circumstances, casting suspicion on his initial denial.

Leaked documents reveal OpenAI scientists secretly writing to the board about a major AGI-related breakthrough involving a new model called Q*.

As speculation mounts, the AI community watches closely to see if OpenAI has truly reached an AGI milestone.

While simple calculators have been solving basic math since the 1950s, Q*'s proficiency marks new ground for an advanced model of its kind.

By demonstrating competency in fundamental reasoning and cognition, Q* has OpenAI excited about the potential for more complex AI achievements ahead.

Of course, whether or not Q*'s math breakthrough represents meaningful progress towards AGI remains hotly debated. But the implications motivate ongoing AI innovation at breakneck speed.

As a professor who studies the nature of intelligence itself, Smolensky offers insights that carry weight in comprehending milestones like Q*'s.

Within academic circles, his theories on cognition draw intense interest for their potential to elucidate AGI concepts.

So while the direct relevance of Smolensky's work to OpenAI's alleged breakthrough is still unclear, his contributions to understanding cognition could provide invaluable perspective as AGI capabilities advance.

According to leaked documents, OpenAI scientists wrote a secret letter to the board highlighting Q*'s unprecedented proficiency in basic math.

This shocked company leadership and triggered internal discussions about the pace of progress.

By breaking the story, Reuters spotlights the roiling debate over whether achievements like Q*'s represent incremental progress or an AGI watershed moment.

Altman's questionable denial of achieving AGI, followed by Q*'s flashy cognition gains, immerses OpenAI in uncertainty over the pace and implications of its own research.

If Q* proves to be a step towards advanced AGI, seismic societal and economic shifts could follow, requiring ethical foresight today.

Yet with OpenAI's leadership in apparent turmoil, uncertainty reigns over what is fact and what is fiction. Nevertheless, these events captivate AI experts awaiting the field's next awe-inspiring breakthrough.

OpenAI finds itself under the microscope as leaked documents reveal internal excitement over steps potentially approaching AGI via models like Q*.

With origins shrouded in controversy, Q*'s remarkable math abilities fuel speculation over what other impressive cognition feats it might achieve next.

As the world braces for AGI's impact, OpenAI's uncertain path forward commands the anxious attention of the AI community. But the breakthroughs already achieved will undoubtedly force a coming ethical and social reckoning.

More:

OpenAI's Quest for AGI: Controversy, Leaks, and Q*'s Math Milestone - Medium

Inaugural OpenAIS Symposium Successfully Wrapped-Up December 4-5, 2023 – StreetInsider.com

OpenAIS is a Community of Scientists, Academics, Business Professionals, Technology Companies, Policy Makers and Individuals Aiming to Combat Rising Safety Concerns with AI's Recent Advancements

BOCA RATON, Fla., Dec. 12, 2023 /PRNewswire/ -- The Inaugural OpenAIS Symposium was held as a two-day event at the Rubin Gruber Sandbox Conference Center, Florida Atlantic University, with the goals of fostering collaboration, knowledge sharing and innovation to mitigate potential risks associated with Artificial Intelligence, ensuring safety while maximizing its benefits for humanity.

Wendy Chin, Founder and CEO of PureCipher, stated, "The OpenAIS Community will pioneer 'Artificial Immune Systems' solutions that will be available as open source and are designed to build Trusted Artificial Intelligence. The community will collaborate on policies to ensure the responsible and ethical deployment of AI technologies with industry, government, academia, and technology researchers/developers. The goal of the OpenAIS community and the symposium is to ensure actions are taking place to support AI security and safety."

OpenAIS is powered by PureCipher, which spearheaded this symposium in partnership with SingularityNET, the MPCR (Machine Perception and Cognitive Robotics) laboratory at FAU (Florida Atlantic University), and the Center for the Future Mind. The symposium was supported by sponsorships from Lockton, the world's largest privately owned independent insurance broker, and SCA (Schwalb Corporate Advisory, a division of Schwalb Enterprises), a provider of corporate advisory services for mergers and acquisitions, capital strategies, operations management and business development.

Dr. William Hahn, Co-Founder and Chief Science Officer of PureCipher, Associate Professor at FAU, Founder of MPCR and Director of Center for the Future Mind, explained, "The importance of pioneering AI research focusing on Deep Neural Networks (DNNs), advancing algorithms and combining AI with robotics while investigating AI's role in cognitive sciences that explore cognitive evolution, continues to lead and drive the AI innovation, balancing cutting-edge research with ethical responsibility."

The event included keynotes, panels, breakout sessions and town hall discussions. The compelling content covered AI safety and AI advancement while minimizing risks, chatbot epistemology, global AI/AGI collaboration, DNN-based AI and security, as well as "how the digital technologies change the flow of politically relevant information." The speakers, panelists, moderators and attendees addressed and assigned calls to action to support AI security, safety, integrity and privacy protection. The action items will be tasked and followed through to ensure that steps are taken to implement AI safety, in addition to ongoing dialogues within the OpenAIS community. The OpenAIS founding members are happy to announce that the next OpenAIS Symposium will be held in the Silicon Valley Bay Area, California, in 2024. They welcome any interested parties to join the OpenAIS community to engage in this amazing journey.

About PureCipher:

PureCipher was founded by Wendy Chin, a serial entrepreneur and thought leader in Artificial Intelligence, Cybersecurity and Health Informatics. PureCipher empowers users to unlock the full potential of Artificial Intelligence with its robust portfolio of AI security technologies, including Universal Multiplex Watermarks, Artificial Immune Systems (AIS), Fully Homomorphic Encryption (FHE), and Noise-Based Computations (NBC). With its strong technical collaboration with SingularityNET and scientific partnership with the Machine Perception and Cognitive Robotics Laboratory (MPCRL) at Florida Atlantic University, PureCipher is building end-to-end Trusted AI Immune Systems to prevent cyberattacks posed by rapidly advancing AI developments. For more information, please visit the website: www.purecipher.com.

About SingularityNET:

SingularityNET was founded by Dr. Ben Goertzel with the mission of creating a decentralized, democratic, inclusive and beneficial Artificial General Intelligence (AGI): an AGI that is not dependent on any central entity, that is open to anyone and not restricted to the narrow goals of a single country. The core platform and AI teams are further complemented by specialized teams devoted to application areas such as finance, robotics, biomedical AI, media, arts and entertainment.

About MPCR (Machine Perception and Cognitive Robotics):

The MPCR lab is a hub for pioneering AI research. Housed in the Center for Complex Systems and Brain Sciences at FAU (Florida Atlantic University), it covers a wide range of AI topics, integrating various academic fields. Its focus is on decoding Deep Neural Networks (DNNs), advancing algorithms, and combining AI with robotics. In collaboration with the Center for the Future Mind, it investigates AI's role in cognitive sciences to explore cognitive evolution. MPCR continues to lead in AI innovation, balancing cutting-edge research with ethical responsibility.


SOURCE PureCipher

See the original post:

Inaugural OpenAIS Symposium Successfully Wrapped-Up December 4-5, 2023 - StreetInsider.com

Cracking The Code: Understanding The A-Z Basics Of Artificial Intelligence – Indiatimes.com

Artificial Intelligence (AI) refers to developing computer systems that can perform tasks that typically require human intelligence. These tasks include learning, reasoning, problem-solving, perception, speech recognition, and language understanding. AI aims to create machines that mimic human cognitive functions and adapt to different situations.


1. Ubiquity in Technology:

- AI is becoming increasingly integrated into various technologies and industries, from virtual assistants on our smartphones to advanced robotics in manufacturing. Understanding AI basics is crucial as it becomes a pervasive part of our daily lives.

2. Career Opportunities:


- Many industries are incorporating AI, creating a demand for professionals with AI skills. Learning AI basics can open up diverse career opportunities in data science, machine learning, and robotics.

3. Innovation and Problem-Solving:

- AI is a powerful tool for innovation and problem-solving. Knowing the basics allows individuals to leverage AI to develop solutions to complex challenges in diverse domains, including healthcare, finance, and transportation.

4. Data Analysis and Interpretation:


- With the growing volume of data, AI plays a crucial role in analyzing and interpreting information. Learning AI basics helps individuals make sense of large datasets, extract valuable insights, and make data-driven decisions.

5. Automation and Efficiency:

- AI enables automation, streamlining repetitive tasks and increasing efficiency. Understanding AI basics allows individuals to contribute to developing and implementing automated processes in various industries.

6. Enhanced User Experience:

- Many applications and services use AI to personalize user experiences. Whether it's content recommendations, voice recognition, or chatbots, AI enhances user interactions, making it essential for individuals to grasp the basics for creating user-centric technologies.

7. Ethical Considerations:

- As AI technologies advance, ethical considerations become more critical. Learning AI basics empowers individuals to engage in discussions about the ethical use of AI, ensuring that its deployment aligns with societal values and norms.

8. Global Impact:

- AI has the potential to address global challenges, including healthcare, climate change, and poverty. Understanding AI basics equips individuals to contribute to and engage in discussions around the responsible and beneficial use of AI globally.


A - Abductive Logic Programming (ALP):

- A knowledge-representation framework in AI that uses abductive reasoning principles to solve problems by seeking answers simply and straightforwardly.

B - Backward Chaining:

- A method where AI models start with a desired output and work backward to find supporting data.

C - Chatbot:

- A program designed for one-on-one conversations using natural language processing, mimicking human discussions.

D - Deep Learning:

- The process where an AI model imitates human brain functions, learning through structured data points.

F - Forward Chaining:

- A method where an AI model starts from known facts and applies rules to work forward toward a solution (a short illustrative sketch follows this glossary).

G - Artificial General Intelligence (AGI):

- A theoretical concept referring to AI systems that can surpass human cognitive abilities in various tasks.

H - Hyperparameter:

- Manually set values affecting how AI models learn.

I - AI Accelerator:

- A hardware chip or micro-processor designed for general-purpose AI applications, used for training models or in more extensive neural networks.

M - Machine Learning:

- A branch of AI focused on creating algorithms enabling AI models to learn and interact with new data without human involvement.

N - Neural Network:

- A computational model made of interconnected nodes, loosely modeled on the human brain, used for computations and AI model training.

NLG and NLP - Natural Language Generation and Processing:

- The ability of AI to understand and decipher human language, analyzing data to output text or speech in a comprehensible language.

P - Pattern Recognition:

- A field within AI dealing with finding and decoding similar patterns or trends in data.

Predictive Analysis:

- The use of AI models to decipher data points and output detailed analytics and predictions.

R - Reinforcement Learning:

- A training method that encourages an AI to find answers without set parameters, improving based on graded feedback about its output.

T - Turing Test:

- A test devised by Alan Turing to evaluate if AI can pass itself off as a human in various fields.

W - Weak AI:

- A narrowly developed AI model for specific tasks commonly seen in today's AI applications.
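
As a small, hypothetical illustration of the forward-chaining entry above (the facts and rules below are invented for illustration), an inference engine starts from known facts and keeps applying rules until no new conclusions can be derived:

    # Tiny forward-chaining loop in Python (illustrative only; facts and rules are made up).
    facts = {"it_is_raining"}
    rules = [
        ({"it_is_raining"}, "ground_is_wet"),
        ({"ground_is_wet"}, "shoes_get_muddy"),
    ]

    derived_something = True
    while derived_something:
        derived_something = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # rule fires: record the new fact
                derived_something = True

    print(sorted(facts))
    # -> ['ground_is_wet', 'it_is_raining', 'shoes_get_muddy']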

Learning the basics of AI is essential today due to its widespread application, influence on various industries, career opportunities, and potential to drive innovation and address complex challenges.


As AI continues to evolve, having a foundational understanding allows individuals to navigate and actively participate in the AI-driven world.


See more here:

Cracking The Code: Understanding The A-Z Basics Of Artificial Intelligence - Indiatimes.com

Surviving the AI technology war – The Korea JoongAng Daily

Lee Kyung-bae

The author is vice chairman of the Korea CIO Forum.

The launch of ChatGPT this year was a historic event that demonstrated the formidable power of artificial intelligence (AI). Since then, we have witnessed numerous cases where the service can bring tremendous productivity gains and exceed human capabilities.

On the other hand, some people are worried that AI, big data and abuse of robots will destroy the order of the human world.

Sam Altman, the CEO of OpenAI, was recently removed by the company's board of directors, only to be reinstated five days later. Co-founded in 2015 by Tesla CEO Elon Musk, OpenAI was launched as a nonprofit with a mission to ensure that artificial general intelligence benefits all of humanity. Initially, the organization aimed to develop safe and stable AI that would help people rather than seeking profitability.

But something unfortunate happened this year as the "boomers" close to Altman rushed to develop a follow-up project to the successful ChatGPT. The "doomers" on the board, who wanted to slow down development because of the disaster that could follow if AI were to surpass human control, clashed with those boomers.

Altman returned to the company with support from over 95 percent of the organization's workers and formed a new board. He is now poised to play a central role in the AI ecosystem through the GPT store, a marketplace for GPT models. Going forward, he will accelerate the commercialization and technological advancement of AI.

An AI Safety Summit took place in Britain on Nov. 1 and 2, with 28 countries participating, amid growing awareness that AI is a double-edged sword that could bring both blessings and disasters. Participants discussed the risks of generative AI, including frontier AI, and strategies to mitigate those risks through internationally coordinated action.

The Bletchley Declaration was adopted, agreeing on guidelines and codes of conduct for the safe development and use of AI. At the recent U.S.-China summit, a ban on the use of AI-equipped nuclear warheads and drones was discussed. But we need a more effective model than non-binding declarations.

Britain and the European Union took the initiative to host the AI Safety Summit in order to keep in check a situation in which the United States dominates cutting-edge AI technologies. For a massive generative AI like ChatGPT to succeed, it needs to have the best AI algorithms, massive big data, huge AI cloud centers and more.

In the United States, the top brains and capital are concentrated in Silicon Valley, and mega-platforms such as Google, Apple, Amazon and Facebook are collecting and analyzing vast amounts of data from around the world in real time. AI cloud centers have been established in collaboration with semiconductor manufacturers, including graphics giant Nvidia.

China is following the United States closely. Although it has strength in AI image processing and has accumulated vast amounts of data from its 1.4 billion residents through mega-platforms such as Baidu, Alibaba and Tencent, it still lags behind its rivals in the field of generative AI. In particular, the United States controls exports of, and investment in, quantum computing, AI and semiconductors.

As countries around the world engage in AI supremacy wars, and the era of global generative AI without language barriers arrives, we are at a point where the strategies of Korean platform companies and the government are extremely crucial. The key question is whether Korean companies can succeed in developing foundation models such as ChatGPT and compete with the world.

With the government's announcement of a series of strategies to foster the AI industry, Korea has risen to No. 12 in the world in overall AI status. However, the country is ranked No. 20 in the field of AI talent, revealing a major weak point in a high-tech field where only the best can survive.

We are already sensing a narrow-minded mood in terms of utilization: the government is prohibiting the connection of internal systems to the GPT platform. During the AI supremacy war, the government's policy direction must be clear. We must remember that our AI training data project, which has been pumped with billions of dollars over the years, has become almost powerless in the face of generative AI.

As seen in the recent glitches of the electronic government system, Korea lacks a clear control and management authority in the field of advanced technology. Concerns are growing that Korea will fall behind in the global technology competition and become a contest arena for global corporations due to the government's weak policies and inadequate investments.

Translation by the Korea JoongAng Daily staff.

Read the original:

Surviving the AI technology war - The Korea JoongAng Daily

Warning from OpenAI leaders helped trigger Sam Altman's ouster – The Washington Post

Updated December 8, 2023 at 5:18 p.m. EST | Published December 8, 2023 at 2:43 p.m. EST

This fall, a small number of senior leaders approached the board of OpenAI with concerns about chief executive Sam Altman.

Altman, a revered mentor, prodigious start-up investor and avatar of the AI revolution, had been psychologically abusive, the employees alleged, creating pockets of chaos and delays at the artificial-intelligence start-up, according to two people familiar with the board's thinking who spoke on the condition of anonymity to discuss sensitive internal matters. The company leaders, a group that included key figures and people who manage large teams, mentioned Altman's allegedly pitting employees against each other in unhealthy ways, the people said.

Although the board members didn't use the language of abuse to describe Altman's behavior, these complaints echoed some of their interactions with Altman over the years, and they had already been debating the board's ability to hold the CEO accountable. Several board members thought Altman had lied to them, for example, as part of a campaign to remove board member Helen Toner after she published a paper criticizing OpenAI, the people said.

The new complaints triggered a review of Altman's conduct, during which the board weighed the devotion Altman had cultivated among factions of the company against the risk that OpenAI could lose key leaders who found interacting with him highly toxic. They also considered reports from several employees who said they feared retaliation from Altman: One told the board that Altman was hostile after the employee shared critical feedback with the CEO and that he undermined the employee on that person's team, the people said.

"It is clear that there were real misunderstandings between me and members of the board," Altman wrote on X. "For my part, it is incredibly important to learn from this experience and apply those learnings as we move forward as a company."

The complaints about Altman's alleged behavior, which have not previously been reported, were a major factor in the board's abrupt decision to fire Altman on Nov. 17, according to the people. Initially cast as a clash over the safe development of artificial intelligence, Altman's firing was at least partially motivated by the sense that his behavior would make it impossible for the board to oversee the CEO.

Altman was reinstated as CEO five days later, after employees released a letter signed by a large percentage of OpenAI's 800-person staff, including most senior managers, and threatening mass resignations.

"We believe Sam is the best leader for OpenAI," said company spokesperson Hannah Wong. "The senior leadership team was unanimous in asking for Sam's return as CEO and for the board's resignation, actions backed by an open letter signed by over 95% of our employees."

Anna Makanju, OpenAI's vice president of global affairs, who signed the letter, echoed the sentiment in a statement shared by the company: "In my experience working closely with Sam, he brings passion to the work and to the mission. While he has strong opinions, he values my team's counsel, listens to diverse perspectives, and consistently encourages open and honest discussions."

Now back at the helm of OpenAI, Altman may find that the company is less united than the waves of heart emojis that greeted his return on social media might suggest.

Some employees said Altman's camp began undermining the board's decision shortly after he was removed as CEO, the people said. Within hours, messages dismissed the board as illegitimate and decried Altman's firing as a coup by OpenAI co-founder and chief scientist Ilya Sutskever, according to the people.

On social media, in news reports and on the anonymous app Blind, which requires members to sign up with a work email address to post, people identified as current OpenAI employees also described facing intense peer pressure to sign the mass-resignation letter.

Some OpenAI employees have rejected the idea that there was any coercion to sign the letter. "Half the company had signed between the hours of 2 and 3am," a member of OpenAI's technical staff, who tweets under the pseudonym "roon," posted on X. "That's not something that can be accomplished by peer pressure."

Joanne Jang, who works in products at OpenAI, tweeted that no influence had been at play. "The google doc broke so people texted each other at 2-2:30 am begging people with write access to type their name."

For longtime employees, there was added incentive to sign: Altman's departure jeopardized an investment deal that would allow them to sell their stock back to OpenAI, cashing out equity without waiting for the company to go public. The deal, led by Joshua Kushner's Thrive Capital, values the company at almost $90 billion, according to a report in the Wall Street Journal, more than triple its $28 billion valuation in April, and it could have been threatened by tanking value triggered by the CEO's departure.

Members of the board expected employees to be upset about Altman's firing, but they were taken aback when OpenAI's management team appeared united in their support for bringing him back, said the people, as well as a third person with knowledge of the board's proceedings, who also spoke on the condition of anonymity to discuss sensitive company matters.

As the company seeks to rebuild the board and smooth things over with Microsoft, its key partner, it has committed to launching an internal investigation into the debacle, which broke into public view on the Friday before Thanksgiving.

In a post on the company blog, the board wrote that Altman had been removed as CEO after a review found that he had not been consistently candid in his communications. The Washington Post previously reported that the board's vote was triggered by a pattern of manipulation and rooted in Altman's attempts to avoid checks on his power at OpenAI.

Altman himself helped pioneer OpenAI's unique board structure, according to a person familiar with the board proceedings at the time. The group has had as many as nine members and is supposed to contain a majority of members with no financial stake in OpenAI. At the time of Altman's firing, it was down to six members: three employees (president and co-founder Greg Brockman, Altman, and Sutskever) and three independent directors (Toner, tech entrepreneur Tasha McCauley and Quora CEO Adam D'Angelo).

But the lack of concrete details around the board's motivations allowed room for speculation and spin to take hold. Some talk focused on Sutskever, who in July was named co-lead of a new AI safety team called Superalignment, whose goal is to make sure advanced AI systems follow human intent. His public comments about the potential dangers of artificial general intelligence set the stage for a narrative about the risks of commercial interests.

The pressure on Sutskever to reverse his vote was particularly intense. Less than three days later, he wrote on X that "I deeply regret participating in the board's decision." He also added his name to the employee resignation letter and vowed to reunite the company.

Altman seemed to approve, quoting Sutskever's message on X along with a trio of red heart emojis.

Sutskever's future at OpenAI is now uncertain. "We hope to continue our working relationship and are discussing how [Sutskever] can continue his work at OpenAI," Altman wrote in a staff-wide email after returning as CEO.

"There have been a lot of wild and inaccurate reports about what happened with the Board but the bottom line is that Ilya has very publicly stated that Sam is the right person to lead OpenAI and he is thrilled that he is back at the helm," Sutskever's lawyer, Alex Weingarten, chair of the litigation practice at Willkie Farr & Gallagher, wrote in a statement.

On Wednesday morning, Sutskever shared a cryptic post on X about learning many lessons in the past month. "One such lesson is that the phrase 'the beatings will continue until morale improves' applies more often than it has any right to," he wrote. The tweet was quickly deleted.

clarification

This story has been updated to clarify that "roon" is a pseudonym used by a member of OpenAI's technical staff, not a Twitter handle.

See the original post here:

Warning from OpenAI leaders helped trigger Sam Altman's ouster - The Washington Post

AI changes everything, and nothing – Nieman Journalism Lab at Harvard

In the final discussion post in my digital media innovation capstone course this semester, I asked the question, "What are your thoughts on the future of innovation?" It was no surprise that every student mentioned artificial intelligence. Many expressed concerns about ethical implications and its effect on future employment. But in most cases, their comments were balanced with a healthy sense of optimism and involvement. As one said, "Remaining up-to-date, never-ending education, flexibility, and active involvement in ethical issues related to technology will be necessary for success in the changing professional environment."

As I have begun talking about AI over the past several months, in classes, at conferences, and in conversation with friends and colleagues, I keep repeating that we are going to look back at the past couple decades of search as the dark ages of information. Remember when we had to Google stuff and then go to websites and read them and hope they had the answers to our questions? Google's algorithm that now gives us excellent-quality search results will feel as antiquated as a MySpace Top 8 when we are able to have a conversation with a bot that seemingly knows everything. These all-knowing platforms are now being referred to as artificial general intelligence.

We'll also look back on this time when we gladly gave up volumes of personal information to search and social media companies in exchange for the value we perceived in using them. But will we also remember that we didn't solve the problems of misinformation, bias, and abuse when we had the chance? AI just exacerbates these dilemmas.

Looking forward, as we talk about AI, we have to consider how it will affect the ways that information is stored and distributed. Now we have volumes of public content that are used to train AI platforms, created by millions of people. But if we no longer need to go to a website to get information, will many websites become unnecessary? If so, then what will be training the AI? What will be the format of the data? Will the presentation of the remaining web spaces need to be more fluid and customized? What will the platforms of the future be? And who will be in charge of them? Who will have the skills to work in these fields? And how will media education adapt? We have to look a few paces ahead.

So, what is my prediction for 2024? AI will become more accessible and more useful, like search and social media. We will gladly give away all our private information in exchange for the value we perceive in using it. We'll use AI platforms to write emails, contribute to stories, edit copy, analyze and present data, create graphics, prepare college papers, learn to code, maybe even write our Nieman Lab predictions.

We'll also worry about technology taking our jobs. Spreading falsehoods. Information bias and takeovers by malicious actors.

We can't predict what AI will look like in a year. But we have an idea of where this is going, because we've been there. There will be company shakeups, new platforms, emergent players. There will be ethical, social and legal implications. There will be mishaps and confusion. Same as it ever was.

Maybe the stakes are higher now, with a technology so few understand and over which so few have control. My best advice is not to avoid it. Get knowledgeable, but be critical. How we should have been all along.

Cindy Royal is a professor and director of the Media Innovation Lab at Texas State University.

See the article here:

AI changes everything, and nothing – Nieman Journalism Lab at Harvard

Achieving alignment: How U of T researchers are working to keep AI on track – University of Toronto

In the year since OpenAI released ChatGPT, what once seemed like an esoteric question among researchers has pushed its way to the forefront of public discourse: As artificial intelligence becomes more capable, how do we ensure AI systems act in the best interests of humans and, crucially, not turn against us?

He recently spoke with U of T News about the alignment problem and what is being done to try and solve it.

What, exactly, is meant by AI alignment?

In the research sense, it means trying to make sure that AI does what we intended it to do, so it follows the objectives that we try to give it. But there are lots of problems that can arise, some of which we're already seeing in today's models.

One is called reward misspecification. It's tricky to specify what reward function, or objective, you want in the form of a number that an AI model can understand. For example, if you're a company, you might try to maximize profits; that's a relatively simple objective. But in pursuing it, there can be unintended consequences in the real world. The model might make or recommend decisions that are harmful to employees or the environment. This example of rewards being underspecified can occur in even simpler settings. If we ask a robot to bring us coffee, we are also implicitly asking it to do so without breaking anything in the kitchen.
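
To make the reward-misspecification idea concrete, here is a minimal, hypothetical sketch in Python (the plans, numbers and penalty term are invented for illustration and are not from the interview): an agent scored only on coffee-delivery speed prefers the destructive plan, while a reward that also encodes the implicit "don't break anything" constraint does not.

    # Hypothetical toy example of reward misspecification (illustrative only).
    # The naive reward measures only delivery speed, so the implicit constraint
    # "don't break anything in the kitchen" is invisible to the agent.
    plans = [
        {"name": "careful", "seconds_to_coffee": 40, "mugs_broken": 0},
        {"name": "reckless", "seconds_to_coffee": 15, "mugs_broken": 3},
    ]

    def naive_reward(plan):
        # Only speed is rewarded; side effects are ignored.
        return -plan["seconds_to_coffee"]

    def safer_reward(plan, breakage_penalty=100):
        # Explicitly penalizes the side effect the designer actually cares about.
        return -plan["seconds_to_coffee"] - breakage_penalty * plan["mugs_broken"]

    print(max(plans, key=naive_reward)["name"])   # -> reckless
    print(max(plans, key=safer_reward)["name"])   # -> careful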

If we built the AI models, how is it that they learn to do things we didn't foresee?

When we talk about emergent behaviours, abilities that are present in larger models but not in smaller ones, it's useful to think about large language models (LLMs) such as ChatGPT. If given an incomplete sentence, ChatGPT's objective is to predict what the next word is going to be. But if you're giving it a bunch of different training data, from the works of Shakespeare to mathematical textbooks, the model is going to gain some level of understanding in order to get better at predicting what word comes next.
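
As a rough sketch of that next-word objective (the toy corpus and function below are invented for illustration; real LLMs learn the same objective with neural networks over vastly more data), a model can be reduced to counting which word tends to follow which:

    # Toy next-word predictor built from bigram counts (illustrative only).
    from collections import Counter, defaultdict

    corpus = "to be or not to be that is the question".split()

    counts = defaultdict(Counter)
    for prev_word, next_word in zip(corpus, corpus[1:]):
        counts[prev_word][next_word] += 1

    def predict_next(word):
        # Return the most frequent continuation seen in the training text.
        followers = counts[word]
        return followers.most_common(1)[0][0] if followers else None

    print(predict_next("to"))    # -> "be"
    print(predict_next("that"))  # -> "is"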

We don't specify hard-coded rules for what these models are supposed to learn, so we don't have that much control over what the model generates. One example of this is hallucinations, where models such as ChatGPT create plausible but false claims.

What is artificial general intelligence (AGI) and what are some of the existential concerns about it?

There are many definitions, but in a general sense, AGI refers to the potential that we develop an AI system that performs most tasks that require intelligence better than or at the same level as humans.

People who believe this might happen are concerned about whether these models are going to be aligned with human values. In other words, if they're more intelligent than the average human, it's not clear that they'll actually help us.

Some sci-fi ideas about AIs taking over the world or hurting a lot of humans are getting a lot of media attention. One reason people think this might happen is an AI can often act better on its objectives if it has more resources. Hypothetically, an AI system might decide that manipulating humans, or hurting them in some way, might make it easier to acquire resources. This scenario is not going to happen today, but the potential risk is why luminaries such as Geoffrey Hinton emphasize the importance of studying and better understanding the models we are training.

How are U of T researchers working to tackle the short- and long-term risks of AI?

There are five key areas of AI alignment research: specification, interpretability, monitoring, robustness and governance. The Schwartz Reisman Institute is at the forefront of bringing together people from different disciplines to try to steer this technology in a positive direction.

In the case of specification, a common approach to fix the problem of reward misspecification is a technique that allows models to learn from human feedback. This is already being put into practice in training LLMs like ChatGPT. Going forward, some researchers are looking for ways to encode a set of human principles for future advanced models to follow. An important question that we can all think about is: alignment to whom? What sort of guidelines do we want these models to follow?
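
As a rough, hypothetical illustration of the "learn from human feedback" idea, the sketch below fits a tiny reward model to pairwise human preferences (the Bradley-Terry-style setup, the single "helpfulness" feature and the numbers are assumptions for illustration, not how ChatGPT is actually trained):

    # Toy reward model learned from pairwise human preferences (illustrative only).
    import math

    # Each answer is summarized by one feature (e.g., a "helpfulness" score in [0, 1]).
    # A human labeler compared pairs and picked the answer they preferred.
    preferences = [  # (feature of preferred answer, feature of rejected answer)
        (0.9, 0.2),
        (0.7, 0.4),
        (0.8, 0.1),
    ]

    w = 0.0             # reward weight to learn
    learning_rate = 0.5
    for _ in range(200):
        for preferred, rejected in preferences:
            # Bradley-Terry model: P(preferred wins) = sigmoid(reward gap)
            p = 1.0 / (1.0 + math.exp(-(w * preferred - w * rejected)))
            # Gradient ascent on the log-likelihood of the labeler's choices
            w += learning_rate * (1.0 - p) * (preferred - rejected)

    print(round(w, 2))  # positive: answers rated more helpful now earn higher reward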

The rest is here:

Achieving alignment: How U of T researchers are working to keep AI on track - University of Toronto

Meta’s AI chief doesn’t think AI super intelligence is coming anytime soon, and is skeptical on quantum computing – CNBC

Yann LeCun, chief AI scientist at Meta, speaks at the Viva Tech conference in Paris, June 13, 2023.

Chesnot | Getty Images News | Getty Images

Meta's chief scientist and deep learning pioneer Yann LeCun said he believes that current AI systems are decades away from reaching some semblance of sentience, equipped with common sense that can push their abilities beyond merely summarizing mountains of text in creative ways.

His point of view stands in contrast to that of Nvidia CEO Jensen Huang, who recently said AI will be "fairly competitive" with humans in less than five years, besting people at a multitude of mentally intensive tasks.

"I know Jensen," LeCun said at a recent event highlighting the Facebook parent company's 10-year anniversary of its Fundamental AI Research team. LeCun said the Nvidia CEO has much to gain from the AI craze. "There is an AI war, and he's supplying the weapons."

"[If] you think AGI is in, the more GPUs you have to buy," LeCun said, about technologists attempting to develop artificial general intelligence, the kind of AI on par with human-level intelligence. As long as researchers at firms such as OpenAI continue their pursuit of AGI, they will need more of Nvidia's computer chips.

Society is more likely to get "cat-level" or "dog-level" AI years before human-level AI, LeCun said. And the technology industry's current focus on language models and text data will not be enough to create the kinds of advanced human-like AI systems that researchers have been dreaming about for decades.

"Text is a very poor source of information," LeCun said, explaining that it would likely take 20,000 years for a human to read the amount of text that has been used to train modern language models. "Train a system on the equivalent of 20,000 years of reading material, and they still don't understand that if A is the same as B, then B is the same as A."

"There's a lot of really basic things about the world that they just don't get through this kind of training," LeCun said.

Hence, LeCun and other Meta AI executives have been heavily researching how the so-called transformer models used to create apps such as ChatGPT could be tailored to work with a variety of data, including audio, image and video information. The more these AI systems can discover the likely billions of hidden correlations between these various kinds of data, the more they could potentially perform more fantastical feats, the thinking goes.

Some of Meta's research includes software that can help teach people how to play tennis better while wearing the company's Project Aria augmented reality glasses, which blend digital graphics into the real world. Executives showed a demo in which a person wearing the AR glasses while playing tennis was able to see visual cues teaching them how to properly hold their tennis rackets and swing their arms in perfect form. The kinds of AI models needed to power this type of digital tennis assistant require a blend of three-dimensional visual data in addition to text and audio, in case the digital assistant needs to speak.

These so-called multimodal AI systems represent the next frontier, but their development won't come cheap. And as more companies such as Meta and Google parent Alphabet research more advanced AI models, Nvidia could stand to gain even more of an edge, particularly if no other competition emerges.

Nvidia has been the biggest beneficiary of generative AI, with its pricey graphics processing units becoming the standard tool used to train massive language models. Meta relied on 16,000 Nvidia A100 GPUs to train its Llama AI software.

CNBC asked if the tech industry will need more hardware providers as Meta and other researchers continue their work developing these kinds of sophisticated AI models.

"It doesn't require it, but it would be nice," LeCun said, adding that the GPU technology is still the gold standard when it comes to AI.

Still, the computer chips of the future may not be called GPUs, he said.

"What you're going to see hopefully emerging are new chips that are not graphical processing units, they are just neural, deep learning accelerators," LeCun said.

LeCun is also somewhat skeptical about quantum computing, which tech giants such as Microsoft, IBM, and Google have all poured resources into. Many researchers outside Meta believe quantum computing machines could supercharge advancements in data-intensive fields such as drug discovery, as they're able to perform multiple calculations with so-called quantum bits as opposed to conventional binary bits used in modern computing.

But LeCun has his doubts.

"The number of problems you can solve with quantum computing, you can solve way more efficiently with classical computers," LeCun said.

"Quantum computing is a fascinating scientific topic," LeCun said. It's less clear about the "practical relevance and the possibility of actually fabricating quantum computers that are actually useful."

Meta senior fellow and former tech chief Mike Schroepfer concurred, saying that he evaluates quantum technology every few years and believes that useful quantum machines "may come at some point, but it's got such a long time horizon that it's irrelevant to what we're doing."

"The reason we started an AI lab a decade ago was that it was very obvious that this technology is going to be commercializable within the next years' time frame," Schroepfer said.


Original post:

Meta's AI chief doesn't think AI super intelligence is coming anytime soon, and is skeptical on quantum computing - CNBC