DeepMind AI rivals the world’s smartest high schoolers at geometry – Ars Technica
Demis Hassabis, CEO of DeepMind Technologies and developer of AlphaGo, attends the AI Safety Summit at Bletchley Park on November 2, 2023 in Bletchley, England.
A system developed by Google's DeepMind has set a new record for AI performance on geometry problems. DeepMind's AlphaGeometry managed to solve 25 of the 30 geometry problems drawn from the International Mathematical Olympiad between 2000 and 2022.
That puts the software ahead of the vast majority of young mathematicians and just shy of IMO gold medalists. DeepMind estimates that the average gold medalist would have solved 26 out of 30 problems. Many view the IMO as the world's most prestigious math competition for high school students.
"Because language models excel at identifying general patterns and relationships in data, they can quickly predict potentially useful constructs, but often lack the ability to reason rigorously or explain their decisions," DeepMind writes. To overcome this difficulty, DeepMind paired a language model with a more traditional symbolic deduction engine that performs algebraic and geometric reasoning.
The research was led by Trieu Trinh, a computer scientist who recently earned his PhD from New York University. He was a resident at DeepMind between 2021 and 2023.
Evan Chen, a former Olympiad gold medalist who evaluated some of AlphaGeometry's output, praised it as impressive because it's both verifiable and clean. Whereas some earlier software generated complex geometry proofs that were hard for human reviewers to understand, the output of AlphaGeometry is similar to what a human mathematician would write.
AlphaGeometry is part of DeepMind's larger project to improve the reasoning capabilities of large language models by combining them with traditional search algorithms. DeepMind has published several papers in this area over the last year.
Let's start with a simple example shown in the AlphaGeometry paper, which was published by Nature on Wednesday:
The goal is to prove that if a triangle has two equal sides (AB and AC), then the angles opposite those sides will also be equal. We can do this by creating a new point D at the midpoint of the third side of the triangle (BC). It's easy to show that all three sides of triangle ABD are the same length as the corresponding sides of triangle ACD. And by the side-side-side criterion, two triangles whose corresponding sides are all equal are congruent, so their corresponding angles are equal.
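For readers who want the chain of inferences spelled out, here is a compact formal rendering of that argument (a sketch in LaTeX, using the point names from the prose above):

```latex
\begin{proof}
Let $D$ be the midpoint of $BC$, so that $BD = DC$.
Compare triangles $ABD$ and $ACD$:
\[
AB = AC \;(\text{given}), \qquad
BD = CD \;(\text{$D$ is the midpoint}), \qquad
AD = AD \;(\text{shared side}).
\]
By the side-side-side criterion, $\triangle ABD \cong \triangle ACD$,
so the corresponding angles are equal: $\angle ABD = \angle ACD$,
which is exactly the claim $\angle ABC = \angle ACB$.
\end{proof}
```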
Geometry problems from the IMO are much more complex than this toy problem, but fundamentally, they have the same structure. They all start with a geometric figure and some facts about the figure, like "side AB is the same length as side AC." The goal is to generate a sequence of valid inferences that conclude with a given statement, like "angle ABC is equal to angle BCA."
For many years, we've had software that can generate lists of valid conclusions that can be drawn from a set of starting assumptions. Simple geometry problems can be solved by brute force: mechanically listing every possible fact that can be inferred from the given assumptions, then listing every possible inference from those facts, and so on until you reach the desired conclusion.
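To make that concrete, here is a minimal sketch of brute-force forward chaining in Python. The fact encoding and the two toy rules are illustrative inventions, not DeepMind's actual deduction engine:

```python
# Brute-force forward chaining: start from the given facts, apply every
# inference rule to everything known, and repeat until the goal appears
# or no new facts can be derived. Facts are tuples; rules are functions
# mapping the current set of known facts to newly derivable facts.

def forward_chain(facts, rules, goal):
    known = set(facts)
    while goal not in known:
        new = {c for rule in rules for c in rule(known)} - known
        if not new:        # fixed point reached without finding the goal
            return False
        known |= new
    return True

# Toy rules over facts of the form ("eq", x, y), read as "segment x has
# the same length as segment y": symmetry and transitivity of equality.
def symmetry(known):
    return {("eq", y, x) for (p, x, y) in known if p == "eq"}

def transitivity(known):
    eqs = {(x, y) for (p, x, y) in known if p == "eq"}
    return {("eq", x, z) for (x, y1) in eqs for (y2, z) in eqs if y1 == y2}

facts = {("eq", "AB", "AC"), ("eq", "AC", "AD")}
print(forward_chain(facts, [symmetry, transitivity], ("eq", "AD", "AB")))  # True
```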
But this kind of brute-force search isn't feasible for an IMO-level geometry problem because the search space is too large. Not only do harder problems require longer proofs, but sophisticated proofs often require the introduction of new elements to the initial figure, as with point D in the proof above. Once you allow for these kinds of auxiliary points, the space of possible proofs explodes and brute-force methods become impractical.
AI Systems That Master Math Will Change the World – PYMNTS.com
The world may never know just what happened at OpenAI during last week's whirlwind.
After the headline-grabbing drama, CEO Sam Altman was reinstated without a board seat; the company's chief scientific officer, Ilya Sutskever, returned to his post; and the nonprofit's board of directors was given a proper shakeup.
But what was behind it all?
Rumors and hype are swirling around reports that OpenAI researchers created a new model, called Q* (pronounced Q-star), able to complete grade-school-level math problems. This new development, and Altman's push for commercialization, are what some observers believe to have spooked the nonprofit board, whose mission is centered around developing AI for the good of humanity.
A generative artificial intelligence (AI) model that can regularly and reliably solve math problems on its own would constitute a huge advance in the capabilities of AI systems.
Even today's most advanced and cutting-edge AI systems struggle to reliably solve relatively simple math problems, a situation that has for years both vexed AI researchers and inspired them to push the field forward.
If there is an AI model out there, or under development, that can really do math, even simple equations, on its own, then that represents a massive leap forward for AI's applications across many industries, especially payments.
Math, after all, is a benchmark for reasoning. And the bread and butter of most AI models, particularly large language models (LLMs), is pattern recognition, not step-by-step logical reasoning.
LLMs are trained on text and other data that would take a human many millennia to read, but generative AI models still can't be trusted to reliably discern that if X is the same as Y, then Y is the same as X.
AI systems with the ability to plan already exist; however, they are typically embedded within highly contextually limited scenarios, such as playing chess, where the rules and permutations are fixed, or controlling a robot on a grid. Outside of their defined zone of expertise, these systems, including Google DeepMind's AlphaGo and AlphaGo Zero, are limited in their planning capacity even when compared to animals like cats or mice.
Building a generative AI system that is capable of unsupervised reasoning and able to solve math problems without regular mistakes is a challenging, but important, milestone.
The name of OpenAI's alleged model, Q*, may give a hint as to how to get there: it suggests a combination of two fundamental computer science techniques, Q-learning and A* (pronounced A-star).
A* was originally created to let a mobile robot plan its own actions, while Q-learning is a model-free reinforcement learning algorithm that learns the value of taking an action in a particular state.
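Both ingredients are textbook algorithms, so they are easy to sketch. The Python fragments below show the standard formulations; the states, actions, and heuristic are placeholders, and nothing here is drawn from any actual OpenAI system:

```python
import heapq

# Q-learning update: nudge the estimated value of (state, action) toward
# the observed reward plus the discounted value of the best next action.
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

# A* search: best-first search ordered by f(n) = g(n) + h(n), where g is
# the cost so far and h is a heuristic estimate of the remaining cost.
# States are assumed orderable so heap ties break cleanly.
def a_star(start, goal, neighbors, h):
    frontier = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if g >= best_g.get(node, float("inf")):
            continue          # already expanded via a cheaper route
        best_g[node] = g
        for nxt, cost in neighbors(node):
            heapq.heappush(frontier, (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None
```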
Performing math reliably, for both AIs and humans, requires planning over multiple steps. But a senior NVIDIA scientist, Dr. Jim Fan, tweeted that some combination of a tree-search system in the spirit of Google's AlphaGo and a Q-learning approach applied to an LLM could someday get there.
Maybe that model, combining Q-learning with A*, would be called something like Q*.
Most AI models operate on weights, not contexts, meaning they operate without truly understanding what they are dealing with. To perform math, however, a step-by-step sequential understanding is crucial.
An AI capable of doing math reliably is an enticing concept because math, like the laws of nature it describes, represents a foundation for learning other, more abstract tasks.
A 2023 research paper by OpenAI's Sutskever and other OpenAI researchers, titled "Let's Verify Step by Step," investigates this concept, attempting to reduce how often AI models trained on the MATH dataset produce logical mistakes. The OpenAI scientists leveraged a dataset of 800,000 step-level human feedback labels to train their model.
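The paper's core idea, process supervision, is simple to sketch: reward each intermediate reasoning step rather than only the final answer. The fragment below is a minimal illustration, where `step_reward` stands in for a learned reward model trained on those step-level labels; the function names and scoring scheme are assumptions for illustration, not OpenAI's code:

```python
# Process supervision sketch: score every step of a reasoning chain,
# not just the final answer.

def step_reward(problem: str, steps: list[str], i: int) -> float:
    # Stand-in for a learned model trained on step-level human labels;
    # a real reward model would return P(step i is correct).
    return 0.9

def solution_score(problem: str, steps: list[str]) -> float:
    """Treat a solution as correct only if every step is: multiply the
    per-step correctness probabilities."""
    score = 1.0
    for i in range(len(steps)):
        score *= step_reward(problem, steps, i)
    return score

steps = ["Let x = 3.", "Then 2x = 6.", "So the answer is 6."]
print(solution_score("What is 2x if x = 3?", steps))  # 0.9 ** 3, i.e. ~0.729

# At inference time, sample many candidate solutions from the model and
# keep the one whose reasoning chain the reward model trusts most:
#   best = max(candidates, key=lambda s: solution_score(problem, s))
```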
Getting AI models to solve math problems would represent a crucial step in the technology's ability to transform enterprise workflows and operations, helping reduce the daily labor burden on internal teams by evolving their responsibilities from doing the process to managing or overseeing it.
Within security-critical areas like finance and payments, the future-fit impact of this as-yet hypothetical capability can't be overstated.
Unraveling the Mystery of QAR: The Next Leap in AI? – Medium
Today, we're diving into the enigmatic world of AI, where rumors, breakthroughs, and speculation intertwine to paint a picture of our technological future. Our focus? A whispered name that's sending ripples through the AI community: QAR.
Recently, the tech world has been abuzz with talks about QAR, a project shrouded in mystery and excitement. But what is QAR, really? Some say it's the next big thing in AI, while others claim it's nothing but hot air. Let's explore together and unearth the truth.
Our story begins with a leak, a glimpse into what might be the next generation of AI. This leak has sparked intense discussions and debates. Is QAR a sign that we're nearing, or perhaps already at, the threshold of Artificial General Intelligence (AGI)?
Picture this: Sam Altman and other tech visionaries in a room, witnessing a breakthrough so profound that it pushes the frontier of discovery forward. This scenario, as reported, suggests that something monumental in AI is upon us. But what could it be?
In a surprising twist, Sam Altman is removed from OpenAI's board the day after this alleged breakthrough. This decision sets off a chain of events, sparking rumors and concerns about the potential risks of what was witnessed. Could QAR be a threat to humanity, as some insiders suggest?
In the heart of the Bay Area, whispers abound among those working on AI projects. There's a sense that we're close to a major leap in AI. But with the increased secrecy among tech giants, piecing together the puzzle has become a challenge for us all.
Enter Dr. Jim Fan, a respected figure in AI, with his thoughts on QAR. Could his insights help us understand the potential reality behind the rumors?
QAR is speculated to be a blend of Q-learning and A-star search algorithms, a combination that could revolutionize how AI learns and reasons. But is there more to QAR than meets the eye?
We delve into the concept of synthetic data and how it might be the key to training future AI models. Could QAR be leveraging this approach to achieve new heights in AI capabilities?
The union of different AI methodologies, like those used in AlphaGo and large language models, could be at the heart of QAR. Is this the synthesis that will define the next era of AI?
As we stand at the precipice of potentially groundbreaking AI advancements, it's crucial to approach these developments with both excitement and caution. The story of QAR, whether fact or fiction, highlights the rapid pace of AI evolution and the need for responsible innovation.
What are your thoughts on QAR and the future of AI? Join the conversation below, and don't forget to hit that like button if you're as intrigued by the unfolding story of QAR as we are!
What is Google Gemini? CEO Sundar Pichai says ‘excited’ about the innovation – Business Today
Google's Gemini, hailed by CEO Sundar Pichai as an exciting innovation, has been making waves since its announcement. This development, following the seismic impact of ChatGPT's launch last November, prompted Google to take decisive action, investing substantially in catching up with the generative AI trend. This concerted effort led not only to the introduction of Google Bard but also to the unveiling of Google Gemini.
"We are building our next generation of models with Gemini and I am extraordinarily excited at the innovation coming ahead. I expect it to be a golden age of innovation ahead and can't wait to bring all the innovations to more people," Pichai recently said at the APEC CEO Conference.
What exactly is Google Gemini?
Gemini represents a suite of large language models (LLMs) employing training methodologies akin to those used in AlphaGo, integrating reinforcement learning and tree search techniques. It holds the potential to challenge ChatGPT's dominance as the premier generative AI solution globally.
It emerged mere months after Google amalgamated its Brain and DeepMind AI labs to establish a new research entity known as Google DeepMind. It also follows swiftly on the heels of Bard's launch and the introduction of its advanced PaLM 2 LLM.
While expectations suggest a potential release of Google Gemini in the autumn of 2023, comprehensive details regarding its capabilities remain elusive.
In May, Sundar Pichai, CEO of Google and Alphabet, shared a blog post offering a broad overview of the LLM, stating: "Gemini was purpose-built from the ground up to be multimodal, boasting highly efficient tool and API integrations, and designed to facilitate future innovations such as memory and planning."
Pichai also highlighted, "Despite being in its early stages, we are already witnessing remarkable multimodal capabilities not previously seen in earlier models. Once fine-tuned and rigorously assessed for safety, Gemini will be available in various sizes and functionalities, akin to PaLM 2."
Since then, official disclosures about its release have been scarce. Google DeepMind CEO Demis Hassabis, in an interview with Wired, hinted at Gemini's capabilities, mentioning its amalgamation of AlphaGo's strengths with the impressive language capabilities of large models.
According to Android Police, an anonymous source associated with the product suggested that Gemini will generate text alongside contextual images, drawing on sources such as YouTube video transcripts.
Challenges on the horizon
Google's extensive endeavour to catch up with OpenAI, the creators of ChatGPT, appears to be more challenging than initially anticipated, as reported by The Information.
Earlier this year, Google informed select cloud clients and business partners that they would gain access to the company's new conversational AI, the substantial language model Gemini, by November.
However, the company recently notified them to expect it in the first quarter of the following year, as revealed by two individuals with direct insight. This delay poses a significant challenge for Google, particularly amidst the slowdown in its cloud sales growth, contrasting with the accelerated growth of its larger rival, Microsoft. A portion of Microsoft's success can be attributed to selling OpenAI's technology to its customer base.
AI Unleashed: Transforming Humanity – Medium
Introduction: Artificial Intelligence (AI) has not only emerged from the annals of science fiction but has firmly planted itself as a cornerstone in multiple sectors. Its various forms, from machine learning to deep learning, are driving unprecedented change. While these advances are groundbreaking, they also necessitate a critical examination of AI's potential risks to humanity.
1. Machine Learning in Financial Tech: Machine learning, a critical facet of AI, is upending traditional finance. JPMorgan's COIN platform exemplifies this, using ML to deconstruct commercial loan agreements, a task once demanding hundreds of thousands of man-hours. Beyond efficiency, ML in finance also extends to fraud detection and algorithmic trading, creating systems that are not only faster but more secure and intelligent.
2. Deep Learning's Impact on Healthcare: Deep learning, celebrated for its pattern recognition capabilities, is revolutionizing healthcare. Google's DeepMind, for instance, uses deep learning algorithms to accurately diagnose diseases such as cancer, dramatically improving early detection rates. This advancement transcends traditional diagnostics, offering a glimpse into a future where AI partners with medical professionals to save lives.
3. Supervised Learning in E-Commerce: E-commerce giants like Amazon and Netflix harness supervised learning to power recommendation engines, offering personalized experiences to users. This approach leverages massive datasets to predict customer preferences, transforming browsing into a curated experience that drives both satisfaction and revenue.
4. Unsupervised Learning in Marketing: Unsupervised learning is reshaping marketing by uncovering hidden patterns in consumer data. This AI form enables businesses to segment their markets more effectively, crafting targeted strategies that resonate with distinct customer groups.
5. Neural Networks in the Automotive Industry: The automotive industry's leap into the future is powered by neural networks, particularly in developing autonomous vehicles. Tesla's self-driving cars, which use Convolutional Neural Networks (CNNs) for image recognition and decision-making, exemplify AI's role in enhancing road safety and redefining transportation.
6. NLP Revolutionizing Customer Service: Natural Language Processing (NLP) has transformed customer service. AI-driven chatbots and virtual assistants, used by companies like Apple and Amazon, offer instant, intelligent customer interactions. This innovation not only enhances customer experience but also streamlines operations.
7. Reinforcement Learning in Gaming and Robotics: In gaming and robotics, reinforcement learning is making significant strides. DeepMind's AlphaGo, which outplayed human Go champions, illustrates AI's potential in strategic decision-making. Robotics, too, benefits from this AI form, creating machines that learn and adapt like never before.
Theoretical Risks of AI: AI's rapid advancement, however, brings potential risks. Automation could lead to significant job displacement. In cybersecurity, AI-enhanced attacks present sophisticated new challenges. Philosophically, the concept of an AI singularity, where AI outstrips human intelligence, raises concerns about uncontrollable outcomes that may not align with human ethics.
Conclusion: AI's integration across sectors demands a nuanced approach, balancing its transformative potential with ethical considerations. By comprehending AI's capabilities and fostering robust ethical frameworks, we can harness AI's power responsibly, ensuring it serves humanity's best interests.
Researchers seek consensus on what constitutes Artificial General Intelligence – Tech Xplore
by Peter Grad, Tech Xplore
A team of researchers at DeepMind focusing on the next frontier of artificial intelligence, Artificial General Intelligence (AGI), realized they needed to resolve one key issue first. What exactly, they asked, is AGI?
It is often viewed in general as a type of artificial intelligence that possesses the ability to understand, learn and apply knowledge across a broad range of tasks, operating like the human brain. Wikipedia broadens the scope by suggesting AGI is "a hypothetical type of intelligent agent [that] could learn to accomplish any intellectual task that human beings or animals can perform."
OpenAI's charter describes AGI as a set of "highly autonomous systems that outperform humans at most economically valuable work."
AI expert and founder of Geometric Intelligence Gary Marcus defined it as "any intelligence that is flexible and general, with resourcefulness and reliability comparable to (or beyond) human intelligence."
With so many variations in definitions, the DeepMind team embraced a simple notion voiced centuries ago by Voltaire: "If you wish to converse with me, define your terms."
In a paper published on the preprint server arXiv, the researchers outlined what they termed "a framework for classifying the capabilities and behavior of AGI models."
In doing so, they hope to establish a common language for researchers as they measure progress, compare approaches and assess risks.
"Achieving human-level 'intelligence' is an implicit or explicit north-star goal for many in our field," said Shane Legg, who introduced the term AGI 20 years ago.
In an interview with MIT Technology Review, Legg explained, "I see so many discussions where people seem to be using the term to mean different things, and that leads to all sorts of confusion. Now that AGI is becoming such an important topic we need to sharpen up what we mean."
In the arXiv paper, titled "Levels of AGI: Operationalizing Progress on the Path to AGI," the team summarized several principles required of an AGI model. They include a focus on the capabilities of a system, not the process.
"Achieving AGI does not imply that systems 'think' or 'understand' [or] possess qualities such as consciousness or sentience," the team emphasized.
An AGI system must also have the ability to learn new tasks, and know when to seek clarification or assistance from humans for a task.
Another parameter is a focus on potential, and not necessarily actual deployment of a program. "Requiring deployment as a condition of measuring AGI introduces non-technical hurdles such as legal and social considerations, as well as potential ethical and safety concerns," the researchers explained.
The team then compiled a list of intelligence thresholds ranging from "Level 0, No AGI," to "Level 5, Superhuman." Levels 1 through 4 comprised "Emerging," "Competent," "Expert" and "Virtuoso" levels of achievement.
Three programs met the threshold for the AGI label. But those three, generative text models (ChatGPT, Bard and Llama 2), reached only "Level 1, Emerging." No other current AI programs met the criteria for AGI.
Other programs listed as AI included SHRDLU, an early natural language understanding computer developed at MIT, listed at "Level 1, Emerging AI."
At "Level 2, Competent" are Siri, Alexa and Google Assistant. The grammar checker Grammarly ranks at "Level 3, Expert AI."
Higher up this list, at "Level 4, Virtuoso," are Deep Blue and AlphaGo. Topping the list, "Level 5, Superhuman," are DeepMind's AlphaFold, which predicts a protein's 3D structure from its amino acid sequence; and Stockfish, a powerful open-source chess program.
However, there is no single proposed definition for AGI, and there is constant change.
"As we gain more insights into these underlying processes, it may be important to revisit our definition of AGI," says Meredith Ringel Morris, Google DeepMind's principal scientist for human and AI interaction.
"It is impossible to enumerate the full set of tasks achievable by a sufficiently general intelligence," the researchers said. "As such, an AGI benchmark should be a living benchmark. Such a benchmark should therefore include a framework for generating and agreeing upon new tasks."
More information: Meredith Ringel Morris et al., Levels of AGI: Operationalizing Progress on the Path to AGI, arXiv (2023). DOI: 10.48550/arxiv.2311.02462
Journal information: arXiv
These are OpenAI's board members who fired Sam Altman – Hindustan Times
ChatGPT-maker OpenAI said on Friday it has removed its co-founder and CEO Sam Altman after a review found he was "not consistently candid in his communications" with the board of directors. The board "no longer has confidence in his ability to continue leading OpenAI," the artificial intelligence company said in a statement.
OpenAI said its board consists of the company's chief scientist, Ilya Sutskever, and three non-employees: Quora CEO Adam D'Angelo, tech entrepreneur Tasha McCauley, and Helen Toner of the Georgetown Center for Security and Emerging Technology.
Ilya Sutskever is a Russian-born Israeli-Canadian computer scientist specialising in machine learning. Sutskever co-founded OpenAI and holds a prominent role within the organisation.
Sutskever is credited as a co-inventor, alongside Alex Krizhevsky and Geoffrey Hinton, of the neural network AlexNet. He is also among the co-authors of the AlphaGo paper, Live Mint reported.
Sutskever holds a BSc in mathematics and computer science from the University of Toronto, earned under the mentorship of Geoffrey Hinton. His professional trajectory includes a brief postdoctoral stint with Andrew Ng at Stanford University, followed by a return to the University of Toronto to join DNNResearch, a venture stemming from Hinton's research group.
Google later acquired DNNResearch, appointing Sutskever as a research scientist at Google Brain, where he contributed to significant developments, including the creation of the sequence-to-sequence learning algorithm and work on TensorFlow. Transitioning from Google in late 2015, Sutskever took on the role of co-founder and chief scientist at OpenAI.
This year, he announced that he would co-lead OpenAI's new "Superalignment" project, which aims to solve the alignment of superintelligent systems within four years.
D'Angelo was born on August 21, 1984. An American internet entrepreneur, D'Angelo is known for co-founding and helming Quora. Previously, he held key positions at Facebook, serving as its chief technology officer and later as vice president of engineering until 2008. In June 2009, D'Angelo embarked on the Quora venture, personally injecting $20 million during their Series B financing phase.
D'Angelo graduated with a BS in computer science from the California Institute of Technology in 2002. His involvement has extended to advisory and investment roles; he advised and invested in Instagram before its acquisition by Facebook in 2012. In 2018, he joined the board of directors of OpenAI.
Tasha McCauley is an independent director at OpenAI and is recognised for her work as a technology entrepreneur in Los Angeles. She is also known in the public eye as the spouse of American actor Joseph Gordon-Levitt.
McCauley serves as the CEO of GeoSim Systems, where her recent endeavours focus on the creation of highly detailed and interactive virtual models of real cities. McCauley has also co-founded Fellow Robots. She held roles teaching robotics and served as the director of the Autodesk Innovation Lab at Singularity University.
Helen Toner is director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology (CSET). She also serves in an uncompensated capacity on the non-profit board of directors for OpenAI. She previously worked as a senior research analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a research affiliate of Oxford University's Center for the Governance of AI.
Sam Altman In Talks to Return to OpenAI | by Derick David | Utopian … – Medium
Sam Altman, cofounder of OpenAI, arrives for a bipartisan Artificial Intelligence (AI) Insight Forum for all US senators hosted by Senate Majority Leader Chuck Schumer at the US Capitol in Washington, on Sep 13, 2023.
OpenAI's board is in discussions with Sam Altman to return as CEO, just a day after he was ousted. Altman was fired by the board on Friday with no notice, and major investors, including Microsoft, were blindsided.
Sam co-founded OpenAI with Elon Musk and a team of AI scientists in 2015 with the goal of developing safe and beneficial AI. He has since been the face of the company, a leading figure in the field, and has been credited with the creation of ChatGPT.
Microsoft released a statement saying it is still committed to its partnership with OpenAI but was caught off guard, like other investors, by the board's abrupt decision to oust CEO Sam Altman, leaving the company's future in doubt amid fierce competition in the AI landscape with the rise of LLMs like Google Bard, ChatGPT, and now xAI.
According to The Verge, OpenAI's board is in discussions with Altman to return to the company as CEO, per multiple people familiar with the matter. Altman, who was unexpectedly let go by the board, is undecided about his comeback and demands substantial governance alterations.
The four board members who voted out Sam Altman:
Helen Toner: Director of strategy and foundational research grants at Georgetown's CSET and an expert on China's AI landscape. She joined OpenAI's board in September 2021.
Adam D'Angelo: CEO of Quora and an advocate for OpenAI's capped-profit structure and nonprofit control. He joined OpenAI's board in April 2018. He also created Poe, an AI chatbot app that allows users to interact with many different chatbots (including ChatGPT, Claude, Llama, PaLM 2, etc.).
Tasha McCauley: Adjunct senior management scientist at RAND Corporation and co-founder of Fellow Robots and GeoSim Systems. She's also a signatory of the Asilomar AI Principles, a set of 23 AI governance principles published in 2017. (Sam Altman, Ilya Sutskever, and Elon Musk also signed.)
Ilya Sutskever: OpenAI co-founder and Russian-born chief scientist, co-author of a key paper on neural networks; he helped lead the AlphaGo project.
Unlike traditional companies, OpenAI's board is not focused on making money for shareholders. In fact, none of the board members even own shares in the company. Instead, their goal is to make sure that artificial general intelligence (AGI) is developed in a way that benefits everyone, not just a select few.
This is a very different approach than the one taken by most companies. Typically, companies are run by a board of directors who are responsible for making decisions that will increase shareholder value.
This often means maximizing profits, even if it comes at the expense of other stakeholders, such as employees, customers, or the environment.
This is a challenging task, but it's one that OpenAI's board is taking very seriously. They are working with some of the world's leading experts on AI to develop guidelines and safeguards that will help to ensure that AGI is used for the benefit of all.
Absolutely, here’s an article on the impact of upcoming technology – Medium
In the ever-evolving world of technology, one can hardly keep track of the pace at which advancements occur. In every industry, from healthcare to entertainment, technology is causing sweeping changes, redefining traditional norms, and enhancing efficiency on an unprecedented scale. This is an exploration of just a few of these innovating technological advancements that are defining the future.
Artificial Intelligence (AI), already disruptive in its impact, continues to push barriers. With the introduction of advanced systems such as GPT-3 by OpenAI or DeepMind's AlphaGo, the world is witnessing AI's potential in generating human-like text, accurate predictions, problem-solving and strategy development. Companies are reaping the benefits of AI, including improved customer service and streamlined operational processes.
Blockchain technology, while often associated solely with cryptocurrencies, has capabilities far beyond the world of finance. Its transparent and secure nature promises to reform industries like supply chain management, healthcare and even elections, reducing fraud and increasing efficiency.
In the realm of communication, 5G technology is set to revolutionize not only how we connect with each other but also how machines interconnect. Its ultra-fast, stable connection and low latency promise to drive the Internet of Things (IoT) to new heights, fostering an era of smart cities and autonomous vehicles.
Virtual and Augmented Reality (VR/AR) technologies have moved beyond the gaming industry to more practical applications. Industries such as real estate, tourism, and education are starting to realize the immense potential of these technologies for enhancing customer experience and learning outcomes.
Quantum computing, though still in its infancy, holds extraordinary promise with its potential to solve complex computational problems at unprecedented speeds. This technology could bring profound impacts to sectors such as pharmacology, weather forecasting, and cryptography.
These breakthroughs represent the astounding future that lies ahead, but they also hint at new challenges to be navigated. As we move forward, questions surrounding ethical implications, data privacy, and security need to be addressed. However, what's undeniable is the critical role technology will play in shaping our collective future. This evolution inspires awe and eager anticipation of what is yet to come.
What can the current EU AI approach do to overcome the challenges … – Modern Diplomacy
In the 1970s, as researchers started to grasp the intricacies of genetics, they were inevitably faced with the ethical implications of intentionally altering genes in living organisms. While no technology existed at that time to make such modifications, it was clear that its emergence was just around the corner. In 1975, a landmark conference was held in Asilomar, California, bringing together not just scientists but also about 140 professionals ranging from legal experts to writers and journalists. The goal was to address the potential risks associated with gene manipulation. The conference led to the creation of a set of guiding principles that continue to have a lasting impact today. Asilomar serves as a singular example of effective self-regulation, proactive risk mitigation, and open communication with the public.
Today, as we stand on the cusp of a new, AI-driven era, there's again a palpable sense of anticipation across the globe, while new risks and opportunities spread out before us. AI has swiftly transitioned from being a technological novelty to a pervasive force, reshaping our daily lives and industries, ranging from pioneering projects like OpenAI to autonomous transport. The allure of generative AI applications has dwarfed past technological frenzies. While innovations like the internet, steam engine, and printing press have previously ushered in transformative epochs, AI holds the promise of instigating the most monumental shift in human history.
However, as this wave of innovation surges forward, the need for a comprehensive regulatory framework becomes increasingly urgent. An important goal, agreed by several stakeholders, should be to ensure the ethical, secure, and equitable use of AI for all. The conversation is not a hypothetical debate about the distant future; it's about what must be done today to secure a prosperous future for humanity and the planet.
Numerous stakeholders, including governments, international organisations, NGOs, and tech giants, are scrambling to address the myriad challenges posed by AI. Whether driven by genuine concern or merely to cultivate a contemporary image, different initiatives are underway. The European Commission is pioneering efforts to craft the first-ever legal framework for AI[1]. The proposed legislation establishes different rules for different risk levels and has the potential to address AI risks for society. Yet, it is uncertain whether this European effort can address all current and especially future challenges. Two glaring gaps persist in the European legislative effort, but also in numerous parallel national or international initiatives.
First, the vast majority of the current efforts are focused on the present and the impacts of narrow AI, that is, the current generation of AI tools capable of performing specific tasks (like ChatGPT, AlphaFold or AlphaGo). Yet this preoccupation with narrow AI obscures the monumental, potentially catastrophic challenges presented by Artificial General Intelligence (AGI). AGI represents a form of AI with the capacity to comprehend, learn, and apply knowledge across a wide range of tasks and domains[2]. An AGI system connected to the internet and myriad sensors and smart devices could solve complex problems, seek information by any means (even directly interacting with humans), make logical deductions, and even rewrite its own code. AGI does not exist today, yet according to expert estimates[3] it could be available between 2035 and 2040, a timeline that coincides with the typical time needed to solidify a global AGI treaty and governance system. This synchronicity underscores the pressing need to pivot our focus, infusing foresight methodologies to discern and tackle imminent challenges and prepare for unknown ones.
The second challenge for the ongoing legislative efforts is fragmentation. AI systems, much like living organisms, transcend political borders. Attempting to regulate AI through national or regional efforts entails a strong potential for failure, given the likely proliferation capabilities of AI. Major corporations and emerging AI startups outside the EU's control will persist in creating new technologies, making it nearly impossible to prevent European residents from accessing these advancements. In this light, several stakeholders[4] suggest that any policy and regulatory framework for AI must be established on a global scale. Additionally, Europe's pursuit of continent-wide regulation poses challenges to remaining competitive in the global AI arena if the sector enjoys a more relaxed regulatory framework in other parts of the world. Furthermore, Article 6 of the proposed EU Artificial Intelligence Act introduces provisions for high-risk AI systems, requiring developers and deployers themselves to ensure safety and transparency. However, the provision's self-assessment nature raises concerns about its effectiveness.
What must be done
In this rapidly changing and complex global landscape, is there any political space for the EU to take action? The pan-European study OurFutures[5] reveals that the vast majority of participants express deep concern about the future, with technology-related issues ranking high on their list, alongside social justice, nature, well-being, education, and community. Moreover, despite the emerging signs of mistrust towards governments globally, citizens in the EU maintain confidence in government leaders as catalysts for positive change (picture 1), while they also prioritize human condition and environment over economic prosperity.
Picture 1: Who are the changemakers and what matters more (OurFutures)
The clock is ticking, but governments still have the opportunity to address societal concerns by taking bold steps. In the case of AI, the EU should assume a leadership role in global initiatives and embrace longtermism as a fundamental value, ensuring a sustainable future for current and future generations:
EU as a global sounding board. While the European Commission's legislative initiative on AI signifies a leap in the right direction, structured, productive collaboration with key international partners like the USA, China, UNESCO and the OECD is essential, with the aim of setting up a global AI regulatory framework. The success of the Asilomar conference was rooted in its ability to create a voluntary set of globally respected rules. Violators faced condemnation from the global community, exemplified by the case of He Jiankui[6], who created the world's first genetically edited babies and was subsequently sentenced to prison. Drawing from its tradition of negotiating regulations with many diverse stakeholders, the EU should champion a global initiative under the UN to forge a consensus on AI regulation, and adapt to the diversity of approaches shown by other AI actors.
A technology monitoring system. A global technology observatory has already been suggested by the Millennium Project[7], the Carnegie Council for Ethics in International Affairs[8] and other experts. This organization should be empowered to supervise AI research, evaluate high-risk AI systems, and grant ISO-like certifications to AI systems that comply with standards. It should track technological progress and employ foresight methods to anticipate future challenges, particularly as AGI looms on the horizon. Such an entity, perhaps aptly named the International Science and Technology Organization (ISTO), building on the work done by ISO/IEC and the IEEE on ad hoc standards, could eventually extend its purview beyond AI, encapsulating fields like synthetic biology and cognitive science. The usual challenges, such as dissent over nuances, apprehensions about national sovereignty, and the intricate dance of geopolitics, could be avoided by growing such a body out of the existing standardization organizations mentioned above. The EU, with its rich legacy, is perfectly poised to champion this cause, in close collaboration with the UN, to expedite its realization.
Embrace Longtermism. Longtermism, the ethical view that prioritizes positively influencing the long-term future, is a moral imperative in an era of exponential technological advancements and complex challenges like the climate crisis. Embracing longtermism means designing policies that address risks as we transition from sub-human AI to greater-than-human AI. For the European Commission, initiatives to address AI challenges should not be viewed as mere regulation but as a unique opportunity to etch its commitment to a secure, ethical AI future into history. A longtermist perspective on AI aligns with the idea of AI Alignment put forth by numerous scholars[9], which addresses diverse concerns related to AI safety, aiming to ensure that AI remains aligned with our objectives and avoids the unintended consequences of going astray.
As the world races against the clock to regulate AI, the EU has the potential to be a trailblazer. The EU's initiatives to address AI challenges should not be considered merely a regulatory endeavor; they are an unparalleled opportunity. Embracing longtermism and spearheading the establishment of an ISTO could be the EU's crowning achievement. It's time for the EU to step up, engage in proactive diplomacy, and pave the way for a sustainable AI future that respects the values and concerns of people today and tomorrow.
[1] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
[2] https://www.gartner.com/en/information-technology/glossary/artificial-general-intelligence-agi
[3] Macey-Dare, Rupert, How Soon is Now? Predicting the Expected Arrival Date of AGI - Artificial General Intelligence (June 30, 2023). Available at SSRN: https://ssrn.com/abstract=4496418
[4] For example https://www.forbes.com/sites/hecparis/2022/09/09/regulating-artificial-intelligenceis-global-consensus-possible/?sh=a505f237035c
[5] https://knowledge4policy.ec.europa.eu/projects-activities/ourfutures-images-future-europe_en
[6] https://www.bbc.com/news/world-asia-china-50944461
[7] https://www.millennium-project.org/projects/workshops-on-future-of-worktechnology-2050-scenarios/
[8] Global AI Observatory (GAIO): https://www.carnegiecouncil.org/media/article/a-framework-for-the-international-governance-of-ai
[9] For example: http://lcfi.ac.uk/projects/completed-projects/value-alignment-problem/