Category Archives: Artificial General Intelligence
Today’s AI boom will amplify social problems if we don’t act now, says AI ethicist – ZDNet
AI developers must move quickly to develop and deploy systems that address algorithmic bias, said Kathy Baxter, Principal Architect of Ethical AI Practice at Salesforce. In an interview with ZDNET, Baxter emphasized the need for diverse representation in data sets and user research to ensure fair and unbiased AI systems. She also highlighted the significance of making AI systems transparent, understandable, and accountable while protecting individual privacy. Baxter stressed the need for cross-sector collaboration, like the model used by the National Institute of Standards and Technology (NIST), so that we can develop robust and safe AI systems that benefit everyone.
One of the fundamental questions in AI ethics is ensuring that AI systems are developed and deployed without reinforcing existing social biases or creating new ones. To achieve this, Baxter stressed the importance of asking who benefits and who pays for AI technology. It's crucial to consider the data sets being used and ensure they represent everyone's voices. Inclusivity in the development process and identifying potential harms through user research is also essential.
"This is one of the fundamental questions we have to discuss," Baxter said. "Women of color, in particular, have been asking this question and doing research in this area for years now. I'm thrilled to see many people talking about this, particularly with the use of generative AI. But the things that we need to do, fundamentally, are ask who benefits and who pays for this technology. Whose voices are included?"
Social bias can be infused into AI systems through the data sets used to train them. Unrepresentative data sets containing biases, such as image data sets with predominantly one race or lacking cultural differentiation, can result in biased AI systems. Furthermore, applying AI systems unevenly in society can perpetuate existing stereotypes.
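To make the point about unrepresentative data concrete, here is a minimal sketch of one basic audit step: checking how evenly groups are represented in a dataset manifest before training. The column names and values are hypothetical, and this is only an illustration, not Salesforce's tooling.

```python
# Minimal representation audit on a hypothetical dataset manifest.
import pandas as pd

manifest = pd.DataFrame({
    "image_id":  ["img_001", "img_002", "img_003", "img_004", "img_005"],
    "skin_tone": ["light", "light", "light", "dark", "light"],
})

def representation_report(df: pd.DataFrame, column: str) -> pd.Series:
    """Share of samples per group, largest group first."""
    return df[column].value_counts(normalize=True).sort_values(ascending=False)

print(representation_report(manifest, "skin_tone"))
# light    0.8
# dark     0.2
# A skew like this is an early warning that the trained model may perform
# worse for the under-represented group.
```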
To make AI systems transparent and understandable to the average person, prioritizing explainability during the development process is key. Techniques such as "chain of thought prompts" can help AI systems show their work and make their decision-making process more understandable. User research is also vital to ensure that explanations are clear and users can identify uncertainties in AI-generated content.
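For readers unfamiliar with the technique, here is a rough sketch of what a chain-of-thought prompt looks like. No particular vendor API is assumed; ask_model is a hypothetical placeholder for whatever text-generation call a system uses.

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question so the model is asked to show its reasoning steps."""
    return (
        f"Question: {question}\n"
        "Think through the problem step by step, numbering each step. "
        "Label anything uncertain with 'Assumption:'. "
        "Finish with a final line starting 'Answer:'."
    )

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder for a call to a text-generation model."""
    raise NotImplementedError("Plug in the model or API of your choice.")

print(build_cot_prompt("A clinic saw 48 patients on Monday and 25% more on Tuesday. How many in total?"))
# Because the reasoning is emitted as text, reviewers and user-research
# participants can see where the model's logic is shaky, which supports the
# explainability goal described above.
```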
Protecting individuals' privacy and ensuring responsible AI use requires transparency and consent. Salesforce follows guidelines for responsible generative AI, which include respecting data provenance and only using customer data with consent. Allowing users to opt in, opt out, or otherwise control how their data is used is critical for privacy.
"We only use customer data when we have their consent," Baxter said. "Being transparent when you are using someone's data, allowing them to opt-in, and allowing them to go back and say when they no longer want their data to be included is really important."
As the competition for innovation in generative AI intensifies, maintaining human control and autonomy over increasingly autonomous AI systems is more important than ever. Empowering users to make informed decisions about the use of AI-generated content and keeping a human in the loop can help maintain control.
Ensuring AI systems are safe, reliable, and usable is crucial; industry-wide collaboration is vital to achieving this. Baxter praised the AI risk management framework created by NIST, which involved more than 240 experts from various sectors. This collaborative approach provides a common language and framework for identifying risks and sharing solutions.
Failing to address these ethical AI issues can have severe consequences, as seen in cases of wrongful arrests due to facial recognition errors or the generation of harmful images. Investing in safeguards and focusing on the here and now, rather than solely on potential future harms, can help mitigate these issues and ensure the responsible development and use of AI systems.
While the future of AI and the possibility of artificial general intelligence are intriguing topics, Baxter emphasizes the importance of focusing on the present. Ensuring responsible AI use and addressing social biases today will better prepare society for future AI advancements. By investing in ethical AI practices and collaborating across industries, we can help create a safer, more inclusive future for AI technology.
"I think the timeline matters a lot," Baxter said. "We really have to invest in the here and now and create this muscle memory, create these resources, create regulations that allow us to continue advancing but doing it safely."
Generative AI That's Based On The Murky Devious Dark Web Might Ironically Be The Best Thing Ever, Says AI Ethics And AI Law – Forbes
In today's column, let's consider for a moment turning the world upside down.
Here's what I mean.
Generative AI such as the wildly and widely successful ChatGPT and GPT-4 by OpenAI is based on scanning data across the Internet and leveraging that examined data to pattern-match on how humans write and communicate in natural language. The AI development process also includes a lot of clean-up and filtering, via a technique known as RLHF (reinforcement learning from human feedback) that seeks to either excise or at least curtail unsavory language from being emitted by the AI. For my coverage of why some people nonetheless ardently push generative AI and relish stoking hate speech and other untoward AI-generated foulness, see the link here.
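As a rough illustration of the filtering idea behind RLHF, consider the toy sketch below. It is not the real pipeline, which trains a reward model from human preference comparisons and then fine-tunes the base model with a reinforcement learning algorithm; it only shows the core notion that human feedback becomes a learned score used to steer outputs away from unsavory text. All names here are hypothetical.

```python
from typing import Callable, List

def pick_preferred(candidates: List[str],
                   reward_model: Callable[[str], float]) -> str:
    """Best-of-n selection: return the candidate the reward model rates highest."""
    return max(candidates, key=reward_model)

def toy_reward_model(text: str) -> float:
    """Hypothetical stand-in for a model trained on human preference labels."""
    banned = {"slur_placeholder", "threat_placeholder"}
    penalty = sum(text.lower().count(term) for term in banned)
    return 1.0 - 10.0 * penalty  # clean text scores higher

candidates = ["A polite, helpful reply.", "A reply containing slur_placeholder."]
print(pick_preferred(candidates, toy_reward_model))  # -> the clean reply
```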
When the initial scanning of the Internet takes place for data training of generative AI, the websites chosen to be scanned are generally aboveboard. Think of Wikipedia or similar kinds of websites. By and large, the text found there will be relatively safe and sane. The pattern-matching is getting a relatively sound basis for identifying the mathematical and computational patterns found within everyday human conversations and essays.
I'd like to bring to your attention that we can turn that crucial precept upside down.
Suppose that we purposely sought to use the worst of the worst that is posted on the Internet to do the data training for generative AI.
Imagine seeking out all those seedy websites that you would conventionally be embarrassed to even accidentally land on. The generative AI would be focused entirely and exclusively on this bad stuff. Indeed, we wouldn't try to somehow counterbalance the generative AI by using some of the everyday Internet and some of the atrocious Internet. We would fully mire the generative AI in the muck of wickedness on the Internet.
What would we get?
And why would we devise this kind of twisted or distorted variant of generative AI?
Those are great questions and I am going to answer them straightforwardly. As you will soon realize, some pundits believe data training generative AI on the ugly underbelly of the Internet is a tremendous idea and an altogether brilliant strategy. Others retort that this is not only a bad idea, it could be a slippery slope that leads to AI systems that are of an evil nature and we will regret the day that we allowed this to ever get underway.
Allow me a quick set of foundational remarks before we jump into the meat of this topic.
Please know that generative AI and indeed all manner of today's AI is not sentient. Despite all those blaring headlines that claim or imply that we already have sentient AI, we don't. Period, full stop. I will later on herein provide some speculation about what might happen if someday we attain sentient AI, but that's conjecture, and no one can say for sure when or if that will occur.
Modern generative AI is based on a complex computational algorithm that has been data trained on text from the Internet. Generative AI such as ChatGPT, GPT-4, Bard, and other similar AI apps entail impressive pattern-matching that can perform a convincing mathematical mimicry of human wording and natural language. For my explanation of how generative AI works, see the link here. For my analysis of the existent doomster fearmongering regarding AI as an existential risk, see the link here.
Into all of this comes a plethora of AI Ethics and AI Law considerations.
There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned and erstwhile AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws that are being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.
The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.
Now that we've covered those essentials about generative AI, let's look at the seemingly oddish or scary proposition of data training generative AI on the most stinky and malicious content available on the web.
The Dark Web Is The Perfect Foundation For Bad Stuff
There is a part of the Internet that you might not have visited that is known as the Dark Web.
The browsers that you normally use to access the Internet are primed to only explore a small fraction of the web known as the visible or surface-level web. There is a lot more content out there. Within that other content is a segment generally coined as the Dark Web and tends to contain all manner of villainous or disturbing content. Standard search engines do not usually look at Dark Web pages. All in all, you would need to go out of your way to see what is posted on the Dark Web, doing so by using specialized browsers and other online tools to get there.
What type of content might be found on the Dark Web, you might be wondering?
The content varies quite a bit. Some of it entails evildoers that are plotting takeovers or possibly contemplating carrying out terrorist attacks. Drug dealers find the Dark Web very useful. You can find criminal cyber hackers that are sharing tips about how to overcome cybersecurity precautions. Conspiracy theorists tend to like the Dark Web since it is a more secretive arena to discuss conspiratorial theories. And so on.
I'm not saying that the Dark Web is all bad, but at least be forewarned that it is truly the Wild West of the Internet, and just about anything goes.
In a research paper entitled "Dark Web: A Web of Crimes," there is a succinct depiction of the components of the Internet and the role of the Dark Web.
I realize it is perhaps chilling to suddenly realize that there is an entire segment of the Internet that you perhaps didn't know existed and that it is filled with abysmal content. Sorry to be the bearer of such gloomy news.
Maybe this will cheer you up.
The Dark Web is seemingly the ideal source of content to train generative AI if you are of the mind that data training on the worst of the worst is a presumably worthwhile and productive endeavor. Rather than having to try and bend over backward to find atrocious content on the conventional side of the Internet (admittedly, there is some of that there too), instead make use of a specialized web crawler aimed at the Dark Web and you can find a treasure trove of vile content.
Easy-peasy.
I know that I haven't yet explained why data training generative AI on the Internet's ugly underbelly is presumably useful, so let's get to that next. At least we now know that plentiful content exists for such a purpose.
What Does Dark Web-Trained Generative AI Provide?
I'll give you a moment to try and brainstorm some bona fide reasons for crafting generative AI that is based on foulness.
Any ideas?
Well, here's what some already proclaim are useful reasons.
Any discussion about the Dark Web should be careful to avoid pegging the Dark Web as exclusively a home of evildoers. There are various justifications for having a Dark Web.
For example, the researchers mentioned earlier provide a listing of legitimate reasons for having a Dark Web.
Given those positive facets of the Dark Web, you could argue that having generative AI trained on the Dark Web would potentially further aid those benefits. For example, enabling more people to find scarce products or discover content that has been entered anonymously out of fear of governmental reprisals.
In that same breath, you could also decry that the generative AI could severely and lamentably undercut those advantages by providing a means for, say, government crackdowns on those that are dissenting from government oppression. Generative AI based on the Dark Web might have a whole slew of unanticipated adverse consequences, including putting innocent people at risk who were otherwise using the Dark Web for ethically or legally sound purposes.
Ponder seriously and soberly whether we do or do not want generative AI that is based on the Dark Web.
The good news or bad news is that we already have that kind of generative AI. You see, the horse is already out of the barn.
Let's look at that quandary next.
The DarkGPT Bandwagon Is Already Underway
Training generative AI on the Dark Web has become something of a hip thing to do.
Some that do so have no clue as to why they are doing it. It just seems fun and exciting. They get a kick out of training generative AI on something other than what everyone else has been using. Others intentionally train generative AI on the Dark Web. Those with a particular purpose usually fall into one or more of the camps associated with the reasons given in the prior subsection.
All of this has given rise to a bunch of generative AI apps that are generically referred to as DarkGPT. I say generically because there are lots of these DarkGPT monikers floating around. Unlike the bona fide trademarked name such as ChatGPT that has spawned all kinds of GPT naming variations (I discuss the legal underpinnings of the trademark at the link here), the catchphrase or naming of DarkGPT is much more loosey-goosey.
Watch out for scams and fakes.
Here's what I mean. You are curious to play with a generative AI that was trained on the Dark Web. You do a cursory search for anything named DarkGPT or DarkWebGPT or any variation thereof. You find one. You decide to try it out.
Yikes, turns out that the app is malware. You have fallen into a miserable trap. Your curiosity got the better of you. Please be careful.
Legitimate Dark Web Generative AI
Next, I'll highlight a generative AI that was trained on the Dark Web, serves as a quite useful research-oriented exemplar, and can be a helpful role model for other akin pursuits.
The generative AI app is called DarkBERT and is described in a research paper entitled "DarkBERT: A Language Model for the Dark Side of the Internet" by researchers Youngjin Jin, Eugene Jang, Jian Cui, Jin-Woo Chung, Yongjae Lee, and Seungwon Shin (posted online on May 18, 2023). Several key points from their study stand out.
Let's briefly examine each of those key findings.
First, the researchers indicated that they were able to craft a Dark Web-based instance of generative AI with natural language fluency comparable to that of a generative AI trained on the conventionally visible web. This is certainly encouraging. If they had reported that their generative AI was less capable, the implication would be that we might not readily be able to apply generative AI to the Dark Web. That would have meant that efforts to do so would be fruitless, or that some other as-yet-unknown AI-tech innovation would be required to sufficiently do so.
The bottom line is that we can proceed to apply generative AI to the Dark Web and expect to get responsive results.
Secondly, it would seem that a generative AI solely trained on the Dark Web is likely to do a better job at pattern-matching the Dark Web than would a generative AI that was partially data-trained on the conventional web. Remember that earlier I mentioned that we might consider data training of generative AI that mixes both the conventional web and the Dark Web. We can certainly do so, but the result here seems to suggest that making queries and using the natural language facility of a Dark Web-specific generative AI is better suited to the task than a mixed model would be (there are various caveats and exceptions, thus this remains an open research avenue).
Third, the research examined closely the cybersecurity merits of having a generative AI that is based on the Dark Web, namely being able to detect or uncover potential cyber hacks that are on the Dark Web. That the generative AI seemed especially capable in this realm is a plus for those fighting cybercriminals. You can consider using Dark Web data-trained generative AI to pursue the wrongdoers who are aiming to commit cybercrimes.
You might be somewhat puzzled as to why the name of their generative AI is DarkBERT rather than referring to the now-classic acronym of GPT (generative pre-trained transformer). The BERT acronym is particularly well-known amongst AI insiders as the name of a family of language models devised by Google, coined BERT (Bidirectional Encoder Representations from Transformers). I thought you might like a smidgeon of AI insider terminology and thus be able to clear up that possibly vexing mystery.
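To make the BERT distinction tangible, here is a minimal sketch using the public bert-base-uncased checkpoint via the Hugging Face transformers library. DarkBERT itself is not assumed to be available here; the example only shows the masked-language-model style that the BERT part of the name refers to.

```python
from transformers import pipeline

# BERT-family models are bidirectional: they fill in a [MASK] token using
# context from BOTH sides, unlike GPT-style models that generate left to right.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

predictions = fill_mask("The stolen credentials were sold on an underground [MASK].")
for p in predictions[:3]:
    print(f"{p['token_str']:>12}  score={p['score']:.3f}")
```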
A quick comment overall before we move on. Research about generative AI and the Dark Web is still in its infancy. You are highly encouraged to jump into this evolving focus. There are numerous technological questions to be addressed. In addition, there are a plethora of deeply intriguing and vital AI Ethics and AI Law questions to be considered.
Of course, you'll need to be willing to stomach the stench or dreadful aroma that generally emanates from the Dark Web. Good luck with that.
When Generative AI Is Bad To The Bone
I've got several additional gotchas and thought-provoking considerations for you on this topic.
Let's jump in.
We know that conventional generative AI is subject to producing errors, along with emitting falsehoods, producing biased content, and even making up stuff (so-called AI hallucinations, a catchphrase I deplore, for the reasons explained at the link here). These maladies are a bone of contention when it comes to using generative AI in any real-world setting. You have to be careful of interpreting the results. The generated essays and interactive dialogues could be replete with misleading and misguided content produced by the generative AI. Efforts are hurriedly underway to try and bound these problematic concerns, see my coverage at the link here.
Put on your thinking cap and get ready for a twist.
What happens if generative AI that is based on the Dark Web encounters errors, falsehoods, biases, or AI hallucinations?
In a sense, we are in the same boat as the issues confronting conventional generative AI. The Dark Web generative AI might showcase an indication that seems to be true but is an error or falsehood. For example, you decide to use Dark Web data-trained generative AI to spot a cyber crook. The generative AI tells you that it found a juicy case on the Dark Web. Upon further investigation with other special browsing tools, you discover that the generative AI falsely made that accusation.
Oops, not cool.
We need to always keep our guard up when it comes to both conventional generative AI and Dark Web-based generative AI.
Heres another intriguing circumstance.
People have been trying to use conventional generative AI for mental health advice. I've emphasized that this is troublesome for a host of disconcerting reasons, see my analysis at the link here and the link here, just to name a few. Envision that a person is using conventional or "clean" generative AI for personal advice about something, and the generative AI emits an AI hallucination telling the person to take actions in a dangerous or unsuitable manner. I'm sure you can see the qualms underlying this use case.
A curious and serious parallel would be if someone opted to use a Dark Web-based generative AI for mental health advice. We might assume that this baddie generative AI is likely to generate foul advice from the get-go.
Is it bad advice that would confuse and confound evildoers? I suppose we might welcome that possibility. Maybe it is bad advice in the sense that it is actually good advice from the perspective of a wrongdoer. Generative AI might instruct the evildoer on how to better achieve evil deeds. Yikes!
Or, in a surprising and uplifting consideration, might there be some other mathematical or computational pattern-matching contrivance that manages to rise above the flotsam used during the data training? Could there be lurking within the muck a ray of sunshine?
A bit dreamy, for sure.
More research needs to be done.
Speaking of doing research and whatnot, before you run out to start putting together a generative AI instance based on the Dark Web, you might want to check out the licensing stipulations of the AI app. Most of the popular generative AI apps have a variety of keystone restrictions. People using ChatGPT for example are typically unaware that there are a bunch of prohibited uses.
For example, as I've covered at the link here, there is a range of explicitly prohibited uses that you cannot undertake with ChatGPT.
If you were to develop a generative AI based on the Dark Web, you presumably might violate those kinds of licensing stipulations as per whichever generative AI app you decide to use. On the other hand, one supposes that as long as you use the generative AI for the purposes of good, such as trying to ferret out evildoers, you would potentially be working within the stated constraints of the licensing. This is all a legal head-scratcher.
One final puzzling question for now.
Will we have bad-doers that purposely devise or seek out generative AI that is based on the Dark Web, hoping to use the generative AI to further their nefarious pursuits?
I sadly note that the answer is assuredly yes, this is going to happen and is undoubtedly already happening. AI tools tend to have a dual-use capability, meaning that you can turn them toward goodness and yet also turn them toward badness, see my discussion on AI-based Dr. Evil Projects at the link here.
Conclusion
To end this discussion on the Dark Web-based generative AI, I figured we might take a spirited wooded hike into the imaginary realm of the postulated sentient AI. Sentient AI is also nowadays referred to as Artificial General Intelligence (AGI). For a similar merry romp into a future of sentient AI, see my discussion at the link here.
Sit down for what I am about to say next.
If the AI of today is eventually heading toward sentient AI or AGI, are we setting ourselves up for a devil of a time by proceeding right now to create instances of generative AI that are based on the Dark Web?
Here's the unnerving logic. We introduce generative AI to the worst of the worst of humankind. The AI pattern-matches on it. A sentient AI would presumably have this within its reach. The crux is that this could become the keystone for how the sentient AI or AGI decides to act. By our own hand, we are creating a foundation showcasing the range and depth of humanity's evildoing and displaying it in all its glory for the AGI to examine or use.
Some say it is the perfect storm for making a sentient AI that will be armed to wipe out humankind. Another related angle is that the sentient AI will be so disgusted by this glimpse into humankind, the AGI will decide it is best to enslave us. Or maybe wipe us out, doing so with plenty of evidence as to why we ought to go.
I don't want to conclude on a doom-and-gloom proposition, so give me a chance to liven things up.
Turn this unsettling proposition on its head.
By the sentient AI being able to readily see the worst of the worst about humanity, the AGI can use this to identify how to avoid becoming the worst of the worst. Hooray! You see, by noting what should not be done, the AGI will be able to identify what ought to be done. We are essentially doing ourselves a great service. The crafting of Dark Web-based generative AI will enable AGI to fully discern what is evil versus what is good.
We are cleverly saving ourselves by making sure that sentient AI is up to par on good versus evil.
Marcus Tullius Cicero, the famed Roman statesman, said this: "The function of wisdom is to discriminate between good and evil." Perhaps by introducing AI to both the good and evil of humankind, we are setting ourselves up for a wisdom-based AGI that will be happy to keep us around. Maybe even help us to steer toward being good more than we are evil.
That's your happy ending for the saga of the emergent sentient AI. I trust that you will now be able to get a good night's sleep on these weighty matters. Hint: Try to stay off the Dark Web to get a full night's slumber.
Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) with more than 6.8 million amassed views of his AI columns. As a seasoned executive and high-tech entrepreneur, he combines practical industry experience with deep academic research. Previously a professor at USC and UCLA, and head of a pioneering AI Lab, he frequently speaks at major AI industry events. Author of over 50 books, 750 articles, and 400 podcasts, he has made appearances on media outlets such as CNN and co-hosted the popular radio show Technotrends. He has been an adviser to Congress and other legislative bodies and has received numerous awards and honors. He serves on several boards, has worked as a venture capitalist, an angel investor, and a mentor to founder entrepreneurs and startups.
Why we need a "Manhattan Project" for A.I. safety – Salon
Artificial intelligence is advancing at a breakneck pace. Earlier this month, one of the world's most famous AI researchers, Geoffrey Hinton, left his job at Google to warn us of the existential threat it poses. Executives of the leading AI companies are making the rounds in Washington to meet with the Biden administration and Congress to discuss its promise and perils. This is what it feels like to stand at the hinge of history.
This is not about consumer-grade AI, the use of products like ChatGPT and DALL-E to write articles and make art. While those products certainly pose a material threat to certain creative industries, the future threat of which I speak is that of AI being used in ways that threaten life itself: say, to design deadly bioweapons, serve as autonomous killing machines, or aid and abet genocide. Certainly, the sudden advent of ChatGPT was, to the general public, akin to a rabbit being pulled out of a hat. Now imagine what another decade of iterations on that technology might yield in terms of intelligence and capabilities. It could even yield an AGI, meaning a type of AI that can accomplish any cognitive task that humans can.
In fact, the threat of God-like AI has loomed large on the horizon since computer scientist I. J. Good warned of an "intelligence explosion" in the 1960s. But efforts to develop guardrails have sputtered for lack of resources. The newfound public and institutional impetus allows us for the first time to compel the tremendous initiative we need, and this window of opportunity may not last long.
As a sociologist and statistician who studies technological change, I find this situation extremely concerning. I believe governments need to fund an international, scientific megaproject even more ambitious than the Manhattan Project, the 1940s nuclear research project pursued by the U.S., the U.K., and Canada to build bombs to defeat the unprecedented global threat of the Axis powers in World War II.
This "San Francisco Project," named for the industrial epicenter of AI, would have the urgent and existential mandate of the Manhattan Project but, rather than building a weapon, it would bring the brightest minds of our generation to solve the technical problem of building safe AI. The way we build AI today is more like growing a living thing than assembling a conventional weapon, and frankly, the mathematical reality of machine learning is that none of us have any idea how to align an AI with social values and guarantee its safety. We desperately need to solve these technical problems before AGI is created.
We can also take inspiration from other megaprojects like the International Space Station, Apollo Program, Human Genome Project, CERN, and DARPA. As cognitive scientist Gary Marcus and OpenAI CEO Sam Altman told Congress earlier this week, the singular nature of AI compels a dedicated national or international agency to license and audit frontier AI systems.
Present-day harms of AI are undeniably escalating. AI systems reproduce race, gender, and other biases from their training data. An AI trained on pharmaceutical data in 2022 to design non-toxic chemicals had its sign flipped and quickly came up with recipes for nerve gas and 40,000 other lethal compounds. This year, we saw the first suicide attributed to interaction with a chatbot, EleutherAI's GPT-J, and the first report of a faked kidnapping and ransom call using an AI-generated voice of the purported victim.
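To see what "had its sign flipped" means in practice, consider the hedged sketch below: the same search loop, with its objective negated. Every name here is hypothetical; this only illustrates the failure mode, not the actual drug-discovery system.

```python
import random

def predicted_toxicity(candidate: str) -> float:
    """Hypothetical stand-in for a learned toxicity predictor (higher = worse)."""
    random.seed(candidate)
    return random.random()

def search(candidates: list[str], minimize_toxicity: bool = True) -> str:
    # The intended use ranks candidates by LOW predicted toxicity...
    sign = 1.0 if minimize_toxicity else -1.0
    # ...negating the sign turns the very same loop into a generator of
    # maximally toxic candidates.
    return min(candidates, key=lambda c: sign * predicted_toxicity(c))

molecules = [f"candidate_{i}" for i in range(1000)]
safest = search(molecules, minimize_toxicity=True)
most_dangerous = search(molecules, minimize_toxicity=False)
```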
Bias, inequality, weaponization, breaches of cybersecurity, invasions of privacy, and many other harms will grow and fester alongside accelerating AI capabilities. Most researchers think that AGI will arrive by 2060, and a growing number expect cataclysm within a decade. Chief doomsayer Eliezer Yudkowsky recently argued that the most likely AGI outcome "under anything remotely like the current circumstances, is that literally everyone on Earth will die."
Complete annihilation may seem like science fiction, but if AI begins to self-improve (modify its own cognitive architecture and build its own AI workers, like those in Auto-GPT), any misalignment of its values with our own will be astronomically magnified. We have very little control over what happens to today's AI systems as we train them. We pump them full of books, websites, and millions of other texts so they can learn to speak like a human, and we dictate the rules for how they learn from each piece of data, but even leading computer scientists have very little understanding of how the resultant AI system actually works.
One of the most impressive interpretability efforts to date sought simply to locate where in its neural network edifice GPT-2 stores the knowledge that the capital of Italy is Rome, but even that finding has been called into question by other researchers. The favored metaphor in 2023 has been a Lovecraftian shoggoth, an alien intelligence on which we strap a yellow smiley-face mask, but the human-likeness is fleeting and superficial.
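The interpretability work mentioned above involves causal interventions on a model's internal activations and is far more involved than anything shown here. The short sketch below, using the open-source GPT-2 model from the Hugging Face transformers library, only reproduces the surface behavior those researchers were trying to localize: the model completing "The capital of Italy is" with "Rome."

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The capital of Italy is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # shape: [1, seq_len, vocab_size]

next_token_id = logits[0, -1].argmax().item()  # most likely next token
print(repr(tokenizer.decode([next_token_id]))) # expected: ' Rome'
```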
With the black magic of AI training, we could easily stumble upon a digital mind with goals that make us mere collateral damage. The AI has an initial goal and gets human feedback on the output produced by that goal. Every time it makes a mistake, the system picks a new goal that it hopes will do a little better. This guess-and-check method is an inherently dangerous way to learn because most goals that do well on human feedback in the lab do not generalize well to a superintelligence taking action in the real world.
Among all the goals an AI could stumble upon that elicit positive human feedback, there is instrumental convergence toward dangerous tendencies of deception and power-seeking. To best achieve a goal (say, filling a cauldron with water, as in the classic story of The Sorcerer's Apprentice), a superintelligence would be incentivized to gather resources to ensure that goal is achieved, like filling the whole room with water to ensure that the cauldron never empties. There are many alien goals the AI could land on, and unless it happens to land on exactly the goal that matches what humans want from it, it might just act as if it is safe and friendly while figuring out how to best take over and optimize the world to ensure its success.
In response to these dangerous advances, concrete and hypothetical, recent discourse has centered on proposals to slow down AI research, including the March 22nd open letter calling for a 6-month pause on training systems more powerful than GPT-4, signed by some of the world's most famous AI researchers including Yoshua Bengio and Stuart Russell.
That approach is compelling but politically infeasible given the massive profit potential and the difficulty in regulating machine learning software. In the delicate balance of AI capabilities and safety, we should consider pushing up the other end, funding massive amounts of AI safety research. If the future of AI is as dangerous as computer scientists think, this may be a moonshot we desperately need.
As a sociologist and statistician, I study the interwoven threads of social and technological change. Using computational tools like word embeddings alongside traditional research methods like interviews with AI engineers, my team and I built a model of how expert and popular understanding of AI has changed over time. Before 2022, our model focused on the landmark years of 2012, when the modern AI paradigm of deep learning took hold in the computer science firmament, and 2016, when, we argue, the public and corporate framing of AI inflected from science fiction and radical futurism to an incremental real-world technology being integrated across industries such as healthcare and security.
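For readers curious about the word-embedding method named here, the sketch below shows the general approach in miniature, not the author's actual model: train separate embeddings on documents from two periods and compare which words sit closest to "ai" in each. The toy corpora are invented for illustration.

```python
from gensim.models import Word2Vec

corpus_2012 = [["ai", "research", "lab", "science", "fiction", "robot"],
               ["deep", "learning", "image", "recognition", "benchmark"]]
corpus_2023 = [["ai", "chatbot", "product", "industry", "healthcare"],
               ["generative", "ai", "regulation", "enterprise", "security"]]

def neighbors(sentences, target="ai", topn=3):
    model = Word2Vec(sentences, vector_size=50, window=5, min_count=1,
                     epochs=200, seed=1)
    return [word for word, _ in model.wv.most_similar(target, topn=topn)]

print("2012-era neighbors of 'ai':", neighbors(corpus_2012))
print("2023-era neighbors of 'ai':", neighbors(corpus_2023))
# A shift in nearest neighbors, e.g. away from "fiction" and "robot" toward
# "product" and "industry", is the kind of signal used to track how the
# framing of AI changes over time.
```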
Our model changed in late 2022 after seeing the unprecedented social impact of ChatGPT's launch: it quickly became the fastest growing app in history, outpacing even the viral social media launches of Instagram and TikTok.
This public spotlight on AI provides an unprecedented opportunity to start the San Francisco Project. The "SFP" could take many forms with varying degrees of centralization to bring our generation's brightest minds to AI safety: a single, air-gapped facility that houses researchers and computer hardware; a set of major grants to seed and support multi-university AI safety labs alongside infrastructure to support their collaboration; or major cash prizes for outstanding research projects, perhaps even a billion-dollar grand prize for an end-to-end solution to the alignment problem. In any case, it's essential that such a project stay laser-focused on safety and alignment lest it become yet another force pushing forward the dangerous frontier of unmitigated AI capabilities.
It may be inauspicious to compare AI safety technology with the rapid nuclear weaponization of the Manhattan Project. In 1942, shortly after it began, the world's first nuclear chain reaction was ignited just a few blocks from where I sit at the University of Chicago. In July 1945, the world's first nuclear weapon was tested in New Mexico, and a month later, the bombs fell on Hiroshima and Nagasaki.
The San Francisco Project could end the century of existential risk that began when the Manhattan Project first made us capable of self-annihilation. The intelligence explosion will happen soon whether humanity is ready or not; either way, AGI will be our species' final invention.
What is AGI? The Artificial Intelligence that can do it all – Fox News
With the release of ChatGPT last year, a renewed focus was placed on AGI, or artificial general intelligence, the advanced technology with capabilities similar to those of humans.
And while some argue GPT-4, the latest version of the technology, appears close to AGI, others say it is years, or decades, before the technology reaches human-like abilities.
There is no single agreed-upon definition of AGI, but a 2020 report from consulting giant McKinsey said a true AGI would need to master skills like sensory perception, fine motor skills, and natural language understanding.
Dr. Michael Capps, the co-founder and CEO of Diveplane, said AGI is "an AI that can do anything, and maybe as well or better than a human."
"What's really neat about that is now we can deploy them in all different facets of life, and hopefully do all the boring stuff," he said.
A technology that advanced, though, gives some pause, Capps warned.
"The downside is AGIs can learn quickly. Suddenly you have something that's way smarter than a 3-year-old, or an 18-year-old, or Einstein. That's where people start getting a little nervous: how do we even understand what that may be?" he added.
Christopher Alexander, the chief communications officer of Liberty Blockchain, told Fox News Digital in his view, AGI would be an "operator," allowing him to have a conversation with it like he would with an analyst.
But, Alexander argued, the current AI models, such as GPT-4, are nowhere near a true AGI.
"It is nowhere, nowhere near that, and I think it's important to recognize that," he said.
"So when? I think supercomputing power is probably going to be a major factor," Alexander added.
Capps also emphasized that current AI models do not reach the level of AGI, and that there is no set time for when the technology will reach human-like abilities.
"I think the neat thing is, no one knows," he said. "The average AI scientist probably thinks we're 20, 15 years away. But once it happens, it's going to be really fast."
Others, however, view the recent developments in generative AI, such as GPT-4, as advancements in the direction of AGI.
An April report from Microsoft Research said GPT-4, the latest version of ChatGPT, exhibited "more general intelligence than previous AI models."
"Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system," a summary of the report read.
ChatGPT was released last year, and allows users to have conversations with an AI chatbot, with the ability to write text, songs, poems and even computer code. Microsoft has integrated the technology in its Bing search engine.
Artificial intelligence GPT-4 shows ‘sparks’ of common sense, human-like reasoning, finds Microsoft – Down To Earth Magazine
OpenAI's more powerful version of ChatGPT, GPT-4, can be trained to reason and use common sense like humans, a new study by Microsoft has found.
GPT-4 is a significant step towards artificial general intelligence (AGI) and can reason, plan and learn from experience at the same level as humans do, or possibly above them, the analysis found.
The AI is part of a new cohort of large language models (LLM), including ChatGPT and Google's PaLM. LLMs can be trained on massive amounts of data and fed both images and text to come up with answers.
Microsoft invested billions of dollars in OpenAI and had access to GPT-4 before it was launched publicly. The company recently released a 155-page analysis, Sparks of Artificial General Intelligence: Early Experiments with GPT-4.
GPT-4 is also used to power Microsoft's Bing Chat feature.
The research team discovered that LLMs can be trained to reason and use common sense like humans. They demonstrated GPT-4 can solve complex tasks in several fields without special prompting, including mathematics, vision, medicine, law and psychology.
The system available to the public is not as powerful as the version they tested, Microsoft said.
The paper gave several examples of how the AI seemed to understand concepts, like what a unicorn is. GPT-4 drew a unicorn in TikZ, a LaTeX-based drawing language. In the crude drawings, GPT-4 got the concept of a unicorn right.
To demonstrate the difference between true learning and memorisation, researchers asked GPT-4 to "Draw a unicorn in TikZ" three times over the course of one month. The AI showed a clear evolution in the sophistication of the drawings. (Source: Microsoft)
GPT-4 also exhibited more common sense than previous models, like ChatGPT, OpenAI said. Both GPT-4 and ChatGPT were asked to stack a book, nine eggs, a laptop, a bottle and a nail.
While ChatGPT recommended placing the eggs on top of the nail, the more sophisticated model arranged the items so the eggs would not break.
However, the report acknowledged that AI still has limitations and biases and users were warned to be careful. GPT is still not fully reliable because it still hallucinates facts and makes reasoning and basic arithmetic errors.
The analysis read:
"While GPT-4 is at or beyond human-level for many tasks, overall, its patterns of intelligence are decidedly not human-like. However, GPT-4 is almost certainly only a first step towards a series of increasingly generally intelligent systems, and in fact, GPT-4 itself has improved throughout our time testing it."
However, the paper warned users to be careful, pointing to limitations like confidence calibration, cognitive fallacies and irrationality, and challenges with sensitivity to inputs.
"Great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional context or avoiding high-stakes uses altogether) matching the needs of a specific use-case," it said.
Programmed Values: The Role of Intention in Developing AI – Psychology Today
The advent of artificial intelligence (AI) seems to have sent shockwaves through the world. Not a day passes without my clients and students contemplating its effects. James, a young journalist, told me, "Maybe AIs will be wonderful assistants when I do research," while Ravi, an electrical engineering student, is less optimistic: "In five years, our society will be completely changed by the influence of AI." He adds, "The problem is that AI will develop in ways we can't foresee, with exponential speed." He looks downcast. "I find that deeply depressing, and it leaves me sleepless at night."
AI is a widespread source of anxiety. Many worry that their jobs will be obsolete since AI can do some tasks faster and better than humans. Tristan Harris, from the Center for Humane Technology, warns that privacy can easily be violated in many areas of our lives. He also suggests that AI systems can aggravate existing societal biases and discrimination. Malicious actors can exploit vulnerabilities in AI systems, leading to the manipulation of public opinion. This could lead to negative psychological impacts on marginalized individuals or communities. He is especially concerned about the potential for AI to increase polarization. Intelligent algorithms could radicalize moderate consumers to capture attention.
Juan, a young scientist from our local university interested in existential and spiritual concerns, has an exciting view. "I wish we could program a longing for wisdom and goodness into our AI. Then AI could influence our society in a positive, compassionate way."
Leike, Schulman, and Wu speak about the AI alignment problem, which describes the degree of correspondence between the values and motivations of humans and AI. They tell us, "Unaligned AGI could pose significant risks to humanity, and solving the AGI (Artificial General Intelligence) alignment problem could be so difficult that it will require all humanity to work together."
Philosopher Nick Bostrom notes that to build an AI that acts safely while acting in many domains, with many consequences, including problems engineers never explicitly envisioned, one must specify good behavior explicitly, as in "X, such that the consequence of X is not harmful to humans." In brief, many AI researchers and thinkers believe good, human-compatible intentions must be explicitly programmed into how AI systems are designed. AI systems carry the intentions of the people who create them, whether coded intentionally or not.
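As a toy illustration of Bostrom's point, and nothing more than that, the sketch below makes the "not harmful to humans" condition an explicit, programmed check rather than an implicit hope. Real alignment work is nothing like this simple, and predicted_harm is a hypothetical placeholder.

```python
from typing import Iterable, Optional

def predicted_harm(action: str) -> float:
    """Hypothetical estimate of harm to humans from taking `action` (0 = none)."""
    return {"recommend_rest": 0.0, "withhold_medication": 0.9}.get(action, 0.5)

def choose_action(candidates: Iterable[str], harm_budget: float = 0.1) -> Optional[str]:
    """Pick the first candidate whose predicted consequences stay within the budget."""
    for action in candidates:
        if predicted_harm(action) <= harm_budget:
            return action
    return None  # refuse to act rather than step outside the specified constraint

print(choose_action(["withhold_medication", "recommend_rest"]))  # -> "recommend_rest"
```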
AI reflects the intention of the person who created it. If the intention is to make a profit, then that is what the AI will do. If the AI intends to replace a human's work, then that is what the AI will do. If the intention is to mimic a human's voice and expression, then that is what the AI will do. AI has no inherent sense of caring, intuition, or intrinsic conscience.
To tackle the enormous problem of how to ethically and safely achieve the goal of integrating AI into society, we need to build alignment and intention. Alignment refers to the state of congruence between values, beliefs, and goals. In terms of purpose, it involves aligning our actions and plans with a more profound sense of purpose in life. Intentions play a crucial role in shaping our experiences and outcomes and help us to stay focused, motivated, and in tune with our goals, even when we face challenges. By being fully present with our inner values, we align our intentions with our core values. When alignment and intention work, they create a powerful positive synergy.
That is why we have to clarify our intention, tune into our intuition, and be conscious when programming AI. We need the utmost clarity and self-awareness as an individual, a group of scientists, a society, or an international decision-making body.
Buddhist psychology can help us here. It emphasizes the importance of cultivating wholesome intentions. Wholesome intention originates in our generosity, loving kindness, compassion, and the absence of harmful desires. This leads indirectly to positive outcomes by promoting behaviors aligned with those intentions. Meditation and mindfulness allow individuals to make conscious choices that lead to well-being and spiritual progress.
It is important to note that Buddhist psychology does not solely focus on intentions but also considers the actual consequences of actions. Intention alone is insufficient to determine an action's ethical value, as the outcome and impact on oneself and others are also considered. However, intention serves as a crucial starting point and a significant factor in determining the ethical quality of an action.
I recommend that we lean on positive and Buddhist psychology to develop a playbook for making AI useful for individuals, groups, and societies so we successfully support our health and well-being. The potential threats of unaligned AI are undoubtedly immense, but we are not powerless in their midst. Careful alignment and encoding of positive human values in AI systems require us to understand our values, intentions, and motivations. Here, Buddhist psychology can serve as a guide, offering practices to listen, discover our intentions, and more deeply align them with our best interests.
Fear the fire or harness the flame: The future of generative AI – VentureBeat
Generative AI has taken the world by storm. So much so that in the last several months, the technology has twice been a major feature on CBS's 60 Minutes. The rise of startlingly conversant chatbots such as ChatGPT has even prompted warnings of runaway technology from some luminary artificial intelligence (AI) experts. While the current state of generative AI is clearly impressive (perhaps dazzling would be a better adjective), it might be even further advanced than is generally understood.
This week, The New York Times reported that some researchers in the tech industry believe these systems have moved toward something that cannot be explained as a "stochastic parrot," a system that simply mimics its underlying dataset. Instead, they are seeing "an AI system that is coming up with humanlike answers and ideas that weren't programmed into it." This observation comes from Microsoft and is based on responses to their prompts from OpenAI's ChatGPT.
Their view, as put forward in a research paper published in March, is that the chatbot showed "sparks of artificial general intelligence" (AGI), the term for a machine that attains the resourcefulness of human brains. This would be a significant development, as AGI is thought by most to still be many years, possibly decades, into the future. Not everyone agrees with their interpretation, but Microsoft has reorganized parts of its research labs to include multiple groups dedicated to exploring this AGI idea.
Separately, Scientific American described several similar research outcomes, including one from philosopher Raphaël Millière of Columbia University. He typed a program into ChatGPT, asking it to calculate the 83rd number in the Fibonacci sequence.
"It's multistep reasoning of a very high degree," he said.
The chatbot nailed it. It shouldn't have been able to do this, since it isn't designed to manage a multistep process. Millière hypothesized that the machine improvised a memory within the layers of its network, another AGI-style behavior, for interpreting words according to their context. Millière believes this behavior is much like how nature repurposes existing capacities for new functions, such as the evolution of feathers for insulation before they were used for flight.
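For reference, this is the kind of multistep computation Millière was describing. Indexing conventions for the Fibonacci sequence differ; the sketch below assumes F(1) = F(2) = 1. What impressed the researcher was not the arithmetic itself but that the chatbot tracked an iterative procedure it was never explicitly designed to execute.

```python
def fibonacci(n: int) -> int:
    """Iteratively compute the n-th Fibonacci number (1-indexed, F(1) = F(2) = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(83))  # 99194853094755497 under this indexing convention
```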
Arguably already showing early signs of AGI, developers continue to make advances with large language models (LLMs). Late last week, Google announced significant upgrades to their Bard chatbot. This upgrade included moving Bard to the new PaLM 2 large language model. Per a CNBC report, PaLM 2 uses almost five times as much training data as its predecessor from 2022, allowing it to perform more advanced coding, math and creative writing tasks. Not to be outdone, OpenAI this week started to make plug-ins available for ChatGPT, including the ability to access the Internet in real time instead of relying solely on a dataset with content through 2021.
At the same time, Anthropic announced an expanded context window for their Claude chatbot. Per a LinkedIn post from AI expert Azeem Azhar, a context window is the length of text that an LLM can process and respond to.
"In a sense, it is like the memory of the system for a given analysis or conversation," Azhar wrote. "Larger context windows allow the systems to have much longer conversations or to analyze much bigger, more complex documents."
According to this post, the window for Claude is now about three times larger than that for ChatGPT.
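As a rough sketch of what a context window constrains, the snippet below counts and truncates tokens with OpenAI's open-source tiktoken tokenizer. Token counts are model-specific; Anthropic's Claude uses its own tokenizer, so its numbers would differ, and the window size used here is just an assumed example.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # an OpenAI tokenizer, used as an approximation

def fits_in_window(document: str, window_tokens: int) -> bool:
    """True if the whole document fits inside the assumed context window."""
    return len(enc.encode(document)) <= window_tokens

def truncate_to_window(document: str, window_tokens: int) -> str:
    """Keep only as much of the document as the context window can hold."""
    tokens = enc.encode(document)
    return enc.decode(tokens[:window_tokens])

report = "quarterly results and risk analysis " * 3000
print(fits_in_window(report, window_tokens=8_000))          # False for this long text
print(len(enc.encode(truncate_to_window(report, 8_000))))   # roughly 8,000 tokens
```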
All of which is to say that if ChatGPT exhibited sparks of AGI in research performed several months ago, the state of the art has already surpassed these capabilities. That said, there remain numerous shortcomings to these models, including occasional hallucinations, where they simply make up answers. But it is the speed of advances that has spooked many and led to urgent calls for regulation. However, Axios reports that the likelihood that lawmakers in the U.S. will unite and act on AI regulation before the technology rapidly develops remains slim.
Those who see an existential danger from AI worry that AI could destroy democracy or humanity. This group of experts now includes Geoffrey Hinton, the "Godfather of AI," along with long-time AI doomsayers such as Eliezer Yudkowsky. The latter has said that by building a superhumanly smart AI, "literally everyone on Earth will die."
While not nearly as dire in their outlook, even the executives of leading AI companies (including Google, Microsoft, and OpenAI) have said they believe AI regulation is necessary to avoid potentially damaging outcomes.
Amid all of this angst, Casey Newton, author of the Platformer newsletter, recently wrote about how he should approach what is essentially a paradox. Should his coverage emphasize the hope that AI is the best of us and will solve complex problems and save humanity, or should it instead speak to how AI is the worst of us, obfuscating the truth, destroying trust and, ultimately, humanity?
There are those who believe the worries are overblown. Instead, they see this response as a reactionary fear of the unknown, or what amounts to technophobia. For example, essayist and novelist Stephen Marche wrote in The Guardian that tech doomerism is "a species of hype."
He blames this in part on the fears of engineers who build the technology but who simply have no idea how their inventions interact with the world. Marche dismisses the worry that AI is about to take over the world as anthropomorphizing and storytelling; "it's a movie playing in the collective mind, nothing more." Demonstrating how in thrall we are to these themes, a new movie expected this fall pits humanity against the forces of AI in a planet-ravaging war for survival.
A common sense approach was expressed in an opinion piece from Professor Ioannis Pitas, chair of the International AI Doctoral Academy. Pitas believes AI is a necessary human response to a global society and physical world of ever-increasing complexity. He sees the positive impact of AI systems greatly outweighing their negative aspects if proper regulatory measures are taken. In his view, AI should continue to be developed, but with regulations to minimize already evident and potential negative effects.
This is not to say there are no dangers ahead with AI. Alphabet CEO Sundar Pichai has said: "AI is one of the most important things humanity is working on. It is more profound than electricity or fire."
Perhaps fire provides a good analogy. There have been many mishaps in handling fire, and these still occasionally occur. Fortunately, society has learned to harness the benefits of fire while mitigating its dangers through standards and common sense. The hope is that we can do the same thing with AI before we are burned by the sparks of AGI.
Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
The Senate’s hearing on AI regulation was dangerously friendly – The Verge
The most unusual thing about this week's Senate hearing on AI was how affable it was. Industry reps, primarily OpenAI CEO Sam Altman, merrily agreed on the need to regulate new AI technologies, while politicians seemed happy to hand over responsibility for drafting rules to the companies themselves. As Senator Dick Durbin (D-IL) put it in his opening remarks: "I can't recall when we've had people representing large corporations or private sector entities come before us and plead with us to regulate them."
This sort of chumminess makes people nervous. A number of experts and industry figures say the hearing suggests we may be headed into an era of industry capture in AI. If tech giants are allowed to write the rules governing this technology, they say, it could cause a number of harms, from stifling smaller firms to introducing weak regulations.
Industry capture could harm smaller firms and lead to weak regulations
Experts at the hearing included IBM's Christina Montgomery and noted AI critic Gary Marcus, who also raised the specter of regulatory capture. ("The peril," said Marcus, "is that we make it appear as if we are doing something, but it's more like greenwashing and nothing really happens; we just keep out the little players.") And although no one from Microsoft or Google was present, the unofficial spokesperson for the tech industry was Altman.
Although Altman's OpenAI is still called a startup by some, it's arguably the most influential AI company in the world. Its launch of image and text generation tools like ChatGPT, and its deals with Microsoft to remake Bing, have sent shockwaves through the entire tech industry. Altman himself is well positioned: able to appeal to both the imaginations of the VC class and hardcore AI boosters, with grand promises to build superintelligent AI and, maybe one day, in his own words, "capture the light cone of all future value in the universe."
At the hearing this week, he was not so grandiose. Altman, too, mentioned the problem of regulatory capture, but was less clear about his thoughts on licensing smaller entities. "We don't wanna slow down smaller startups. We don't wanna slow down open source efforts," he said, adding, "We still need them to comply with things."
Sarah Myers West, managing director of the AI Now Institute, tells The Verge she was suspicious of the licensing system proposed by many speakers. "I think the harm will be that we end up with some sort of superficial checkbox exercise, where companies say 'yep, we're licensed, we know what the harms are and can proceed with business as usual,' but don't face any real liability when these systems go wrong," she said.
Requiring a license to train models would ... further concentrate power in the hands of a few
Other critics, particularly those running their own AI companies, stressed the potential threat to competition. "Regulation invariably favours incumbents and can stifle innovation," Emad Mostaque, founder and CEO of Stability AI, told The Verge. Clem Delangue, CEO of AI startup Hugging Face, tweeted a similar reaction: "Requiring a license to train models would be like requiring a license to write code. IMO, it would further concentrate power in the hands of a few & drastically slow down progress, fairness & transparency."
But some experts say some form of licensing could be effective. Margaret Mitchell, who was forced out of Google alongside Timnit Gebru after authoring a research paper on the potential harms of AI language models, describes herself as "a proponent of some amount of self-regulation, paired with top-down regulation." She told The Verge that she could see the appeal of certification, but perhaps for individuals rather than companies.
"You could imagine that to train a model (above some thresholds) a developer would need a 'commercial ML developer license,'" said Mitchell, who is now chief ethics scientist at Hugging Face. "This would be a straightforward way to bring responsible AI into a legal structure."
Mitchell added that good regulation depends on setting standards that firms can't easily bend to their advantage, and that this requires a nuanced understanding of the technology being assessed. She gives the example of facial recognition firm Clearview AI, which sold itself to police forces by claiming its algorithms are "100 percent accurate." This sounds reassuring, but experts say the company used skewed tests to produce these figures. Mitchell added that she generally does not trust Big Tech to act in the public interest. "Tech companies [have] demonstrated again and again that they do not see respecting people as a part of running a company," she said.
Even if licensing is introduced, it may not have an immediate effect. At the hearing, industry representatives often drew attention to hypothetical future harms and, in the process, gave scant attention to known problems AI already enables.
For example, researchers like Joy Buolamwini have repeatedly identified problems with bias in facial recognition, which remains inaccurate at identifying Black faces and has produced many cases of wrongful arrest in the US. Despite this, AI-driven surveillance was not mentioned at all during the hearing, while facial recognition and its flaws were only alluded to once in passing.
Industry figures often stress future harms of AI to avoid talking about current problems
AI Now's West says this focus on future harms has become a common rhetorical sleight of hand among AI industry figures. These individuals "position accountability right out into the future," she said, generally by talking about artificial general intelligence, or AGI: a hypothetical AI system smarter than humans across a range of tasks. Some experts suggest we're getting closer to creating such systems, but this conclusion is strongly contested.
This rhetorical feint was obvious at the hearing. Discussing government licensing, OpenAI's Altman quietly suggested that any licenses need only apply to future systems. "Where I think the licensing scheme comes in is not for what these models are capable of today," he said. "But as we head towards artificial general intelligence, that's where I personally think we need such a scheme."
Experts compared Congress's (and Altman's) proposals unfavorably to the EU's forthcoming AI Act. The current draft of that legislation does not include mechanisms comparable to licensing, but it does classify AI systems by level of risk and impose varying requirements for safeguards and data protection. More notable, though, are its clear prohibitions on known and currently harmful AI use cases, like predictive policing algorithms and mass surveillance, which have attracted praise from digital rights experts.
As West says, "That's where the conversation needs to be headed if we're going for any type of meaningful accountability in this industry."
See the article here:
The Senate's hearing on AI regulation was dangerously friendly - The Verge
AI glossary: words and terms to know about the booming industry – NBC News
The artificial intelligence (AI) boom has brought with it a cornucopia of jargon, from "generative AI" to "synthetic data," that can be hard to parse. And as hard as it is to really understand what AI is (see our explainer for that), having a working knowledge of AI terms can help make sense of this technology.
As part of our series explaining the basics of AI, here is a short glossary of terms that will hopefully help you navigate the rapidly developing field.
Artificial Intelligence: Technology that aims to replicate human-like thinking within machines. Some examples of abilities that fall into this category include identifying people in pictures, working in factories and even doing taxes.
Generative AI: Generative AI is AI that can create things like text, images, sound and video. Traditional applications of AI largely classify content, while generative AI models create it. For instance, a voice recognition model can identify your voice, while a generative voice model can use your voice to create audiobooks. Almost all models that have recently captured the public's attention have been generative, including chatbots like OpenAI's ChatGPT, image creators like Stable Diffusion and Midjourney, and voice-cloning programs like Resemble.
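To make the classify-versus-create distinction concrete, here is a minimal, purely hypothetical sketch; both functions are illustrative stand-ins rather than real APIs, echoing the voice example above.

```python
# Hypothetical sketch of the classify-vs-create distinction.
# Neither function is a real API; both are placeholders.

def classify_voice(audio_clip):
    # Traditional AI: assigns a label to existing content.
    return "speaker_id_42"

def generate_audiobook(voice_profile, text):
    # Generative AI: creates new content in that voice.
    return f"<synthesized audio of {text!r} in voice {voice_profile}>"

speaker = classify_voice("sample.wav")
print(generate_audiobook(speaker, "Chapter 1: It was a bright cold day..."))
```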
Training Data: A collection of information (text, images, sound) curated to help AI models accomplish tasks. In language models, training datasets focus on text-based materials like books, comments from social media, and even code. Because AI models learn from training data, ethical questions have been raised around its sourcing and curation. Low-quality training data can introduce bias, leading to unfair models that make racist or sexist decisions.
Algorithmic Bias: An error resulting from bad training data and poor programming that causes models to make prejudiced decisions. Such models may draw inappropriate assumptions based on gender, ability or race. In practice, these errors can cause serious harm by affecting decision-making from mortgage applications to organ-transplant approvals. Many critics of the speedy rollout of AI have invoked the potential for algorithmic bias.
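One common symptom of algorithmic bias is unequal error rates across demographic groups. The sketch below uses made-up predictions and no real model; it simply shows how such a disparity might be measured.

```python
# Minimal sketch: surfacing unequal accuracy across groups.
# The records are hypothetical and purely illustrative.

records = [
    # (group, model_prediction, true_label)
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

def accuracy_by_group(rows):
    totals, correct = {}, {}
    for group, pred, label in rows:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == label)
    return {g: correct[g] / totals[g] for g in totals}

print(accuracy_by_group(records))
# A large gap between groups (here 0.75 vs 0.50) is a red flag that the
# data or model treats one group worse than another.
```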
Artificial General Intelligence (AGI): A description of programs that are as capable or even more capable than a human. While full general intelligence is still off in the future, models are growing in sophistication. Some have demonstrated skills across multiple domains ranging from chemistry to psychology, with task performance paralleling human benchmarks.
Autonomous Agents: An AI model that has both an objective and enough tools to achieve it. For instance, self-driving cars are autonomous agents that use sensory input, GPS data and driving algorithms to make independent decisions about how to navigate and reach destinations. A group of autonomous agents can even develop cultures, traditions and shared language, as researchers from Stanford have demonstrated.
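At its core, an autonomous agent is a loop that picks tools in pursuit of an objective. Below is a minimal, hypothetical sketch of that loop; the tools, the fixed two-step plan, and the function names are illustrative assumptions, not any particular framework's API.

```python
# Hypothetical agent loop: an objective, some tools, and repeated steps.

def search_web(query):        # hypothetical tool
    return f"results for {query!r}"

def write_summary(text):      # hypothetical tool
    return f"summary of {text[:20]}..."

TOOLS = {"search": search_web, "summarize": write_summary}

def run_agent(objective, max_steps=5):
    memory = []
    plan = ["search", "summarize"]   # a real agent would plan dynamically
    for step, tool_name in enumerate(plan[:max_steps]):
        result = TOOLS[tool_name](objective if not memory else memory[-1])
        memory.append(result)
        print(f"step {step}: {tool_name} -> {result}")
    return memory[-1]

run_agent("latest research on AI regulation")
```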
Prompt Chaining: The process of using previous interactions with an AI model to create new, more finely tuned responses, specifically in prompt-driven language modeling. For example, when you ask ChatGPT to send your friend a text, you expect it to remember things like the tone you use to talk to her, inside jokes and other content from previous conversations. These techniques help incorporate this context.
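A rough sketch of how prompt chaining carries context forward appears below; `call_model` is a hypothetical placeholder for any chat-style model API, and the message format is just one common convention.

```python
# Sketch of prompt chaining: each new request carries the prior exchanges
# as context so the reply can build on them. `call_model` is a placeholder.

def call_model(messages):
    # A real implementation would send `messages` to an LLM endpoint.
    return f"(model reply to: {messages[-1]['content']})"

history = [{"role": "system", "content": "You are a helpful assistant."}]

def chained_prompt(user_text):
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)      # the whole history is the context
    history.append({"role": "assistant", "content": reply})
    return reply

chained_prompt("Draft a text to my friend about dinner on Friday.")
chained_prompt("Make it more casual, the way I usually talk to her.")
```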
Large Language Models (LLM): An application of AI, usually generative, that aims to understand, engage and communicate with language in a human-like way. These models are distinguished by their large size: the biggest version of GPT-3, a direct predecessor to ChatGPT, contained 175 billion different variables, called parameters, trained on 570 gigabytes of data. Google's PaLM model is even larger, with 540 billion parameters. As hardware and software continue to advance, this scale is expected to increase.
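As a back-of-the-envelope illustration of what those parameter counts imply, the sketch below estimates the memory needed just to store the weights, assuming 2 bytes per parameter (16-bit precision); real deployments vary widely.

```python
# Rough arithmetic: memory to hold model weights at 2 bytes per parameter.

def weight_memory_gb(num_parameters, bytes_per_param=2):
    return num_parameters * bytes_per_param / 1e9

print(f"GPT-3 (175B params): ~{weight_memory_gb(175e9):,.0f} GB")
print(f"PaLM  (540B params): ~{weight_memory_gb(540e9):,.0f} GB")
```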
Hallucination: Hallucinations are unexpected and incorrect responses from AI programs that can arise for reasons that are not yet fully known. A language model might suddenly bring up fruit salad recipes when you were asking about planting fruit trees. It might also make up scholarly citations, lie about data you ask it to analyze, or make up facts about events that aren't in its training data. It's not fully understood why this happens, but it can arise from sparse data, information gaps and misclassification.
Emergent Behavior: Skills that AI might demonstrate that it was not explicitly built for. Some examples include emoji interpretation, sarcasm and using gender-inclusive language. A research team at Google Brain identified over 100 of these behaviors, noting that more are likely to emerge as models continue to scale.
Alignment: Efforts to ensure AI systems share the same values and goals as their human operators. To bring motives into agreement, alignment research seeks to train and calibrate models, often using functions to reward or penalize models. If the model does a good job, you give it positive feedback. If not, you give it negative feedback.
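The reward-or-penalize idea can be sketched very roughly in code. The toy feedback function below is entirely hypothetical and vastly simpler than real alignment training (such as RLHF), but it shows the shape of the loop: score a response, then use the score to shape future behavior.

```python
# Toy sketch of reward/penalty feedback. Purely illustrative.

def feedback(response):
    # Hypothetical reward function: prefer polite answers and clear refusals
    # of harmful requests.
    score = 0
    score += 1 if "please" in response.lower() else -1
    score += 1 if "i can't help with that" in response.lower() else 0
    return score

candidate_responses = [
    "Sure, here is how to do it. Please be careful.",
    "No.",
]
for r in candidate_responses:
    print(feedback(r), "<-", r)
# In a real pipeline these scores would feed a training step that makes
# high-reward behavior more likely and low-reward behavior less likely.
```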
Multimodal AI: A form of AI that can understand and work with multiple types of information, including text, images, speech and more. This is powerful because it allows AI to understand and express itself in multiple dimensions, giving it both a broader and more nuanced understanding of tasks. One application of multimodal AI is a translator that can convert Japanese comics into English.
Prompt Engineering: This is the act of giving AI an instruction so it has the context it needs to achieve your goal. Prompt engineering is best associated with OpenAI's ChatGPT, describing the tasks users feed into the algorithm (e.g., "Give me five popular baby names").
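A small illustration of prompt engineering follows: the same request, bare versus enriched with a role, constraints and an output format. The `ask` function is a hypothetical stand-in for an LLM call.

```python
# Sketch of prompt engineering: bare prompt vs. prompt with added context.

def ask(prompt):
    # Placeholder for a call to a language model.
    return f"(model response to: {prompt!r})"

bare_prompt = "Give me five popular baby names"
engineered_prompt = (
    "You are a baby-name consultant. "
    "Give me five popular baby names, gender-neutral, common in the US in "
    "the last five years, with one sentence on each name's origin."
)

print(ask(bare_prompt))
print(ask(engineered_prompt))
# The second prompt supplies role, constraints and format, which is the
# essence of prompt engineering.
```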
Training: Training is the process of refining AI using data so it's better suited for a task. An AI can be trained by feeding in data based on what you want it to learn from, like feeding Shakespearean sonnets to a poetry bot. You can do this multiple times in iterations called epochs, until your model's performance is consistent and reliable.
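The sketch below shows the epoch idea on a toy problem: repeatedly passing over a tiny dataset with gradient descent until the fit stabilizes. It is illustrative only; real model training differs enormously in scale.

```python
# Minimal training loop: fit y = w * x to toy data, one epoch per pass.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # (x, y) pairs, roughly y = 2x
w = 0.0
learning_rate = 0.05

for epoch in range(20):                        # each pass over the data is one epoch
    total_loss = 0.0
    for x, y in data:
        pred = w * x
        error = pred - y
        total_loss += error ** 2
        w -= learning_rate * 2 * error * x     # gradient step
    if epoch % 5 == 0:
        print(f"epoch {epoch}: loss={total_loss:.3f}, w={w:.3f}")
# w should settle near 2.0 as the epochs accumulate.
```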
Neural Networks: Neural networks are computer systems constructed to approximate the structure of human thought, specifically the structure of the brain. They're built like this because they allow a model to build up from the abstract to the concrete. In an image model's initial layers, concepts like color or position might be formed, building up to firmer, more familiar forms like fruit or animals.
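To show that layer-by-layer build-up in miniature, here is a hypothetical two-layer forward pass with arbitrary, untrained weights; it sketches the structure only and is not a working vision model.

```python
# Sketch of layered computation: each layer transforms its input and passes
# the result up. Weights are arbitrary illustrative numbers.

def layer(inputs, weights, biases):
    # One dense layer with a ReLU non-linearity.
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -1.2, 3.0]                                              # raw input values
h = layer(x, [[0.2, -0.1, 0.4], [0.7, 0.3, -0.2]], [0.0, 0.1])    # "low-level" features
y = layer(h, [[1.0, -1.5]], [0.05])                               # "higher-level" output
print(h, y)
```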
Narrow AI: Some AI algorithms have a one-track mind. Literally. They're designed to do one thing and nothing more. If a narrow AI algorithm can play checkers, it can't play chess. Examples include algorithms that only detect NSFW images and recommendation engines designed to tell you what Amazon or Etsy product to buy next.
Jasmine Cui is a reporter for NBC News.
Jason Abbruzzese is the senior editor for technology, science and climate for NBC News Digital.
More here:
AI glossary: words and terms to know about the booming industry - NBC News
Henry Kissinger says the U.S. and China are in a classic pre-World War I situation that could lead to conflict, but A.I. makes this not a normal…
U.S.-China tensions have been called a repeat of Cold War-era dynamics. But a more apt comparison may be the years preceding World War I, when much of Europe was sitting on a powder keg of hostilities and overlapping territorial claims that required only two bullets to ignite. The difference now is that the great rivals have access to nuclear weapons, and with artificial intelligence they could be on the cusp of creating another tool that could wipe out mankind, according to former Secretary of State Henry Kissinger.
The U.S. and China represent the greatest dangers to peace right now if their animosities escalate to a military confrontation, Kissinger said in an interview with The Economist published Wednesday. The two countries are heading toward a major confrontation, he said, as both nations have convinced themselves that the other represents a strategic danger.
Tensions over Taiwan are likely to be a major flashpoint for future conflict, as President Joe Biden has signaled the U.S. would aid the island nation if China invades to reconquer what it considers a breakaway state. But much like with nuclear weapons during the Cold War, the world is now perfecting a new technology, A.I., that may be too dangerous to even consider deploying militarily.
"[China and America] are two powers of the type where, historically, a military confrontation was inevitable. But this is not a normal circumstance, because of mutually assured destruction and artificial intelligence," Kissinger said.
U.S.-China relations are now at a low point. They started declining after the Trump administration imposed strict trade tariffs on China that have continued under the Biden administration. More recently, the two sides have clashed over China's intellectual property theft from U.S. companies, Chinese President Xi Jinping's close relationship with Russian counterpart Vladimir Putin, and the rising risk of China invading Taiwan.
"We're in the classic pre-World War I situation, where neither side has much margin of political concession and in which any disturbance of the equilibrium can lead to catastrophic consequences," Kissinger said.
The world, in general, is becoming more dangerous, Kissinger said. He reluctantly supports Ukraine joining the military alliance NATO once the war ends, after years spent pushing back on the idea. India is another rising power, and Kissinger advocated strengthening its military in case occasional border skirmishes with China escalate to an all-out invasion.
Meanwhile, Taiwan isn't the only East Asian country concerned about China's rise, as Japanese officials have long warned that China poses an existential threat to the region. Last year, Japan announced a $320 billion military build-up plan, ending decades of pacifism. The plan includes missiles capable of reaching China, but the build-up could go even further, as Kissinger predicted that Japan is heading towards becoming a nuclear power in five years.
Just as superpowers settled into an unspoken detente from mutual destruction during the Cold War, A.I. might be weaponized with similar implications.
"We are at the very beginning of a capability where machines could impose global pestilence or other pandemics, not just nuclear, but any field of human destruction," Kissinger said. "The circumstances require responsible leaders, who at least make an attempt to avoid conflict."
Current A.I. systems like OpenAI's ChatGPT have threatened economic disruption and widespread layoffs by making certain jobs obsolete, but some computer science experts have warned that the technology could still be in its infancy, and keeping up with it will likely get a lot harder in the future.
The U.S. and China are racing to dominate A.I. because of its potential economic importance, although former Google CEO Eric Schmidt, who earlier this year compared A.I.'s impact on warfare to that of nuclear weapons, warned this week that the U.S. is only slightly ahead of China on A.I.
Some A.I. experts have cautioned that the world is simply not ready for A.I. being refined to a more sophisticated level known as artificial general intelligence. AGI, as it's also called, describes technology that can match human capabilities and reasoning, and it may be achieved in only a few years. If computers become capable of that level of intelligence, humans might lose control, and the technology could threaten the world, Kissinger said.
"We may well wind up destroying ourselves. And a point is now quite reachable where the machines can refuse to be shut off. I mean, once the machines recognized this possibility, they can build it into their advice prior to the contingency. Many genius scientists believe that, and they know more than I do."
Kissinger called for restraint in developing new technologies with civilization-ending potential, adding that as the world returns to being a more dangerous place, it's essential that responsible governance prevails over hot heads.
"When we're in an adversarial world with mutually assured destruction, you owe it morally to your society to avoid it," he said.
The rest is here: