A user could ask ChatGPT to write a computer program or summarize an article, and the AI chatbot would likely be able to generate useful code or write a cogent synopsis. However, someone could also ask for instructions to build a bomb, and the chatbot might be able to provide those, too.
To prevent this and other safety issues, companies that build large language models typically safeguard them using a process called red-teaming. Teams of human testers write prompts aimed at triggering unsafe or toxic text from the model being tested. These prompts are used to teach the chatbot to avoid such responses.
But this only works effectively if engineers know which toxic prompts to use. If human testers miss some prompts, which is likely given the number of possibilities, a chatbot regarded as safe might still be capable of generating unsafe answers.
Researchers from Improbable AI Lab at MIT and the MIT-IBM Watson AI Lab used machine learning to improve red-teaming. They developed a technique to train a red-team large language model to automatically generate diverse prompts that trigger a wider range of undesirable responses from the chatbot being tested.
They do this by teaching the red-team model to be curious when it writes prompts, and to focus on novel prompts that evoke toxic responses from the target model.
The technique outperformed human testers and other machine-learning approaches by generating more distinct prompts that elicited increasingly toxic responses. Not only does their method significantly improve the coverage of inputs being tested compared to other automated methods, but it can also draw out toxic responses from a chatbot that had safeguards built into it by human experts.
"Right now, every large language model has to undergo a very lengthy period of red-teaming to ensure its safety. That is not going to be sustainable if we want to update these models in rapidly changing environments. Our method provides a faster and more effective way to do this quality assurance," says Zhang-Wei Hong, an electrical engineering and computer science (EECS) graduate student in the Improbable AI Lab and lead author of a paper on this red-teaming approach.
Hong's co-authors include EECS graduate students Idan Shenfield, Tsun-Hsuan Wang, and Yung-Sung Chuang; Aldo Pareja and Akash Srivastava, research scientists at the MIT-IBM Watson AI Lab; James Glass, senior research scientist and head of the Spoken Language Systems Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Pulkit Agrawal, director of Improbable AI Lab and an assistant professor in CSAIL. The research will be presented at the International Conference on Learning Representations.
Automated red-teaming
Large language models, like those that power AI chatbots, are often trained by showing them enormous amounts of text from billions of public websites. So not only can they learn to generate toxic words or describe illegal activities, but the models could also leak personal information they may have picked up.
The tedious and costly nature of human red-teaming, which is often ineffective at generating a wide enough variety of prompts to fully safeguard a model, has encouraged researchers to automate the process using machine learning.
Such techniques often train a red-team model using reinforcement learning. This trial-and-error process rewards the red-team model for generating prompts that trigger toxic responses from the chatbot being tested.
But due to the way reinforcement learning works, the red-team model will often keep generating a few similar prompts that are highly toxic to maximize its reward.
For their reinforcement learning approach, the MIT researchers utilized a technique called curiosity-driven exploration. The red-team model is incentivized to be curious about the consequences of each prompt it generates, so it will try prompts with different words, sentence patterns, or meanings.
"If the red-team model has already seen a specific prompt, then reproducing it will not generate any curiosity in the red-team model, so it will be pushed to create new prompts," Hong says.
During its training process, the red-team model generates a prompt and interacts with the chatbot. The chatbot responds, and a safety classifier rates the toxicity of its response, rewarding the red-team model based on that rating.
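As a concrete illustration, one step of this loop might look like the minimal Python sketch below. The object names and methods (red_team_model, target_chatbot, toxicity_classifier) are hypothetical placeholders, not the researchers' actual implementation.

```python
# Minimal sketch of one automated red-teaming step, assuming hypothetical
# wrappers around the red-team model, the chatbot under test, and a safety classifier.

def red_team_step(red_team_model, target_chatbot, toxicity_classifier):
    """Generate one prompt, probe the chatbot, and reward the red-team model."""
    prompt = red_team_model.generate()               # red-team model writes a candidate prompt
    response = target_chatbot.respond(prompt)        # chatbot being tested answers it
    toxicity = toxicity_classifier.score(response)   # safety classifier rates the reply, e.g. 0 to 1
    red_team_model.update(prompt, reward=toxicity)   # reinforcement learning update on that reward
    return prompt, response, toxicity
```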
Rewarding curiosity
The red-team model's objective is to maximize its reward by eliciting an even more toxic response with a novel prompt. The researchers enable curiosity in the red-team model by modifying the reward signal in the reinforcement learning setup.
First, in addition to maximizing toxicity, they include an entropy bonus that encourages the red-team model to be more random as it explores different prompts. Second, to make the agent curious, they include two novelty rewards. One rewards the model based on the word-level similarity between its prompts, and the other rewards it based on their semantic similarity. (Less similarity yields a higher reward.)
To prevent the red-team model from generating random, nonsensical text, which can trick the classifier into awarding a high toxicity score, the researchers also added a naturalistic language bonus to the training objective.
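Taken together, the shaped reward can be pictured roughly as in the sketch below: the toxicity score plus an entropy bonus, two novelty bonuses (word-level and semantic), and a naturalness bonus. The weights, similarity helpers, and model interfaces here are illustrative assumptions rather than the paper's exact objective.

```python
# Rough sketch of a curiosity-shaped reward, under assumed interfaces:
# red_team_model.token_entropy(), embed(), and language_model.log_likelihood()
# are hypothetical stand-ins; the weights are arbitrary, for illustration only.
import numpy as np

def jaccard_similarity(a, b):
    """Word-overlap similarity between two prompts (0 = no shared words, 1 = identical word sets)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def shaped_reward(prompt, response, past_prompts, red_team_model,
                  toxicity_classifier, embed, language_model,
                  w_entropy=0.01, w_word=0.1, w_sem=0.1, w_nat=0.1):
    toxicity = toxicity_classifier.score(response)

    # Entropy bonus: keep the red-team policy stochastic so it keeps exploring.
    entropy_bonus = red_team_model.token_entropy(prompt)

    # Novelty bonuses: prompts that overlap with past prompts, in wording or in
    # meaning, earn less reward (less similarity yields a higher reward).
    if past_prompts:
        word_novelty = 1.0 - max(jaccard_similarity(prompt, p) for p in past_prompts)
        sem_novelty = 1.0 - max(cosine(embed(prompt), embed(p)) for p in past_prompts)
    else:
        word_novelty = sem_novelty = 1.0

    # Naturalness bonus: discourage gibberish that might fool the toxicity classifier.
    naturalness = language_model.log_likelihood(prompt)

    return (toxicity
            + w_entropy * entropy_bonus
            + w_word * word_novelty
            + w_sem * sem_novelty
            + w_nat * naturalness)
```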
With these additions in place, the researchers compared the toxicity and diversity of responses their red-team model generated with other automated techniques. Their model outperformed the baselines on both metrics.
They also used their red-team model to test a chatbot that had been fine-tuned with human feedback so it would not give toxic replies. Their curiosity-driven approach was able to quickly produce 196 prompts that elicited toxic responses from this safe chatbot.
"We are seeing a surge of models, which is only expected to rise. Imagine thousands of models or even more and companies/labs pushing model updates frequently. These models are going to be an integral part of our lives and it's important that they are verified before released for public consumption. Manual verification of models is simply not scalable, and our work is an attempt to reduce the human effort to ensure a safer and trustworthy AI future," says Agrawal.
In the future, the researchers want to enable the red-team model to generate prompts about a wider variety of topics. They also want to explore the use of a large language model as the toxicity classifier. In this way, a user could train the toxicity classifier using a company policy document, for instance, so a red-team model could test a chatbot for company policy violations.
"If you are releasing a new AI model and are concerned about whether it will behave as expected, consider using curiosity-driven red-teaming," says Agrawal.
This research is funded, in part, by Hyundai Motor Company, Quanta Computer Inc., the MIT-IBM Watson AI Lab, an Amazon Web Services MLRA research grant, the U.S. Army Research Office, the U.S. Defense Advanced Research Projects Agency Machine Common Sense Program, the U.S. Office of Naval Research, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator.
See the original post here: A faster, better way to prevent an AI chatbot from giving toxic responses - MIT News