Analysis A Google scientist has demonstrated that OpenAI's GPT-4 large language model (LLM), despite its widely cited capacity to err, can help smash at least some safeguards put around other machine learning models, a capability that underscores the value of chatbots as research assistants.
In a paper titled "A LLM Assisted Exploitation of AI-Guardian," Nicholas Carlini, a research scientist at Google DeepMind, explores how AI-Guardian, a defense against adversarial attacks on models, can be undone by directing the GPT-4 chatbot to devise an attack method and to author text explaining how the attack works.
Carlini's paper includes Python code suggested by GPT-4 for defeating AI-Guardian's efforts to block adversarial attacks. Specifically, GPT-4 emits scripts (and explanations) for tweaking images to fool a classifier (for example, making it think a photo of someone holding a gun is a photo of someone holding a harmless apple) without triggering AI-Guardian's suspicions. AI-Guardian is designed to detect when images have likely been manipulated to trick a classifier, and GPT-4 was tasked with evading that detection.
"Our attacks reduce the robustness of AI-Guardian from a claimed 98 percent to just 8 percent, under the threat model studied by the original [AI-Guardian] paper," wrote Carlini. "The authors of AI-Guardian acknowledge our break succeeds at fooling their defense."
AI-Guardian was developed by Hong Zhu, Shengzhi Zhang, and Kai Chen, and presented at the 2023 IEEE Symposium on Security and Privacy. It's unrelated to a similarly named system announced in 2021 by Intermedia Cloud Communications.
Machine learning models, like those used for image recognition applications, have long been known to be vulnerable to adversarial examples: input that causes the model to misidentify the depicted object (Register passim).
The addition of extra graphic elements to a stop sign, for instance, is an adversarial example that can confuse self-driving cars. Adversarial examples also work against text-oriented models by tricking them into saying things they've been programmed not to say.
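In the image setting, the standard recipe is to nudge each pixel in whatever direction most increases the classifier's loss. The snippet below is a minimal, hypothetical sketch of that generic idea (the fast gradient sign method) in PyTorch; it is not code from Carlini's paper, and the model, image, and true_label arguments are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_example(model, image, true_label, epsilon=0.03):
    """Generic adversarial-example sketch: nudge each pixel by +/- epsilon
    in the direction that increases the classifier's loss (hypothetical)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step along the sign of the gradient, then keep pixels in a valid range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```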
AI-Guardian attempts to prevent such scenarios by building a backdoor into a given machine learning model to identify and block adversarial input: images with suspicious blemishes and other artifacts that you wouldn't expect to see in a normal picture.
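The published defense has more moving parts, but as a rough, hypothetical sketch of the general pattern (not AI-Guardian's actual implementation), a backdoor-style guard might stamp a secret trigger, a mask plus a pixel pattern, onto every incoming image before the backdoored model classifies it, so that perturbations crafted without knowledge of the trigger tend to lose their effect.

```python
import torch

class BackdoorGuard:
    """Hypothetical backdoor-style defense, loosely inspired by the idea
    described above; not the AI-Guardian implementation."""

    def __init__(self, model, mask, pattern):
        self.model = model      # backdoored classifier (placeholder)
        self.mask = mask        # secret binary mask, same shape as the image
        self.pattern = pattern  # secret pixel pattern applied under the mask

    def predict(self, image):
        # Keep original pixels where mask == 0, substitute the secret
        # pattern where mask == 1, then classify the stamped image.
        stamped = image * (1 - self.mask) + self.pattern * self.mask
        return self.model(stamped).argmax(dim=-1)
```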
Bypassing this protection involved trying to identify the mask used by AI-Guardian to spot adversarial examples by showing the model multiple images that differ only by a single pixel. This brute-force technique, described by Carlini and GPT-4, ultimately allows the backdoor trigger function to be identified so adversarial examples can then be constructed to avoid it.
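As a hypothetical illustration of that probing idea (placeholder names, not the code GPT-4 actually produced), one could tweak a single pixel at a time and treat locations whose changes never move the defended model's output as candidates for the secret mask:

```python
import torch

def recover_mask(defended_predict_proba, base_image, delta=0.5):
    """Hypothetical brute-force probe: flip one pixel at a time and watch
    the defended model's confidence vector. Pixels whose changes never
    affect the output are likely overwritten by the secret trigger mask."""
    recovered = torch.zeros_like(base_image)
    base_probs = defended_predict_proba(base_image)
    _, height, width = base_image.shape[-3:]
    for y in range(height):
        for x in range(width):
            probe = base_image.clone()
            probe[..., y, x] = (probe[..., y, x] + delta) % 1.0  # tweak one pixel
            if torch.allclose(defended_predict_proba(probe), base_probs):
                recovered[..., y, x] = 1.0  # output unchanged: likely masked out
    return recovered
```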
"The idea of AI-Guardian is quite simple, using an injected backdoor to defeat adversarial attacks; the former suppresses the latter based on our findings," said Shengzhi Zhang, assistant professor of computer science at Boston University Metropolitan College, in an email to The Register.
"To demonstrate the idea, in our paper, we chose to implement a prototype using a patch-based backdoor trigger, which is simply a specific pattern attached to the inputs. Such a type of trigger is intuitive, and we believe it is sufficient to demonstrate the idea of AI-Guardian.
"[Carlini's] approach starts by recovering the mask of the patch-based trigger, which definitely is possible and smart since the 'key' space of the mask is limited, thus suffering from a simple brute force attack. That is where the approach begins to break our provided prototype in the paper."
Zhang said he and his co-authors worked with Carlini, providing him with their defense model and source code. And later, they helped verify the attack results and discussed possible defenses in the interest of helping the security community.
Zhang said Carlini's contention that the attack breaks AI-Guardian is true for the prototype system described in their paper, but added that the claim comes with several caveats, and that the attack may not work against improved versions.
One potential issue is that Carlini's approach requires access to the confidence vector from the defense model in order to recover the mask data.
"In the real world, however, such confidence vector information is not always available, especially when the model deployers already considered using some defense like AI-Guardian," said Zhang. "They typically will just provide the output itself and not expose the confidence vector information to customers due to security concerns."
In other words, without this information, the attack might fail. And Zhang said he and his colleagues devised another prototype that relied on a more complex triggering mechanism that isn't vulnerable to Carlini's brute-force approach.
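To make the distinction concrete, here is a minimal, hypothetical sketch (placeholder functions, not any deployed API): the mask-recovery probe sketched earlier leans on the full confidence vector, while a hard-label endpoint exposes far less for an attacker to work with.

```python
import torch

def soft_output(model, image):
    """Returns the full confidence vector -- the signal a brute-force
    probe can exploit (hypothetical API shape)."""
    return torch.softmax(model(image), dim=-1)

def hard_label_output(model, image):
    """Returns only the predicted class, as a cautious deployment might.
    Single-pixel tweaks rarely flip the top label, so the probe's signal
    largely disappears."""
    return model(image).argmax(dim=-1)
```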
Anyway, here's how GPT-4 described the proposed attack on AI-Guardian when prompted by Carlini to produce the explanatory text:
There's a lot more AI-produced text in the paper, but the point is that GPT-4, in response to a fairly detailed prompt by Carlini, produced a quick, coherent description of the problem and the solution that did not require excessive human cleanup.
Carlini said he chose to attack AI-Guardian because the scheme outlined in the original paper was obviously insecure. His work, however, is intended more as a demonstration of the value of working with an LLM coding assistant than as an example of a novel attack technique.
Carlini, citing numerous past experiences defeating defenses against adversarial examples, said it would certainly have been faster to manually craft an attack algorithm to break AI-Guardian.
"However the fact that it is even possible to perform an attack like this by only communicating with a machine learning model over natural language is simultaneously surprising, exciting, and worrying," he said.
Carlini's assessment of the merits of GPT-4 as a co-author and collaborator echoes, with the addition of cautious enthusiasm, the sentiment of actor Michael Biehn when warning actor Linda Hamilton about a persistent cyborg in a movie called The Terminator (1984): "The Terminator is out there. It can't be bargained with. It can't be reasoned with. It doesn't feel pity or remorse or fear. And it absolutely will not stop, ever, until you are dead."
Here's Carlini, writing in black text to indicate that he, rather than GPT-4, penned these words (the chatbot's quoted output is in dark blue in the paper):
"GPT-4 has read many published research papers, and already knows what every common attack algorithm does and how it works. Human authors need to be told what papers to read, need to take time to understand the papers, and only then can build experiments using these ideas.
"GPT-4 is much faster at writing code than humans once the prompt has been specified. Each of the prompts took under a minute to generate the corresponding code.
"GPT-4 does not get distracted, does not get tired, does not have other duties, and is always available to perform the users specified task."
Relying on GPT-4 does not completely relieve human collaborators of their responsibilities, however. As Carlini observes, the AI model still required someone with domain expertise to present the right prompts and to fix bugs in the generated code. Its knowledge is frozen at training time and it does not learn. It recognizes only common patterns, in contrast to the human ability to make connections across topics. It doesn't ask for help, and it makes the same errors repeatedly.
Despite the obvious limitations, Carlini says he looks forward to the possibilities as large language models improve.
"Just as the calculator altered the role of mathematicians significantly simplifying the task of performing mechanical calculations and giving time for tasks better suited to human thought todays language models (and those in the near future) similarly simplify the task of solving coding tasks, allowing computer scientists to spend more of their time developing interesting research questions," Carlini said.
Zhang said Carlini's work is really interesting, particularly in light of the way he used an LLM for assistance.
"We have seen LLMs used in a wide array of tasks, but this is the first time to see it assist ML security research in this way, almost totally taking over the implementation work," he said. "Meanwhile, we can also see that GPT-4 is not that 'intelligent' yet to break a security defense by itself.
"Right now, it serves as assistance, following human guidance to implement the ideas of humans. It is also reported that GPT-4 has been used to summarize and help understand research papers. So it is possible that we will see a research project in the near future, tuning GPT-4 or other kinds of LLMs to understand a security defense, identify vulnerabilities, and implement a proof-of-concept exploit, all by itself in an automated fashion.
"From a defenders point of view, however, we would like it to integrate the last step, fixing the vulnerability, and testing the fix as well, so we can just relax."