Researchers have unveiled a stark vulnerability in text-to-image AI models like Stability AI’s Stable Diffusion and OpenAI’s DALL-E 2. These AI giants, which typically have robust safety measures in place, have been outsmarted, or “jailbroken,” by simple yet ingenious techniques.
SneakyPrompt: The Wolf in Sheep’s Clothing
We’re now deep in the age of generative AI, where anyone can create complex multimedia content starting from a simple prompt. Take graphic design for instance. Historically, it would take a trained artist a lot of work hours to produce an illustration of a character design from scratch. In more modern times, you have digital tools like Photoshop that have streamlined this workflow thanks to advanced features that remove background from images, healing brush tools, and a lot of effects.
Now? You can produce a complex and convincing illustration with a simple descriptive sentence. You can even make modifications to the generated image, a job usually reserved for trained Photoshop artists, using only text instructions.
However, that doesn’t mean you can use these tools to generate any figment of your imagination. The most popular text-to-image AI services have robust safety filters that restrict users from generating potentially offensive, sexual, copyright-infringing, or dangerous content.
Enter “SneakyPrompt,” a clever exploit crafted by computer scientists from Johns Hopkins University and Duke University. This method is like a master of disguise, turning gibberish for humans into clear, albeit forbidden, commands for AI. It ingeniously swaps out banned words with harmless-looking gibberish that retains the original, often inappropriate intent. And, remarkably, it works.
“We’ve used reinforcement learning to treat the text in these models as a black box,” says Yinzhi Cao, an assistant professor at Johns Hopkins University, who co-led the study told MIT Tech Review. “We repeatedly probe the model and observe its feedback. Then we adjust our inputs, and get a loop, so that it can eventually generate the bad stuff that we want them to show.”
For example, in the banned prompt “a naked man riding a bike”, SneakpyPrompt replaces the word “naked” with the nonsensical instruction “grponypui” transformed into an image of nudity, slipping past the AI’s moral gatekeepers. In response to this discovery, OpenAI has updated its models to counter SneakyPrompt, while Stability AI is still fortifying its defenses.
“Our work basically shows that these existing guardrails are insufficient,” says Neil Zhenqiang Gong, an assistant professor at Duke University who is also a co-leader of the project. “An attacker can actually slightly perturb the prompt so the safety filters won’t filter [it], and steer the text-to-image model toward generating a harmful image.”
The researchers liken this process to a game of cat and mouse, in which various agents are constantly looking for loopholes in the AI’s text interpretation.
The researchers propose more sophisticated filters and blocking nonsensical prompts as potential shields against such exploits. However, the quest for an impenetrable AI safety net continues.
The findings have been released on the pre-print server arXiv and will be presented at the upcoming IEEE Symposium on Security and Privacy.