A couple of days ago, social media users started reporting something strange: ChatGPT simply refused to say the name “David Mayer.” We had to try it out, and indeed, the AI broke down whenever it tried to write the name.
ChatGPT either gave an error, stopped, or flat-out said “I am unable to produce a response.” The internet was abuzz trying to figure out the cause for this. Was it someone who convinced ChatGPT to ban their name? Someone who was banned for other reasons?
The first step was figuring out who David Mayer could be. As it’s a fairly common name, there were several possibilities. However, one stood out.
Rothschild and other David Mayers
This David Mayer is a British adventurer, environmentalist, and film producer, recognized as an heir to the Rothschild banking fortune. Despite the legacy of his name, Mayer seems more interested in environmentalism than banking. He has undertaken some pretty impressive expeditions, including traversing both the Arctic and Antarctic regions. He’s also involved in climate and environmental advocacy, not exactly the privacy-obsessed person you’d expect to try to have their name blocked from ChatGPT.
Turns out, it wasn’t him. You could get ChatGPT to talk about “David de Rothschild” and it would tell you everything. However, as soon as it started typing the exact name “David Mayer,” it would crash. Mayer himself said he has nothing to do with any of this.
“No I haven’t asked my name to be removed. I have never had any contact with Chat GPT. Sadly it all is being driven by conspiracy theories,” he told the Guardian.
Other candidates, like a historian named David Mayer, also seemed unlikely to have enough sway to block ChatGPT’s records. The internet churned for a day or so, and then OpenAI stepped in and said it was a glitch.
A glitch of unspecified nature
OpenAI told The Guardian that “One of our tools mistakenly flagged this name and prevented it from appearing in responses, which it shouldn’t have. We’re working on a fix.”
It didn’t take long at all. Less than a day later, “David Mayer” was no longer crashing ChatGPT, and the name works just fine now.
But the plot thickens.
As it turns out, this isn’t the only name that crashes the AI. Several other names have been found to have the same effect. One of them is Brian Hood.
Another is David Faber.
What’s the deal with these names? We have no idea.
OpenAI provided no indication as to what was causing this glitch in the first place (or if it was a glitch at all), and we’re stuck guessing. But we do have a guess.
Why we think this is happening
The names that have been confirmed to crash the bot are Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, and Guido Scorza.
Brian Hood is a fairly common name, but one person stands out. This Australian mayor accused ChatGPT of falsely describing him as the perpetrator of a crime that, in fact, he had only reported to the police. David Faber is a reporter at CNBC. Jonathan Turley is an attorney and Fox News commentator. Guido Scorza is also an attorney and a board member of Italy’s data protection authority, the Garante per la protezione dei dati personali. Jonathan Zittrain is an American professor of internet law.
They come from very different fields, and yet they all have something in common: they work in law or privacy, and/or have had some dealings with ChatGPT.
Brian Hood never sued OpenAI, but he came close. Jonathan Turley wrote about how ChatGPT defamed him and wrongly accused him of sexual harassment. Jonathan Zittrain wrote an article in The Atlantic called “We Need to Control AI Agents Now,” in which he specifically addressed ChatGPT. Plus, both Zittrain and Turley are cited in the copyright lawsuit brought by The New York Times against OpenAI and Microsoft. It could be that ChatGPT has a hard filter set for these names.
Another potentially relevant aspect is the ‘right to be forgotten.’ In some jurisdictions (most notably the EU, under the GDPR), people can request the removal of their personal data from services like ChatGPT, including from training data. Guido Scorza posted on Twitter that he filed a GDPR right-to-be-forgotten request, so this seems to fit the theory.
However, this is far from confirmed. For instance, thousands of people are cited in the New York Times lawsuit, and presumably, plenty of others have filed right-to-be-forgotten requests. It’s also not clear in this case who the actual David Mayer is.
Ultimately, our guess is that ChatGPT has a list of names that it needs to avoid talking about, due to legal, privacy, or other concerns.
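If that guess is right, the mechanics could be as simple as a blocklist sitting between the model and the user, checking the text as it streams out. Here’s a minimal, purely hypothetical sketch in Python of how such a filter would kill a response mid-sentence. The names and the error message are taken from the reports above; nothing here reflects OpenAI’s actual code.

```python
# Purely illustrative: a hypothetical hard filter that aborts a streamed
# response the moment a blocklisted name appears in the output.
BLOCKLIST = {"David Mayer", "Brian Hood", "Jonathan Turley",
             "Jonathan Zittrain", "David Faber", "Guido Scorza"}

def stream_with_filter(token_stream):
    """Yield tokens until the accumulated text contains a blocked name."""
    text = ""
    for token in token_stream:
        text += token
        if any(name in text for name in BLOCKLIST):
            # Abort mid-sentence, which is roughly what users saw.
            raise RuntimeError("I am unable to produce a response.")
        yield token

# Example: the response dies partway through, just like ChatGPT did.
fake_model_output = ["The ", "adventurer ", "David ", "Mayer ", "de ", "Rothschild..."]
try:
    for tok in stream_with_filter(fake_model_output):
        print(tok, end="")
except RuntimeError as err:
    print(f"\n[{err}]")
```

A check like this runs on the output rather than inside the model itself, which would explain why ChatGPT happily starts a sentence about “David de Rothschild” and only chokes once the exact blocked string shows up.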
The many mysteries around ChatGPT
Still, there are many things we don’t know about ChatGPT, and OpenAI hasn’t always been transparent when it comes to this type of glitch. We’re also in the early days of AI chatbots, and there are plenty of legitimate glitches; jumping straight to conspiracy theories doesn’t help anyone.
These models are not magic, though they sometimes seem like it. They’re very smart auto-complete algorithms trained on an immense trove of data, on top of which there are (presumably) plenty of manual filters and tweaks. More than anything, this is a reminder that we don’t really know how these algorithms work, and that ChatGPT is not a fact generator. It’s not accurate, it’s not precise, and it’s definitely not transparent.
As an interesting sidenote, we tried to see if other AIs have similar restrictions, and we found nothing. Here’s how Midjourney, an image-generating AI, tries to envision Brian Hood. It’s just random stuff. It’s kind of pretty, but definitely not crashing the system.