

The David Mayer case: ChatGPT refuses to say some names. We have an idea why

Who are David Mayer and Brian Hood?

Mihai Andrei
December 4, 2024 @ 10:08 pm


A couple of days ago, social media users started reporting something strange. Apparently, ChatGPT simply refused to say the name “David Mayer.” We had to try it out, and indeed, the AI broke down whenever it tried to write the name.

Screen capture of ChatGPT unable to write David Mayer

ChatGPT either gave an error, stopped, or flat-out said “I am unable to produce a response.” The internet was abuzz trying to figure out the cause for this. Was it someone who convinced ChatGPT to ban their name? Someone who was banned for other reasons?

The first step was figuring out who David Mayer could be. As it’s a fairly common name, there were several possibilities. However, one stood out.

Rothschild and other David Mayers

This David Mayer is a British adventurer, environmentalist, and film producer, recognized as an heir to the Rothschild banking fortune. Despite the legacy of his name, Mayer seems more interested in environmentalism than banking. He has undertaken some pretty impressive expeditions, including traversing both the Arctic and Antarctic regions. He’s also involved in climate and environmental advocacy: not exactly the privacy-obsessed person you’d expect to try to have their name blocked from ChatGPT.

Turns out, it wasn’t him. You could get ChatGPT to talk about “David de Rothschild” and it would tell you everything. However, whenever it started typing the exact name “David Mayer,” it would crash. Mayer himself said he has nothing to do with any of this.

“No I haven’t asked my name to be removed. I have never had any contact with Chat GPT. Sadly it all is being driven by conspiracy theories,” he told the Guardian.

Other candidates, like a historian named David Mayer, also seemed unlikely to have enough sway to block ChatGPT’s records. The internet boiled for a day or so, then OpenAI stepped in and said it was a glitch.

A glitch of unspecified nature

OpenAI told The Guardian that “One of our tools mistakenly flagged this name and prevented it from appearing in responses, which it shouldn’t have. We’re working on a fix.”

It didn’t take long at all. Less than a day later, “David Mayer” was no longer crashing ChatGPT, and everything was working just fine.

But the plot thickens.

As it turns out, this isn’t the only name that crashes the AI. At least six other names have been found to have the same effect. One of them is Brian Hood.

Screen capture of ChatGPT unable to write Brian Hood

Another is David Faber:

Screen capture of ChatGPT unable to write David Faber

What’s the deal with these names? We have no idea.

OpenAI provided no indication as to what was causing this glitch in the first place (or if it was a glitch at all), and we’re stuck guessing. But we do have a guess.

This is why we think it’s happening

The names that have been confirmed to crash the bot are Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, and Guido Scorza.

Brian Hood is a very common name, but one person stands out: an Australian mayor who accused ChatGPT of falsely describing him as the perpetrator of a crime that, in fact, he had only reported to the police. David Faber is a reporter at CNBC. Jonathan Turley is an attorney and Fox News commentator. Guido Scorza is also an attorney and sits on the board of Italy’s data protection authority, the Guarantor for the Protection of Personal Data. Jonathan Zittrain is an American professor of internet law.

They have very different occupations, and yet they all have something in common: they work with the law or privacy, and/or have had some dealings with ChatGPT.

Brian Hood never sued OpenAI, but he came close to it. Jonathan Turley wrote about how ChatGPT defamed him and wrongly accused him of sexual harassment. Jonathan Zittrain wrote an article in The Atlantic called “We Need to Control AI Agents Now,” in which he specifically addressed ChatGPT. Plus, both Zittrain and Turley are cited in the copyright lawsuit filed by The New York Times against OpenAI and Microsoft. It could be that ChatGPT has a hard filter set for these names.

Another potentially relevant aspect is the ‘right to be forgotten.’ In some jurisdictions (most notably the EU), people can invoke this right to request the removal of their personal data from systems like ChatGPT’s training data. Guido Scorza posted on Twitter that he filed a GDPR right-to-be-forgotten request, which fits the theory.

Screen capture of ChatGPT unable to generate an image of Brian Hood
ChatGPT also refuses to generate images using these names.

However, this is far from confirmed. For instance, thousands of authors are cited in the New York Times lawsuit, and presumably plenty of other people have filed right-to-be-forgotten requests. It’s also not clear in this case who the actual David Mayer is.

Ultimately, our guess is that ChatGPT has a list of names that it needs to avoid talking about, due to legal, privacy, or other concerns.
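If the guess is right, the mechanism could be surprisingly simple. Here is a minimal, purely hypothetical sketch (not OpenAI’s actual code; the names and behavior are assumptions based on user reports) of how a blocklist sitting on top of a streaming model could produce exactly the abrupt mid-sentence failures people saw:

```python
# Hypothetical sketch of an output guard: the model streams tokens,
# and a separate filter aborts the response the moment the running
# text would contain a blocked name. This is NOT OpenAI's code.

BLOCKED_NAMES = {"Brian Hood", "Jonathan Turley", "Jonathan Zittrain",
                 "David Faber", "Guido Scorza"}

def stream_with_guard(tokens):
    """Yield tokens until the accumulated text contains a blocked name."""
    text = ""
    for token in tokens:
        text += token
        if any(name in text for name in BLOCKED_NAMES):
            # Abort mid-response, as users observed.
            raise RuntimeError("I'm unable to produce a response.")
        yield token

# Simulated model reply, split into tokens.
reply = ["The mayor ", "Brian ", "Hood ", "said..."]
out = []
try:
    for t in reply and stream_with_guard(reply):
        out.append(t)
except RuntimeError:
    pass

# The guard kills the stream partway through the name, so only
# "The mayor Brian " ever reaches the user.
print("".join(out))
```

Because the filter acts on the output text rather than on the model’s knowledge, the model can still discuss “David de Rothschild” freely while choking on the exact string it is banned from emitting, which matches what users observed.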

The many mysteries around ChatGPT

Still, there are many things we don’t know about ChatGPT, and OpenAI hasn’t always been transparent about this type of glitch. We’re also in the early days of AI chatbots, and there are plenty of legitimate glitches; jumping straight to conspiracy theories doesn’t help anyone.

These models are not magic, though they sometimes seem like it. They’re very sophisticated auto-complete algorithms trained on an immense trove of data, on top of which sit (presumably) a lot of manual filters and tweaks. More than anything, this is a reminder that we don’t really know how these systems work, and that they are not fact generators. They’re not accurate, they’re not precise, and they’re definitely not transparent.

As an interesting side note, we checked whether other AIs have similar restrictions and found nothing. Here’s how Midjourney, an image-generating AI, envisions Brian Hood: just random imagery. It’s kind of pretty, but it definitely doesn’t crash the system.

Midjourney images for "Brian Hood"

