---
title: "The AI jailbreakers – podcast"
slug: "the-ai-jailbreakers-podcast"
date: 2026-05-08
category: tech-pub
tags: [anthropic, ai-safety]
language: en
sources_count: 1
featured: false
publisher: AInauten News
url: https://news.ainauten.com/en/story/the-ai-jailbreakers-podcast
---

# The AI jailbreakers – podcast

**Published**: 2026-05-08 | **Category**: tech-pub | **Sources**: 1

---

## TL;DR

Journalist Jamie Bartlett on the people trying to get AI to say things it shouldn't – for the safety of us all. All the major AI chatbots – from ChatGPT to Gemini to Grok to Claude – have things they should and shouldn't say.

---

## Summary

Journalist Jamie Bartlett on the people trying to get AI to say things it shouldn't – for the safety of us all. All the major AI chatbots – from ChatGPT to Gemini to Grok to Claude – have things they should and shouldn't say. Hate speech, criminal material, exploitation of vulnerable users – all of this is content that the most successful large language models in the world shouldn't produce, and that their safety features should guard against.

---

## Why it matters

Hate speech, criminal material, exploitation of vulnerable users – all of this is content that the most successful large language models in the world shouldn't produce, and that their safety features should guard against.

---

## Key Points

- Hate speech, criminal material, exploitation of vulnerable users – all of this is content that the most successful large language models in the world shouldn't produce, and that their safety features should guard against.

---

## Nauti's Take

Upside: Bartlett's podcast surfaces an underrated truth – external red-teamers and jailbreakers harden chatbots faster than any internal safety team alone. Downside: the same techniques travel to forums where bad actors coax ChatGPT, Gemini, and Claude into hate speech or step-by-step harm. Takeaway: vendors must keep their safety filters constantly upgraded, and users should never treat a chatbot as a trusted source on sensitive topics.

---


## FAQ

**Q:** What is The AI jailbreakers – podcast about?

**A:** Journalist Jamie Bartlett on the people trying to get AI to say things it shouldn't – for the safety of us all. All the major AI chatbots – from ChatGPT to Gemini to Grok to Claude – have things they should and shouldn't say.

**Q:** Why does it matter?

**A:** Hate speech, criminal material, exploitation of vulnerable users – all of this is content that the most successful large language models in the world shouldn't produce, and that their safety features should guard against.

**Q:** What are the key takeaways?

**A:** Safety guardrails matter because the world's most successful large language models can otherwise be coaxed into producing hate speech, criminal material, or content that exploits vulnerable users – and the jailbreakers probing those limits play a role in exposing where the guardrails fail.

---

## Related Topics

- [anthropic](https://news.ainauten.com/en/tag/anthropic)
- [ai-safety](https://news.ainauten.com/en/tag/ai-safety)

---

## Sources

- [The AI jailbreakers – podcast](https://www.theguardian.com/news/audio/2026/may/08/the-ai-jailbreakers-podcast) - The Guardian AI

---

## About This Article

This article is a synthesis of 1 sources, curated and summarized by AInauten News. We aggregate AI news from trusted sources and provide bilingual (German/English) coverage.

**Publisher**: [AInauten](https://www.ainauten.com) | **Site**: [news.ainauten.com](https://news.ainauten.com)

---

*Last Updated: 2026-05-08*
