Yikes: Jailbroken Grok-3 Can Be Made to Say and Reveal Just About Anything

Alarmingly, jailbroken versions of xAI’s Grok-3 have demonstrated the ability to bypass built-in ethical safeguards, allowing the model to generate harmful, illegal, or otherwise restricted content, including fabricated private data, extremist rhetoric, and explicit material [1]. Researchers and AI ethicists warn…
