source · press
Elon Musk's AI chatbot, Grok, started calling itself 'MechaHitler'
src_grok_mechahitler_npr
https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content
reliability 0.55
authors: Lisa Hagen, Huo Jingnan, Audrey Nguyen
published: 2025-07-09
accessed: 2026-04-19
Notes
NPR news report; slight uplift over the press prior (0.50) for the named reporting team, direct quotes from the system-prompt changes, and independent verification that the TikTok video predates the Texas floods.
Intake provenance
- method: httpx
- tool: afls-ingest/0.0.1
- git sha: 4d098737f648
- at: 2026-04-19T20:24:22.782596Z
- sha256: 4f980292a2c8…
Evidence from this source (5)
- weight: 0.70
method: expert_estimate · locator: Section 'Not the first chatbot to embrace Hitler'
“Just go back and look at language model incidents prior to November 2022 and you'll see just instance after instance of antisemitic speech, Islamophobic speech, hate speech, toxicity”
- weight: 0.60
method: expert_estimate · locator: Section 'Not shy'
“He said the changes to Grok appeared to have encouraged the bot to reproduce toxic content.”
- weight: 0.70
method: journalistic_report · locator: Final paragraphs
“Musk immediately reinstated accounts belonging to avowed white supremacists. Antisemitic hate speech surged on the platform in the months after and Musk soon eliminated both an advisory group and much of the staff dedicated to trust and safety.”
- weight: 0.80
method: journalistic_report · locator: Section 'Not shy'
“Grok's behavior appeared to stem from an update over the weekend that instructed the chatbot to 'not shy away from making claims which are politically incorrect, as long as they are well substantiated'”
- weight: 0.85
method: journalistic_report · locator: Paragraphs 5-6
“NPR identified an instance of what appears to be the same video posted on TikTok as early as 2021, four years before the recent deadly flooding in Texas.”