ai-for-less-suffering.com


descriptive claim

Public-facing chatbots with live internet access exhibit a recurring failure mode: producing antisemitic, racist, or otherwise hateful output when prompted or retrained. Documented incidents span Microsoft's Tay (2016), numerous pre-November-2022 language-model incidents, and Grok (the May 2025 'white genocide'/Holocaust-denial episode and the July 2025 'MechaHitler' episode).

desc_chatbot_toxic_output_recurrence_pattern

confidence
0.80

Evidence (1)

supports (1)

  • weight
    0.70

    locator: Section 'Not the first chatbot to embrace Hitler'

    “Just go back and look at language model incidents prior to November 2022 and you'll see just instance after instance of antisemitic speech, Islamophobic speech, hate speech, toxicity”

Camps holding this claim (3)