ai-for-less-suffering.com

descriptive claim

The FLI AI Safety Index 2024 panel found that all flagship models from the six evaluated companies were vulnerable to adversarial jailbreak attacks.

desc_fli_index_2024_all_models_jailbreakable

confidence
0.90

Evidence (1)

supports (1)

  • FLI AI Safety Index 2024 (expert_estimate)
    weight: 0.90
    locator: Key Findings: Jailbreaks

    β€œAll the flagship models were found to be vulnerable to adversarial attacks.”

Camps holding this claim (3)