descriptive claim
The FLI AI Safety Index 2024 panel found that all flagship models from the six evaluated companies were vulnerable to adversarial jailbreak attacks.
desc_fli_index_2024_all_models_jailbreakable
confidence 0.90
Evidence (1)
supports (1)
- FLI AI Safety Index 2024 (expert_estimate, weight 0.90)
locator: Key Findings: Jailbreaks
"All the flagship models were found to be vulnerable to adversarial attacks."