source · primary doc
FLI AI Safety Index 2024
src_fli_ai_safety_index_2024
https://futureoflife.org/document/fli-ai-safety-index-2024/
reliability 0.85
authors: Future of Life Institute
published: 2024-12-11
accessed: 2026-04-19
Notes
FLI-published index with a named independent expert panel; this is the primary document of the evaluation itself. Reliability is set slightly below the primary_doc prior because FLI is an advocacy organization with a stated position on AI risk.
Intake provenance
- method: httpx
- tool: afls-ingest/0.0.1
- git sha: 4d098737f648
- at: 2026-04-19T23:07:10.299958Z
- sha256: a56b7764b9bc…
Evidence from this source (5)
- weight: 0.85
method: expert_estimate · locator: Panellist Comments (Stuart Russell)
“none of the current activity provides any kind of quantitative guarantee of safety; nor does it seem possible to provide such guarantees given the current approach to AI via giant black boxes”
- weight: 0.92
method: expert_estimate · locator: Key Findings: Control-Problem
“the review panel deemed the current strategies of all companies inadequate for ensuring that these systems remain safe and under human control”
- weight: 0.90
method: expert_estimate · locator: Key Findings: Jailbreaks
“All the flagship models were found to be vulnerable to adversarial attacks.”
- weight: 0.95
method: primary_testimony · locator: Methodology section
“An independent review panel of leading experts... volunteered to assess the companies' performances across 42 indicators of responsible conduct”
- weight: 0.98
method: primary_testimony · locator: Independent Review Panel section