source · primary doc
Introducing the Frontier Safety Framework
src_deepmind_frontier_safety_framework
https://deepmind.google/blog/introducing-the-frontier-safety-framework/
authors: Anca Dragan, Helen King, Allan Dafoe
published: 2024-05-17
accessed: 2026-04-19
Notes
First-party announcement by Google DeepMind describing its own policy framework. Treated as primary_doc (a policy artifact) rather than blog; weight adjusted slightly downward from the 0.90 prior because the framework describes itself as exploratory and non-binding.
Intake provenance
- method: httpx
- tool: afls-ingest/0.0.1
- git sha: 4d098737f648
- at: 2026-04-19T20:23:07.609231Z
- sha256: 0f4a70ad7433…
Evidence from this source (5)
- weight: 0.95
method: primary_testimony · locator: Section: The framework
“Identifying capabilities a model may have with potential for severe harm... Evaluating our frontier models periodically to detect when they reach these Critical Capability Levels... Applying a mitigation plan when a model passes our early warning evaluations.”
- weight: 0.90
method: primary_testimony · locator: Section: Risk domains and mitigation levels
“These measures, however, may also slow down the rate of innovation and reduce the broad accessibility of capabilities. Striking the optimal balance between mitigating risks and fostering access and innovation is paramount”
- weight: 0.90
method: primary_testimony · locator: Section: Risk domains and mitigation levels
“For machine learning R&D, the focus is on whether models with such capabilities would enable the spread of models with other critical capabilities, or enable rapid and unmanageable escalation of AI capabilities.”
- weight: 0.90
method: primary_testimony · locator: Intro paragraphs
“The Framework is exploratory and we expect it to evolve significantly... We aim to have this initial framework fully implemented by early 2025.”
- weight: 0.95
method: primary_testimony · locator: Section: Risk domains and mitigation levels
“Our initial set of Critical Capability Levels is based on investigation of four domains: autonomy, biosecurity, cybersecurity, and machine learning research and development (R&D).”