source · paper
Superintelligence Strategy: Expert Version
src_superintelligence_strategy_maim
https://arxiv.org/abs/2503.05628
reliability: 0.78
authors: Dan Hendrycks, Eric Schmidt, Alexandr Wang
published: 2025-03-07
accessed: 2026-04-19
Notes
arXiv preprint, cs.CY. A strategy/policy argument by three prominent authors (the CAIS director, a former Google CEO, and the Scale AI CEO) rather than empirical research; rated slightly below the paper prior (0.82) because its claims are framework-level and advocacy-adjacent rather than empirically measured.
Intake provenance
- method: httpx
- tool: afls-ingest/0.0.1
- git sha: 4d098737f648
- at: 2026-04-19T20:55:00.200951Z
- sha256: 66ecf23165e2…
Evidence from this source (4)
- weight: 0.90
method: expert_estimate · locator: Abstract
“Given the relative ease of sabotaging a destabilizing AI project -- through interventions ranging from covert cyberattacks to potential kinetic strikes on datacenters -- MAIM already describes the strategic picture AI superpowers find themselves in.”
- weight: 0.95
method: expert_estimate · locator: Abstract
“Taken together, the three-part framework of deterrence, nonproliferation, and competitiveness outlines a robust strategy to superintelligence in the years ahead.”
- weight: 0.95
method: expert_estimate · locator: Abstract
“We introduce the concept of Mutual Assured AI Malfunction (MAIM): a deterrence regime resembling nuclear mutual assured destruction (MAD) where any state's aggressive bid for unilateral AI dominance is met with preventive sabotage by rivals.”
- weight: 0.85
method: expert_estimate · locator: Abstract
“widespread proliferation of capable AI hackers and virologists would lower barriers for rogue actors to cause catastrophe.”