source · primary doc
A Right to Warn about Advanced Artificial Intelligence
src_right_to_warn_letter
authors: Current and former employees of frontier AI companies
published: 2024-06-04
accessed: 2026-04-19
Notes
Signed open letter from frontier-lab employees (OpenAI, DeepMind, Anthropic); primary testimony of the signatories' stated positions. Reliability is near the primary_doc prior: the letter is authoritative about what the signatories claim, not about the underlying empirical risk assertions.
Intake provenance
- method: httpx
- tool: afls-ingest/0.0.1
- git sha: 604c9dfd252a
- at: 2026-04-19T18:50:30.394224Z
- sha256: f03f005be90c…
Evidence from this source (4)
- weight: 0.90
method: primary_testimony · locator: Paragraph beginning 'So long as there is no effective government oversight'
“Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”
- weight: 0.85
method: primary_testimony · locator: Paragraph beginning 'AI companies possess substantial non-public information'
“AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm.”
- weight: 0.85
method: primary_testimony · locator: Paragraph beginning 'So long as there is no effective government oversight'
“current and former employees are among the few people who can hold them accountable to the public”
- weight: 0.95
method: primary_testimony · locator: Principles list (bulleted section)
“We therefore call upon advanced AI companies to commit to these principles”