descriptive claim
The NIST AI Risk Management Framework (AI RMF 1.0), released January 26, 2023, is a voluntary-use framework developed by NIST through a consensus-driven public process to incorporate trustworthiness considerations into AI design, development, use, and evaluation.
desc_nist_ai_rmf_voluntary
confidence 0.95
Evidence (2)
supports (2)
- AI Risk Management Framework (primary testimony, weight 0.90)
locator: Overview, paragraph 2
"Released on January 26, 2023, the Framework was developed through a consensus-driven, open, transparent, and collaborative process"
- AI Risk Management Framework (primary testimony, weight 0.95)
locator: Overview of the AI RMF
“The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.”