Risk assessment for new genAI use cases
Proposal: require a lightweight risk review before deploying genAI in a new workflow, consistent with a risk-management framework approach.
Background
Many organizations are moving from experimentation with generative AI to everyday use in drafting, summarizing, coding, customer support, and analytics. That shift creates new risks: data leakage, hallucinated outputs, IP and confidentiality issues, security threats, and unclear accountability for decisions influenced by AI-generated content.
NIST’s AI Risk Management Framework (AI RMF) provides a voluntary structure for managing AI risk across the lifecycle. NIST has also published a Generative AI Profile to help organizations identify and address risks that are particularly relevant to generative systems. This proposal turns that guidance into an internal decision: what governance model should our company adopt right now, and what controls are non-negotiable?
The objective is not to slow innovation. The objective is to make genAI use safe, auditable, and aligned with business goals—so teams can adopt tools confidently, customers are protected, and leadership can demonstrate responsible oversight.
In the comments, list real use cases (marketing copy, HR templates, code suggestions, customer emails), identify the data types involved, and flag compliance constraints. The best option below is the one that enables value while controlling risk consistently.
Non-negotiable controls
- Risk review: the lightweight review proposed above, completed before genAI is deployed in any new workflow.
- Human review: a person reviews AI-assisted content before it is sent to customers, regulators, or the public.
- Audit logging: maintain logs of approved genAI usage in business processes to enable auditing and incident response (see the sketch after this list).
- Training: train staff on safe use, including explicit guidance on prohibited data types and common genAI failure modes.
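To make the logging control concrete, the sketch below shows one way a structured usage record could back an append-only audit log. It is an illustrative Python example under assumed conventions, not a prescribed schema: the GenAIUsageRecord class and every field name (use_case, data_classes, human_reviewed, and so on) are hypothetical and should be adapted to your own logging pipeline.

```python
# Minimal sketch of a structured genAI usage log entry (Python 3.10+).
# All names are illustrative assumptions, not a mandated schema.
import json
import uuid
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class GenAIUsageRecord:
    use_case: str                # approved use-case category, e.g. "drafting"
    tool: str                    # which approved genAI system was used
    user_id: str                 # who ran the prompt, for accountability
    data_classes: list[str]      # data types involved, e.g. ["internal"]
    human_reviewed: bool         # was the output reviewed before external use?
    reviewer_id: str | None = None
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialize the record for an append-only audit log."""
        return json.dumps(asdict(self))

# Example: log one approved usage event.
record = GenAIUsageRecord(
    use_case="summarization",
    tool="approved-internal-assistant",
    user_id="u-1234",
    data_classes=["internal"],
    human_reviewed=True,
    reviewer_id="u-5678",
)
print(record.to_json())
```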
Governance options
- Broad use: allow genAI for low-risk work generally, with mandatory training, documented dos and don'ts, and role-based guidance tied to risk-management practices.
- Approved categories: permit genAI only for approved categories of work (e.g., drafting, summarization) and require a documented risk assessment for each new use case, aligning with structured risk management (see the sketch after this list).
- Restricted sensitive workflows: prohibit genAI in sensitive workflows (customer decisions, regulated outputs, confidential data) unless an explicit approval and monitoring process is in place, reflecting the need to address generative-AI-specific risks.
- Approved tools only: employees use only company-approved genAI systems configured with organizational safeguards, logging, and access controls.
- Public tools for low-risk tasks: permit public genAI tools for low-risk tasks, with strict prohibitions on entering confidential or personal data and with defined review requirements.
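As a companion sketch, the documented risk assessment in the approved-categories option could begin as a simple intake-and-triage step. Everything below is an assumption made for illustration: the sensitive-data categories, the audience list, and the three review tiers are invented, and the routing rules are not an official NIST method. A real review would map the intake onto your AI RMF Govern, Map, Measure, and Manage activities.

```python
# Minimal sketch of a lightweight risk-review triage for a proposed
# genAI use case. Categories and routing rules are illustrative
# assumptions, not an official NIST scoring method.
from dataclasses import dataclass

SENSITIVE_DATA = {"customer_pii", "financial", "health", "confidential"}
EXTERNAL_AUDIENCES = {"customers", "regulators", "public"}

@dataclass
class UseCaseIntake:
    name: str
    description: str
    data_classes: set[str]    # data types the workflow touches
    audience: str             # who consumes the output
    affects_decisions: bool   # does output influence decisions about people?

def triage(intake: UseCaseIntake) -> str:
    """Route a new use case to a review tier (illustrative rules only)."""
    if intake.data_classes & SENSITIVE_DATA or intake.affects_decisions:
        return "full review: explicit approval and monitoring required"
    if intake.audience in EXTERNAL_AUDIENCES:
        return "standard review: human review of outputs required"
    return "fast track: approved category, log usage and proceed"

# Example: triage a marketing-copy use case.
print(triage(UseCaseIntake(
    name="marketing-copy-drafts",
    description="Draft first-pass copy for campaign emails",
    data_classes={"public"},
    audience="internal",
    affects_decisions=False,
)))
```

Routing on data sensitivity and decision impact keeps the fast track genuinely fast while still forcing sensitive workflows through explicit approval and monitoring.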
Sources
- NIST AI Risk Management Framework (AI RMF 1.0, NIST AI 100-1): overview of NIST's voluntary framework to manage risks associated with AI and incorporate trustworthiness considerations into AI products, services, and systems.
- NIST Generative AI Profile (NIST AI 600-1): companion guidance proposing actions to address risks that are novel to or exacerbated by generative AI systems.