Company generative AI policy: which governance model should we adopt?

Proposal by the Concorder Enterprise Hub group
Moderator: Marino

Proposal text

Here is the issue we want to address together: click on each paragraph to add your votable contribution.

Context: genAI adoption is fast, governance must catch up

Many organizations are moving from experimentation with generative AI to everyday use in drafting, summarizing, coding, customer support, and analytics. That shift creates new risks: data leakage, hallucinated outputs, IP and confidentiality issues, security threats, and unclear accountability for decisions influenced by AI-generated content.

NIST’s AI Risk Management Framework (AI RMF) provides a voluntary structure for managing AI risk across the lifecycle. NIST has also published a Generative AI Profile to help organizations identify and address risks that are particularly relevant to generative systems. This proposal turns that guidance into an internal decision: what governance model should our company adopt right now, and what controls are non-negotiable?

The objective is not to slow innovation. The objective is to make genAI use safe, auditable, and aligned with business goals—so teams can adopt tools confidently, customers are protected, and leadership can demonstrate responsible oversight.

What this proposal asks you to vote on

  • Governance model: Choose a policy tier that matches our risk tolerance and operational reality.
  • Tooling approach: Decide whether we limit work to approved systems or allow public tools under strict rules.
  • Controls: Select the baseline safeguards we implement across departments.

Use comments to list real use cases (marketing copy, HR templates, code suggestions, customer emails), identify the data types involved, and flag compliance constraints. The best option will be the one that enables value while controlling risk consistently.

Voting options

Vote on the proposed options to find the best solution together.

Risk assessment for new genAI use cases

Require a lightweight risk review before deploying genAI in a new workflow, consistent with a risk-management framework approach.
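
A lightweight risk review could be as simple as a structured record plus a triage rule. The sketch below is purely illustrative: the class name, fields, and tier-mapping logic are hypothetical assumptions, not the review process this proposal defines.

```python
from dataclasses import dataclass, field

# Hypothetical lightweight risk-review record for a new genAI use case.
# The fields and the triage rule are illustrative placeholders only.
@dataclass
class GenAIUseCaseReview:
    use_case: str                       # e.g. "marketing copy drafting"
    data_types: list[str] = field(default_factory=list)
    external_facing: bool = False       # does output reach customers/regulators?
    uses_confidential_data: bool = False

    def triage(self) -> str:
        """Rough triage: which policy tier a new use case likely needs."""
        if self.uses_confidential_data:
            return "tier-3: high-control, explicit approval required"
        if self.external_facing:
            return "tier-2: controlled use, human review required"
        return "tier-1: open use under standard training and rules"
```

A record like this also gives the review a paper trail: the completed object can be stored alongside the approval decision.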

No votes yet

Human review requirement for external-facing outputs

Require human review before sending AI-assisted content to customers, regulators, or the public.

No votes yet

Logging and auditability for business use

Maintain logs for approved genAI usage in business processes to enable auditing and incident response.
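
As a minimal sketch only (the wrapper, function name, and log fields below are hypothetical, not part of this proposal): logging who called which approved tool, when, and for what use case — storing only a hash of the prompt — gives an audit trail without the log itself leaking confidential content.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger for approved genAI usage.
audit_log = logging.getLogger("genai.audit")

def log_genai_call(user: str, tool: str, use_case: str, prompt: str) -> dict:
    """Record an approved genAI call for later auditing and incident response.

    Only a SHA-256 hash of the prompt is stored, so the audit log can be
    retained and searched without duplicating sensitive input text.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "use_case": use_case,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
    }
    audit_log.info(json.dumps(record))
    return record
```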

No votes yet

Mandatory training and a clear “do not input” data list

Train staff on safe use, including explicit guidance on prohibited data types and common genAI failure modes.
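
A "do not input" list can also be enforced in tooling, not just in training. The sketch below is a hedged illustration: the categories and regex patterns are rough placeholders, and a real list would be maintained by legal/compliance and be far more complete.

```python
import re

# Illustrative "do not input" patterns only; not a vetted compliance list.
PROHIBITED_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def check_prompt(text: str) -> list[str]:
    """Return the prohibited data categories detected in a prompt."""
    return [name for name, pat in PROHIBITED_PATTERNS.items() if pat.search(text)]
```

A check like this can warn the user (or block submission) before a prompt ever reaches an external model.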

No votes yet

Tier 1: Open use with clear rules and training

What it means

Allow broad genAI use for low-risk work, with mandatory training, documented do’s/don’ts, and role-based guidance tied to risk management practices.

No votes yet

Pro argument (Marino):
NIST AI RMF is intended to help organizations incorporate trustworthiness considerations into design, development, use, and evaluation of AI systems (NIST AI RMF).

Tier 2: Controlled use for approved use cases (default)

What it means

Permit genAI use only for approved categories of work (e.g., drafting, summarization) and require documented risk assessment for new use cases, aligning with structured risk management.

No votes yet

Tier 3: High-control model for sensitive domains

What it means

Restrict genAI use in sensitive workflows (customer decisions, regulated outputs, confidential data) unless an explicit approval and monitoring process is in place, reflecting the need to address generative-AI-specific risks.

No votes yet

Pro argument (Marino):
NIST’s Generative AI Profile is designed to help organizations identify unique risks posed by generative AI and propose risk-management actions aligned to organizational goals (NIST Generative AI Profile).

Approved tools only (enterprise-controlled environments)

Employees use only company-approved genAI systems configured with organizational safeguards, logging, and access controls.

No votes yet

Allow public tools with strict rules

Permit public genAI tools for low-risk tasks, with strict prohibitions on entering confidential or personal data and with defined review requirements.

No votes yet

Sources

Comments