June 10–11, 2026
Two-day working session to finalize policy text, training plan, and implementation ownership.
Many teams already use generative AI for drafting, summarizing, research assistance, and code suggestions. The challenge is that informal adoption can outpace governance, producing inconsistent rules, unclear accountability, and uneven training. A short, structured offsite can accelerate alignment—especially if it produces a practical policy, a training plan, and a clear list of approved tools and use cases.
This proposal schedules a focused internal offsite to finalize and launch the company’s generative AI policy. The program should cover: safe-use expectations, prohibited data categories, human review requirements, escalation paths for high-risk use cases, and an implementation timeline for logging/auditability in business-critical workflows. The content is designed to align with NIST’s AI Risk Management Framework (AI RMF 1.0) and the NIST Generative AI Profile (NIST AI 600-1), which emphasize structured risk management across the lifecycle of AI systems.
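To make the logging/auditability item concrete, here is a minimal sketch of what an audit record and policy check could look like. Everything here is a hypothetical placeholder for discussion at the offsite: the field names, the `PROHIBITED` category set, and the `validate` rules are illustrative assumptions, not the actual policy.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class GenAIAuditRecord:
    """One illustrative audit-log entry for a generative AI interaction."""
    user_id: str
    tool: str                  # should appear on the approved-tools list
    use_case: str              # should map to an approved use case
    data_categories: list      # categories of data included in the prompt
    human_reviewed: bool       # whether the output received required human review
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical prohibited data categories; the real list is a policy decision.
PROHIBITED = {"customer_pii", "source_code_secrets"}

def validate(record: GenAIAuditRecord) -> list:
    """Return a list of policy violations found in the record (empty = compliant)."""
    violations = [c for c in record.data_categories if c in PROHIBITED]
    if violations and not record.human_reviewed:
        violations.append("high_risk_use_without_human_review")
    return violations

record = GenAIAuditRecord(
    user_id="u123",
    tool="approved-assistant",
    use_case="drafting",
    data_categories=["public_marketing_copy"],
    human_reviewed=True,
)
print(json.dumps(asdict(record), indent=2))
print(validate(record))
```

A schema like this is the kind of artifact the implementation-ownership discussion could assign: who defines the category list, who stores the records, and who reviews flagged entries.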
The vote is split into two separate decisions (dates and location) so respondents can signal every workable option. After voting, leadership can select the best-supported combination and proceed with booking and calendar holds.
Use comments to propose agenda topics and flag any compliance constraints that should shape the final policy.
Full two-day session as described above, covering policy text, training plan, and implementation ownership.
Two-day window that may better accommodate delivery schedules.
One-day option focused on approvals and rollout decisions.
Central location with strong transit access and nearby meeting facilities.
Convenient for South Bay teams and suitable for a focused working-session setup.
Alternative location for East Bay access and balanced commute times.
Framework for managing AI risks and incorporating trustworthiness considerations across the AI lifecycle.
Companion guidance focusing on risks and actions especially relevant to generative AI systems.