AI Policy in Motion

Why Moving at the Speed of the AI Models You Govern Is Critical

Corey Minard, Julie DeMuth Mellendorf
3/25/2026

A living AI policy keeps governance aligned with evolving models, emerging risks, and regulatory change. Find out how to build one.

Static AI policies quickly become obsolete in a world where large language models evolve monthly and regulatory guidance shifts by the quarter. Yet many organizations are still governing AI as if it were a slow-moving IT asset, something that can be reviewed, approved, and parked. Governance frameworks must be dynamic, flexible, and responsive to the lightning speed of AI development. Organizations can take intentional steps to build living AI policies, not as documents to be published but as operating mechanisms designed to adapt to new technologies, emerging risks, and evolving enterprise behavior.

Why static policy fails

AI development cycles no longer operate on a yearly road map. Foundation models, such as OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini, update every few weeks. Business units experiment daily with new tools, integrations, and workflows that embed AI in customer support, analytics, or product offerings.

However, many organizations still treat AI policy like they treat their data retention or expense policies: They update it once a year and hope it lasts.

It doesn’t. And pretending otherwise creates a widening gap between what the policy says and what the business actually does. That gap is where risk accumulates.

We heard specific pain points firsthand in a recent cross-industry roundtable.

  • “By the time we rolled out our AI policy, the tools it referenced had already changed.”
  • “We don’t know how to keep our policies relevant without creating update fatigue.”
  • “How often should we update our AI policy? Quarterly? Monthly?”

These are the right concerns. They are also a signal that the underlying model is broken. Policy can no longer be treated as a static artifact. Forging answers to such critical issues means moving away from policy as a document and toward policy as an active governance process.

The nature of a living AI policy

A living AI policy is not a static document. It is an operating model. It is a governance process embedded directly into development, procurement, and day-to-day workflows – not something teams are expected to remember after once-a-year training.

Core characteristics of effective AI policies today include:

  • Tiered based on risk (for example, high-risk systems require prior review and sign-off)
  • Dynamic and regularly updated to reflect new capabilities or regulations
  • Integrated into model development workflows, not separate from them
  • Tracked and version-controlled like software documentation
  • Supported by training and ongoing awareness campaigns

In other words, AI policy should behave more like code than like a memo. If it doesn’t change when the systems change, it isn’t governing anything.
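To make “policy as code” concrete, here is a minimal sketch of a policy section tracked with a version number and a change log, the way software documentation is tracked. The class, field names, and versioning scheme are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class PolicySection:
    """One section of an AI policy, versioned like software documentation.

    All names here are hypothetical, for illustration only.
    """
    name: str
    risk_tier: str                       # e.g., "high", "medium", "low"
    version: str                         # bumped on every substantive update
    changelog: list = field(default_factory=list)

    def update(self, new_version: str, note: str) -> None:
        # Record the change the way a code change would be recorded,
        # so reviewers can see what moved and why.
        self.changelog.append(f"{self.version} -> {new_version}: {note}")
        self.version = new_version

# Example: a short, targeted update tied to a real change in the environment
acceptable_use = PolicySection("Acceptable use of generative AI", "medium", "1.2.0")
acceptable_use.update("1.3.0", "Added guidance for a new model release")
```

The point of the sketch is the mechanism, not the tooling: any format that carries a version, a tier, and a visible change history gives auditors and practitioners the same traceability that source control gives engineers.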

How to build an effective AI policy

Start with risk classification. Use the Map function of the NIST AI Risk Management Framework (AI RMF) to identify which systems are high risk and require stronger governance, such as decision-making models in human resources, lending, or healthcare. Then, map policy enforcement requirements based on use case and system criticality.
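As an illustration, a tiered classification like the one above can be expressed as a simple decision rule. The tiers, domains, and review requirements below are hypothetical examples for the sketch, not NIST’s own criteria.

```python
# Domains the organization treats as high risk (illustrative assumption)
HIGH_RISK_DOMAINS = {"human_resources", "lending", "healthcare"}

def classify_risk(use_case: dict) -> str:
    """Assign a governance tier from use-case attributes.

    The attribute names and thresholds are assumptions for illustration.
    """
    if use_case.get("domain") in HIGH_RISK_DOMAINS and use_case.get("automated_decision"):
        return "high"    # requires prior review and sign-off
    if use_case.get("customer_facing"):
        return "medium"  # periodic review
    return "low"         # self-attestation

# A lending model that makes automated decisions lands in the high tier
tier = classify_risk({"domain": "lending", "automated_decision": True})
```

In practice the inputs would come from an intake questionnaire or system inventory, but the principle is the same: enforcement requirements follow from use case and criticality, not from a one-size-fits-all rule.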

Next, recognize that if policy is not embedded where decisions are made, it will be ignored where it matters most. Embed policy checkpoints into operational processes, including:

  • Procurement workflows for external AI tools
  • Software development pipelines for internal models
  • Product launch reviews
  • Incident response planning
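A policy checkpoint embedded in one of these workflows might look something like the following sketch, which could run as a gate in a deployment pipeline. The field names and gating rules are assumptions for illustration, not a reference implementation.

```python
# The policy version systems must currently be reviewed against (assumed)
REQUIRED_POLICY_VERSION = "1.3.0"

def policy_gate(system: dict) -> tuple[bool, str]:
    """Block deployment when governance requirements are not met.

    Field names ("risk_tier", "signoff_current", "policy_version")
    are hypothetical, chosen for this sketch.
    """
    if system.get("risk_tier") == "high" and not system.get("signoff_current"):
        return False, "High-risk system requires current governance sign-off"
    if system.get("policy_version") != REQUIRED_POLICY_VERSION:
        return False, "System was reviewed against an outdated policy version"
    return True, "Checkpoint passed"

# A high-risk system without a current sign-off is stopped at the gate
approved, reason = policy_gate(
    {"risk_tier": "high", "signoff_current": False, "policy_version": "1.3.0"}
)
```

Because the gate keys off the policy version, every policy sprint automatically surfaces which systems were reviewed against stale guidance, which keeps the policy and the portfolio moving together.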

Implement quarterly or even monthly policy sprints to keep guidance fresh. Short, targeted updates aligned to real changes in models, vendors, or regulations are far more effective than a massive annual rewrite no one reads or follows.

Tactical recommendations

  • Refer to ISO/IEC 42001’s continual improvement clause to formalize update cycles
  • Develop lightweight update processes, such as change logs and quick-reference summaries
  • Launch AI policy roadshows and refreshers to embed policy culture
  • Monitor regulatory developments, including updates to the EU AI Act, to align implementation timelines accordingly

The goal is not policy perfection. It is policy relevance.

Frameworks and tools

  • NIST AI RMF: Manage and Measure functions for policy feedback loops
  • ISO/IEC 42001: Dynamic policy life cycle, continual improvement clauses
  • CISA Secure by Design: Iterative policy approach aligned with system risk levels

AI governance isn’t a one-and-done exercise. Policies that don’t move at the rapid speed of the AI models they govern quickly become governance theater – technically compliant, but operationally irrelevant. The organizations that win will be those that treat policy as a living process, not a static PDF. Because in AI, stale policy is not neutral. It is a risk the organization is actively choosing to accept.

Mitigate AI risk with AI governance
If your company uses AI, you need an AI governance plan. We can help.

Contact our AI governance team

Our team specializes in helping companies build robust, future-ready AI governance, including living AI policies. Contact us to get started.
Corey Minard
Senior Manager, Risk Consulting
Julie DeMuth Mellendorf
Studio Quality and Risk Management Leader, Crowe Studio
