3 Ways AI Governance Is Getting Harder for Boards

Greg B. Hahn, Julie DeMuth Mellendorf, Corey Minard
4/30/2026

AI governance is becoming more complex as embedded tools, rapid adoption, and unclear accountability challenge traditional oversight models.

Six months ago, conversations in boardrooms centered on whether and how aggressively to invest in AI. Today, that question has largely been answered. Far more than a discrete initiative or emerging capability, AI is becoming embedded in the fabric of enterprises.

At the same time, the strategic posture regarding AI is evolving unevenly. In some organizations, AI is tightly aligned to value creation and competitive advantage. In others, adoption is being driven by broad mandates, such as “automate 30% of work” or “deploy multiple use cases per employee,” that accelerate activity without clear strategic intent.

Regardless of the path, the outcome is the same: AI use is expanding rapidly, often in ways that outpace governance.

As adoption accelerates, a new reality is emerging. Governance models are under increasing strain, even as leading frameworks such as the NIST AI Risk Management Framework and ISO/IEC 42001 reinforce the need for structured, life cycle-based oversight. Governance must be formalized, and structures must evolve to operate in a more distributed, fast-moving, and less visible AI environment.

Compounding this challenge, the nature of AI itself is changing. Organizations are moving beyond assistive AI tools that support human work toward agentic AI systems that can plan, reason, use tools, and execute multistep tasks with limited human intervention. Agentic systems generate outputs for review as assistive tools do, but they also act, interact with other systems, and produce downstream consequences.

While agentic AI raises new governance considerations, many organizations are still working through foundational challenges associated with generative AI, including data exposure, inconsistent use, and limited visibility. Governance efforts must balance both realities simultaneously.

In recent discussions with directors and executives, three challenges surfaced repeatedly, each making AI governance more complex, less centralized, and harder to operationalize.

Challenge 1: The governance perimeter is dissolving

AI has moved beyond bespoke models and isolated use cases and is increasingly embedded, often by default, in the enterprise software organizations already rely on.

From enterprise resource planning and customer relationship management platforms to human resources systems and productivity tools, vendors are rapidly integrating AI capabilities into core workflows. These features often are deployed not through formal transformation programs, but via routine updates, configuration changes, or gradual user adoption. This shift fundamentally changes the governance equation.

While maintaining a centralized inventory is becoming more difficult, regulatory expectations have not changed. Frameworks such as ISO/IEC 42001 still require organizations to maintain a comprehensive view of AI systems, including third-party and embedded capabilities. As AI becomes more deeply integrated into externally managed platforms, visibility fragments and governance blind spots increase.

Boards should be asking:

  • How do we maintain an accurate inventory of AI capabilities when they are embedded in vendor platforms?
  • What governance expectations do we place on third-party providers, and how do we verify them?
  • Are our risk, compliance, and procurement processes equipped to evaluate AI-enabled software?
  • Have we assessed intellectual property and data retention risks embedded in vendor AI terms of service?
  • Do we understand which vendor platforms are deploying autonomous or agent-like capabilities and what access those systems have in our environment?

Governance goes beyond what an organization builds. It also extends to what an organization buys, configures, and inherits, including what it might not fully see.

Challenge 2: AI adoption is happening faster than it is being governed

AI use is spreading organically across the enterprise, often outside formal channels and increasingly driven by accessibility, utility, and, in some cases, executive mandate.

Employees are incorporating AI into daily workflows: drafting communications, analyzing data, generating code, and supporting decision-making. This “citizen development” dynamic is accelerating innovation, but it is also creating a layer of AI use that is difficult to see, measure, and control.

In parallel, AI agents are beginning to take actions on behalf of users, such as scheduling meetings, querying data, executing code, and triggering workflows across enterprise systems. These capabilities are often embedded in tools employees already use and might be activated without formal IT oversight. When users configure these systems with access to sensitive data or enterprise platforms, they are effectively enabling autonomous actions within the organization, often without a corresponding governance review. The result is a form of unmanaged AI driven by speed and practicality.

This bottom-up adoption creates a distinct class of risk. Dataflows into and out of AI tools might be invisible to IT, security, and privacy teams. Sensitive or proprietary information might be exposed through prompts or integrations. Practices vary across teams, which can lead to decreased consistency and reliability. Organizations lack a clear view of which use cases are high-risk, business critical, or subject to regulatory scrutiny.

The privacy implications are particularly significant. When employees input client data, proprietary information, or personal data into AI tools, they might inadvertently create dataflows that conflict with principles such as data minimization, purpose limitation, and cross-border restrictions. Agentic capabilities further amplify this risk by aggregating and acting on data across systems in ways that might exceed original intent.

More fundamentally, this trend challenges the assumption that governance can be enforced through policies alone. Policies that are not embedded into workflows, systems, and controls are routinely bypassed, not out of malice but because of momentum.

Boards should be asking:

  • Do we understand how AI is actually being used across the organization, not just how it is intended to be used?
  • Where might sensitive or proprietary information be exposed?
  • How are we identifying and prioritizing higher-risk or business-critical use cases?
  • Are governance policies translated into technical controls, or do they exist primarily on paper?
  • Do we have visibility into where autonomous or agentic capabilities are being deployed across the enterprise?

In this environment, governance becomes as much a cultural and behavioral challenge as a structural one. Awareness, training, and embedded guardrails are not merely supporting elements; they are central to effective control.

Challenge 3: The accountability gap is widening

AI is moving from experimentation into production, often faster than governance frameworks can mature.

AI-generated outputs are now influencing customer interactions, operational decisions, financial forecasts, and internal processes. Yet in many organizations, accountability remains unclear or fragmented.

Questions that were once theoretical are now immediate:

  • Who is accountable for an AI-influenced decision?
  • What level of human validation is required?
  • Can outputs be explained, reproduced, and audited if challenged?
  • Are escalation paths defined when outputs appear unreliable or biased?

Traditional governance models designed for deterministic systems do not translate cleanly to generative or probabilistic AI. Outputs might vary, explainability might be limited, and reliance on third-party models introduces additional opacity.

Accountability is further complicated by the fact that AI spans multiple functions, including technology, risk, legal, compliance, and the business, which makes clear ownership more difficult but no less essential.

This challenge becomes more pronounced as agentic systems are deployed. Multistep decision chains might execute without discrete human checkpoints, requiring governance models that define boundaries, escalation triggers, and oversight mechanisms for autonomous behavior.

Boards should be asking:

  • Which AI use cases are now business critical?
  • Where are we relying on outputs that we cannot fully validate?
  • Is accountability clearly defined across functions and committees?
  • Do risk, compliance, and internal audit functions have the capability to assess AI systems in practice?
  • Where are autonomous capabilities being deployed, and what controls govern their behavior?

As AI becomes embedded in core processes, governance must evolve from principles to operational discipline and be supported by clear ownership, cross-functional coordination, defined controls, and robust assurance.

From control to adaptability

Taken together, these shifts point to a broader conclusion: AI governance is becoming more distributed, more dynamic, and more dependent on organizational behavior than many anticipated.

As AI systems become more capable, and in some cases more autonomous, it is critical to recognize that governance no longer oversees only tools; it oversees systems that influence, and in some cases execute, decisions.

The dual challenge for boards is to strengthen oversight and to adapt it, which might require:

  • Expanding governance beyond centralized inventories to include vendor ecosystems and user behavior
  • Reinforcing cultural adoption through training, communication, and incentives
  • Clarifying accountability as AI becomes embedded in decision-making
  • Ensuring assurance functions evolve alongside the technology they oversee
  • Establishing governance structures for autonomous capabilities, including access controls, escalation triggers, and auditability

Boards should expect management to report on AI governance with increasing rigor, similar to financial reporting, cybersecurity, and compliance. This rigor includes visibility into use case inventory, risk classification, policy adherence, incident trends, and control effectiveness.

AI is not slowing down, and its integration into enterprises is accelerating in ways that make it less visible and harder to contain. Boards that recognize this shift and adapt their governance models accordingly will be better positioned to support innovation while maintaining trust. Those that rely on static frameworks might find that the ground has already moved beneath them.

Mitigate AI risk with AI governance
If your company is using AI, you need an AI governance plan. We can help.

Connect with our team to discuss how your organization can strengthen AI governance and manage emerging risks.


Greg B. Hahn
Principal, Consulting Markets & Growth Leader
Julie DeMuth Mellendorf
Studio Quality and Risk Management Leader
Corey Minard
Senior Manager, Risk Consulting