Six months ago, conversations in boardrooms centered on whether and how aggressively to invest in AI. Today, that question has largely been answered. Far more than a discrete initiative or emerging capability, AI is becoming embedded in the fabric of enterprises.
At the same time, the strategic posture regarding AI is evolving unevenly. In some organizations, AI is tightly aligned to value creation and competitive advantage. In others, adoption is being driven by broad mandates, such as “automate 30% of work” or “deploy multiple use cases per employee,” that accelerate activity without clear strategic intent.
Regardless of the path, the outcome is the same: AI use is expanding rapidly, often in ways that outpace governance.
As adoption accelerates, a new reality is emerging. Governance models are under increasing strain, even as leading frameworks such as the NIST AI Risk Management Framework and ISO/IEC 42001 reinforce the need for structured, lifecycle-based oversight. Governance must be formalized, and structures must evolve to operate in a more distributed, fast-moving, and less visible AI environment.
Compounding this challenge, the nature of AI itself is changing. Organizations are moving beyond assistive AI tools that support human work toward agentic AI systems that can plan, reason, use tools, and execute multistep tasks with limited human intervention. Like assistive tools, agentic systems generate outputs for review; unlike them, they also act, interact with other systems, and produce downstream consequences.
While agentic AI raises new governance considerations, many organizations are still working through foundational challenges associated with generative AI, including data exposure, inconsistent use, and limited visibility. Governance efforts must balance both realities simultaneously.
In recent discussions with directors and executives, three challenges surfaced repeatedly, each making AI governance more complex, less centralized, and harder to operationalize.
AI has moved beyond bespoke models and isolated use cases and is increasingly embedded, often by default, in the enterprise software organizations already rely on.
From enterprise resource planning and customer relationship management platforms to human resources systems and productivity tools, vendors are rapidly integrating AI capabilities into core workflows. These features are often deployed not through formal transformation programs but via routine updates, configuration changes, or gradual user adoption. This shift fundamentally changes the governance equation.
While maintaining a centralized inventory is becoming more difficult, regulatory expectations have not changed. Frameworks such as ISO/IEC 42001 still require organizations to maintain a comprehensive view of AI systems, including third-party and embedded capabilities. As AI becomes more deeply integrated into externally managed platforms, visibility fragments and governance blind spots increase.
Boards should be asking:
Governance goes beyond what an organization builds. It must also cover what an organization buys, configures, and inherits, including what it might not fully see.
AI use is spreading organically across the enterprise, often outside formal channels and increasingly driven by accessibility, utility, and, in some cases, executive mandate.
Employees are incorporating AI into daily workflows: drafting communications, analyzing data, generating code, and supporting decision-making. This “citizen development” dynamic is accelerating innovation, but it is also creating a layer of AI use that is difficult to see, measure, and control.
In parallel, AI agents are beginning to take actions on behalf of users, such as scheduling meetings, querying data, executing code, and triggering workflows across enterprise systems. These capabilities are often embedded in tools employees already use and might be activated without formal IT oversight. When users configure these systems with access to sensitive data or enterprise platforms, they are effectively enabling autonomous actions within the organization, often without a corresponding governance review. The result is a form of unmanaged AI driven by speed and practicality.
This bottom-up adoption creates a distinct class of risk. Dataflows into and out of AI tools might be invisible to IT, security, and privacy teams. Sensitive or proprietary information might be exposed through prompts or integrations. Practices vary across teams, eroding consistency and reliability. Organizations lack a clear view of which use cases are high-risk, business-critical, or subject to regulatory scrutiny.
The privacy implications are particularly significant. When employees input client data, proprietary information, or personal data into AI tools, they might inadvertently create dataflows that conflict with principles such as data minimization, purpose limitation, and cross-border restrictions. Agentic capabilities further amplify this risk by aggregating and acting on data across systems in ways that might exceed original intent.
More fundamentally, this trend challenges the assumption that governance can be enforced through policies alone. Policies that are not embedded into workflows, systems, and controls are routinely bypassed, not out of malice but because of momentum.
Boards should be asking:
In this environment, governance becomes as much a cultural and behavioral challenge as a structural one. Awareness, training, and embedded guardrails are not merely supporting elements; they are central to effective control.
AI is moving from experimentation into production, often faster than governance frameworks can mature.
AI-generated outputs are now influencing customer interactions, operational decisions, financial forecasts, and internal processes. Yet in many organizations, accountability remains unclear or fragmented.
Questions that were once theoretical are now immediate:
Traditional governance models designed for deterministic systems do not translate cleanly to generative or probabilistic AI. Outputs might vary, explainability might be limited, and reliance on third-party models introduces additional opacity.
Accountability is further complicated by the fact that AI spans multiple functions, including technology, risk, legal, compliance, and the business, which makes clear ownership more difficult but no less essential.
This challenge becomes more pronounced as agentic systems are deployed. Multistep decision chains might execute without discrete human checkpoints, requiring governance models that define boundaries, escalation triggers, and oversight mechanisms for autonomous behavior.
Boards should be asking:
As AI becomes embedded in core processes, governance must evolve from principles to operational discipline and be supported by clear ownership, cross-functional coordination, defined controls, and robust assurance.
Taken together, these shifts point to a broader conclusion: AI governance is becoming more distributed, more dynamic, and more dependent on organizational behavior than many anticipated.
As AI systems become more capable, and in some cases more autonomous, it is critical to recognize that governance no longer covers only tools; it covers systems that influence, and in some cases execute, decisions.
The dual challenge for boards is to strengthen oversight and to adapt it, which might require:
Boards should expect management to report on AI governance with increasing rigor, similar to financial reporting, cybersecurity, and compliance. This rigor includes visibility into use case inventory, risk classification, policy adherence, incident trends, and control effectiveness.
AI is not slowing down, and its integration into enterprises is accelerating in ways that make it less visible and harder to contain. Boards that recognize this shift and adapt their governance models accordingly will be better positioned to support innovation while maintaining trust. Those that rely on static frameworks might find that the ground has already moved beneath them.