AI in Financial Reporting: Balancing Innovation and Risk

Clayton J. Mitchell, Paul Elggren, John Norton
July 15, 2025

Using AI tools in financial reporting can increase efficiency, but it requires robust governance. See how to balance the innovation of AI with risk management best practices.

As AI continues to reshape the business landscape, its adoption in accounting and financial reporting introduces transformative opportunities and potential risks. From automating transaction processing to reviewing contracts, AI tools can help increase efficiency, streamline operations, and uncover new insights. However, the adoption of AI also presents new challenges and considerations for maintaining effective risk management and control environments, especially when it comes to internal control over financial reporting (ICFR).

Organizations must proactively define governance strategies, risk tolerances, and accountability structures to make sure AI enhances, rather than compromises, financial reporting integrity. Drawing on our proprietary AI governance framework, our AI governance and risk team outlines key considerations for companies weighing AI in their financial reporting processes.

Policies and standards

With how quickly AI is evolving, it is tempting to start using AI immediately and write policies during or after implementation. However, that approach can introduce unnecessary risk, especially in financial reporting. Organizations should establish a range of policies and standards before implementation, including:

  • Access controls. Define procedures for granting, removing, and reviewing access to financial systems that use AI tools.
  • Data integrity and training. Clearly understand the input and output data. Beyond the AI model itself, data significantly influences results, from the training phase through ongoing maintenance of the AI.
  • Use case road maps. Create strategic plans that identify AI opportunities, associated risks, and necessary controls to mitigate material misstatements.
  • Governance frameworks. Develop policies tailored to the organization's risk tolerance to create a consistent, informed approach to AI integration.
  • SOX risk assessment. Understand how AI fits into in-scope business and IT processes as well as the overall internal control environment, including its involvement in control execution, the reliability of its data outputs used in the financial reporting process, and how a human-in-the-loop could mitigate the risks introduced.

Training, awareness, and engagement

An organization’s risk tolerance significantly influences its training and awareness strategy. For those with an extremely low risk tolerance, training might emphasize avoiding AI altogether. For organizations with a moderate-to-high risk tolerance, training and awareness should focus on setting guardrails for AI use and understanding its impact on ICFR.

Additionally, organizations should consider the risk exposure created by AI adoption, including the impact on ICFR for publicly traded companies. It’s essential to involve a variety of stakeholders in the training, awareness, and engagement phase, including:

  • Second line of defense. IT compliance, system and organization controls (SOC) program managers, and the SOX program management office are all roles that can serve as the human-in-the-loop double-checking AI output.
  • Third line of defense. Internal audit serves as the third line of defense, providing an independent check on the second line.
  • External auditors. Engaging external auditors early can help organizations clarify expectations and identify potential risks.
  • IT teams. IT teams should vet AI-related system changes, especially those touching financially relevant data, to comply with IT general controls.

Accountability and responsibility

As with any other organizational initiative, clear accountability and responsibility are key: if everyone is responsible, no one is actually accountable. A few roles that should hold organization-level responsibility for AI use in ICFR include:

  • AI governance leader or committee. Every company using AI should have a person or committee charged with day-to-day AI governance, including creating and implementing policies, managing risk, and educating the broader organization.
  • CEO and CFO. Especially in publicly traded companies, ultimate accountability for AI in financial reporting lies with the CEO and CFO, who must sign off on internal controls (Section 302 certification) and financial statements (Section 906 certification).
  • Chief accounting officer or controller and chief audit executive. These roles support the CFO by helping the organization understand and manage financial reporting risks.
  • Chief information officer and chief information security officer. Depending on organizational structure, these roles oversee the system-level responsibilities and changes introduced by AI.

Transparency

Organizations need to be open and transparent about their AI use, especially when considering how AI could negatively affect their business from a financial reporting perspective. Companies should demonstrate transparency in a variety of ways, including:

  • Documenting AI processes. Creating detailed AI process documentation that outlines AI use, including operational steps and decision-making logic, can support accurate reporting, enhance transparency, and help demonstrate to external auditors the organization’s awareness and management of associated risks.
  • Disclosing AI use. Risks and operational effects of AI use should be reflected in formal documentation, including in Form 10-K disclosures.
  • Communicating with auditors. Clear communication with external auditors regarding the roles and limitations of AI can assist organizations in identifying and mitigating potential risks.

Design and development

As organizations design and develop AI systems in financial reporting, they need to consider how these systems align with their strategic goals and with their current governance and risk posture. Following are a few things for companies to keep in mind:

  • Risk appetite. Organizations need to be educated on the specific risks AI introduces into their internal control environments, which might require external expertise.
  • Quick wins. Companies should identify where AI systems could significantly reduce manual effort, focus on implementing those initiatives first, and build on that momentum for more complex projects.
  • Prioritizing objective processes. Starting AI use in areas that produce objective results makes human-in-the-loop verification easier and can help build organizational momentum for AI.

Implementation and use

When organizations implement AI systems into their current financial reporting processes, a few strategies can help support success, including:

  • Adequate timelines. Organizations should expect the implementation process to take six to 12 months and plan to run AI processes alongside their existing systems during that time to build data and auditor confidence; a reconciliation sketch for such a parallel run follows this list.
  • Phased implementation. Beginning with low-risk, easily verifiable areas, such as extracting key contract terms, rather than subjective tasks like accounting estimates, can help build momentum for additional AI initiatives.
  • Agile methodology. Prioritizing quick wins while maintaining alignment with broader AI goals helps companies build on their success and keep up with the constant changes in AI technology.
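
To make the parallel run concrete, here is a minimal Python sketch of how AI-extracted contract terms might be reconciled against the existing system of record during that period, with every mismatch routed to a reviewer rather than auto-corrected. The data shapes, field names, and the reconcile helper are illustrative assumptions, not a prescribed design.

    # Minimal parallel-run reconciliation sketch. Assumes AI-extracted
    # contract terms are compared field by field against the existing
    # system of record; all names and shapes here are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Discrepancy:
        contract_id: str
        field_name: str
        existing_value: str
        ai_value: str

    def reconcile(existing: dict, ai_extracted: dict) -> list:
        """Flag every field where AI output disagrees with the system of record.

        Mismatches go to human review; the existing system stays authoritative
        for reporting until the parallel run builds sufficient confidence.
        """
        discrepancies = []
        for contract_id, existing_terms in existing.items():
            ai_terms = ai_extracted.get(contract_id, {})
            for field_name, existing_value in existing_terms.items():
                ai_value = ai_terms.get(field_name)
                if ai_value != existing_value:
                    discrepancies.append(
                        Discrepancy(contract_id, field_name,
                                    str(existing_value), str(ai_value))
                    )
        return discrepancies

    # Example: one mismatched payment term is flagged for review.
    existing = {"C-1001": {"payment_terms": "Net 30", "renewal_date": "2026-01-31"}}
    ai_extracted = {"C-1001": {"payment_terms": "Net 45", "renewal_date": "2026-01-31"}}
    for d in reconcile(existing, ai_extracted):
        print(f"Review {d.contract_id}.{d.field_name}: "
              f"system={d.existing_value!r}, AI={d.ai_value!r}")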

Testing and monitoring

Continual, consistent validation and monitoring are key to the success of any AI system, but they are especially important for AI in financial reporting. Validation controls can follow a tiered system to help catch issues:

  • Control 1: Human-in-the-loop. The first control is the simplest and most auditable: a human double-checks the AI output.
  • Control 2: IT application control. The second control produces an output with a confidence score and is monitored through sample reviews; a sketch of this gating pattern follows this list.
  • Control 3: AI checking AI. This control is a future-state model that would allow one AI program built on a different foundation to check the work of another.
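
As a rough illustration of how Controls 1 and 2 can work together, the Python sketch below routes low-confidence AI output to human review and pulls a sample of high-confidence output into periodic monitoring reviews. The threshold, sampling rate, and function names are hypothetical; an organization would calibrate them to its own risk tolerance.

    # Tiered-validation routing sketch. Assumes the AI tool reports a
    # confidence score with each output; all values here are illustrative.
    import random

    CONFIDENCE_THRESHOLD = 0.95  # below this, fall back to Control 1
    SAMPLE_RATE = 0.10           # share of high-confidence output sampled

    def route_for_review(output_id: str, confidence: float) -> str:
        """Decide how a single AI output is validated under the tiered model."""
        if confidence < CONFIDENCE_THRESHOLD:
            # Control 1: a human double-checks every low-confidence output.
            return "human_review"
        if random.random() < SAMPLE_RATE:
            # Control 2: high-confidence output is accepted, but a sample
            # feeds the periodic reviews that monitor the control.
            return "accepted_sampled"
        return "accepted"

    print(route_for_review("journal-entry-042", confidence=0.91))  # human_review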

Additionally, monitoring should include:

  • Regular reviews. The frequency of reviews depends on process criticality, but, at a minimum, quarterly reviews can help companies catch issues, identify deviations from expected results, and make adjustments.
  • Extensive documentation. Companies should maintain logs of all AI tasks, decision points, and changes, and update those records whenever a change could materially affect the output; a minimal logging sketch follows this list.
  • Governance reevaluation. Creating road maps to reassess strategies at regular intervals can help organizations stay on track when encountering constant technology updates.
  • Board oversight. Organizations should have a board in place that understands the opportunities and risks introduced by AI and that has the requisite expertise to oversee and monitor those risks.
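
One lightweight way to keep such logs, sketched below in Python, is an append-only record of each AI task, its decision point, and the model version in effect, so reviewers and auditors can reconstruct what ran and who signed off. The file location and field names are illustrative assumptions, not a required schema.

    # Append-only AI task log sketch; the path and fields are illustrative.
    import json
    from datetime import datetime, timezone

    LOG_PATH = "ai_task_log.jsonl"  # hypothetical location

    def log_ai_task(task: str, model_version: str, decision: str,
                    reviewer: str = "") -> None:
        """Append one immutable record per AI task and decision point."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "task": task,
            "model_version": model_version,  # re-logged whenever the model or prompt changes
            "decision": decision,
            "reviewer": reviewer,  # set when a human-in-the-loop signs off
        }
        with open(LOG_PATH, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    log_ai_task("extract_contract_terms", "model-v2.3", "accepted", reviewer="jdoe")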

By embracing AI in financial reporting with a thoughtful, risk-aware approach, organizations can realize its benefits while maintaining the trust and confidence of stakeholders. The key is to balance AI innovation with the integrity and reliability of financial reporting processes.

Contact our AI governance and risk team

If you suspect there are vulnerabilities in your AI governance approach, our team specializes in helping companies build robust, future-ready AI governance, and we can help yours, too.

Clayton J. Mitchell
Managing Principal, Fintech

Paul Elggren
Managing Director, Internal Audit Consulting

John Norton
Consulting