Balancing Risk and Reward in the Face of AI Regulations

How To Determine Your AI Risk Appetite

Clayton J. Mitchell, David Moncure
1/23/2025

AI technologies – and the AI regulation landscape – are changing every day. Organizations can take proactive steps to determine their risk appetite and stay compliant.

AI regulations and guidance are still in their infancy, and the landscape continues to shift in complex ways. Some regulations provide prescriptive rules while others offer principles-based guidance, but all require organizations to understand the impact of AI, adapt to new developments, and implement effective change management strategies. With proposed legislation pending at various state and national levels and sectoral guidance still forming, it can be difficult for companies to model their conduct after established regulatory frameworks.

It is not just regulators, such as the U.S. Department of Justice (DOJ) and the Federal Trade Commission (FTC), that businesses need to consider. In many cases, standard-setters such as the National Institute of Standards and Technology (NIST), the International Organization for Standardization (ISO), and others offer best practices that also set expectations. Additionally, businesses need to consider how non-AI-focused regulations, such as Section 5 of the Federal Trade Commission Act, “Unfair or Deceptive Acts or Practices,” and cybersecurity and privacy laws affect AI initiatives.

So how can companies operate within their risk appetite and tolerance with such uncertainty in the regulatory landscape? Our AI governance team offers some suggestions that apply to organizations across industries.


A brief overview of the current regulatory environment

Amid ambiguity, several notable AI-related regulations and initiatives have emerged, and they are shaping the current regulatory environment:

  • The European Union (EU) Artificial Intelligence Act: Published in July 2024, this legislation aims to offer a coordinated framework to regulate the development, deployment, and use of AI systems in the EU. It uses a risk-based approach and is expected to serve as a foundation for AI regulations worldwide.
  • The FTC’s Operation AI Comply: This enforcement initiative takes direct action against companies that use AI to deceptively or unfairly market their products or services.
  • The DOJ’s “Evaluation of Corporate Compliance Programs” (ECCP): In September 2024, the DOJ updated its guidance on evaluating the effectiveness of corporate compliance programs, which includes a specific focus on considerations for AI-related risks and controls.
  • State-level initiatives: The Colorado Artificial Intelligence Act was the first comprehensive AI law at the state level, and other states have taken steps to regulate AI as well, including the Texas attorney general’s enforcement initiatives and Utah’s Artificial Intelligence Policy Act. California also is expected to become a pace-setter in AI regulations.

As the regulatory landscape evolves, companies must stay vigilant and proactive to address the implications of emerging regulations, standards, and guidance on their AI governance frameworks and strategies for AI use case development and deployment.

The different levels of AI risk appetite

When implementing AI solutions, it’s critical to weigh the risk exposure of each use case, at an aggregated level, against risk appetite. A company’s overall risk appetite is defined by how much risk the organization will accept or take to achieve its strategy and objectives. Typically, organizations define their risk appetite for each of the tier one risk pillars in their risk taxonomy, such as operational, reputational, legal and regulatory, privacy, and financial risk. For example, some organizations might have a low appetite for reputational risk because their brand is a critical asset, but they are willing to take on a moderate level of operational or financial risk.
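
To make this concrete, the following minimal sketch (in Python, with an invented ordinal scale and illustrative pillar and appetite values, not a prescribed taxonomy) shows how a per-pillar risk appetite statement might be compared against a use case’s assessed exposure:

```python
# Hypothetical sketch: compare a use case's assessed risk exposure
# against a per-pillar risk appetite. The scale, pillar names, and
# levels below are illustrative assumptions, not a standard taxonomy.

LEVELS = {"low": 1, "moderate": 2, "high": 3}

# Example appetite statement: low tolerance for reputational, legal,
# and privacy risk; moderate tolerance for operational and financial.
RISK_APPETITE = {
    "operational": "moderate",
    "reputational": "low",
    "legal_regulatory": "low",
    "privacy": "low",
    "financial": "moderate",
}

def exceeds_appetite(exposure: dict[str, str]) -> list[str]:
    """Return the pillars where assessed exposure exceeds appetite."""
    return [
        pillar
        for pillar, level in exposure.items()
        if LEVELS[level] > LEVELS[RISK_APPETITE[pillar]]
    ]

# A customer-facing chatbot use case might be assessed like this:
chatbot_exposure = {
    "operational": "moderate",
    "reputational": "high",
    "legal_regulatory": "moderate",
    "privacy": "high",
    "financial": "low",
}

breaches = exceeds_appetite(chatbot_exposure)
if breaches:
    print(f"Escalate for review; appetite exceeded in: {breaches}")
```

In practice, the scale, pillars, and escalation path would come from the organization’s own risk taxonomy and governance process.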

In terms of AI implementation, organizations should consider all the factors of their risk appetite, as AI can connect many overlapping pillars of risk to broader enterprise risk management (ERM) and compliance programs, such as privacy, cybersecurity, and third-party risk management. Several factors can influence how organizations evaluate risk exposure and their approach to operating within their risk appetite for AI implementation, including:

  • Internal AI development versus third-party, external AI tools. Organizations need to weigh the advantages and disadvantages of AI tools and use cases developed internally against those purchased from a third party, though both can be helpful depending on the organization’s needs. When considering internally built AI tools, organizations need to weigh the development time and cost against available resources as well as any expertise gaps – including those related to the integration of appropriate security, data accuracy and completeness, and compliance controls. They also should consider whether internal capacity for scalability and regular maintenance exists.

    For AI tools purchased from a third party, organizations should consider the potential for integration issues and the level of dependency on the third party for items such as updates, access, and data considerations, including exposure of intellectual property, inappropriate use of personal data, and limited transparency. Organizations also should address any security and privacy concerns. For example, is the organization able to transfer certain data to a third party if it’s a cloud-hosted tool, or is the organization subject to additional cyber risks if the vendor does not have adequate controls?

    In making this decision, organizations should apply the appropriate risk and control considerations to each situation. For example, if certain personal or sensitive data cannot be shared with any outside vendors, the organization would need to build an internal tool rather than work with a third party.

  • Industry and regulatory environment. Highly regulated sectors, such as healthcare and financial services, tend to have lower risk tolerances, as they must adhere to strict compliance obligations. Companies in these industries should be more cautious when adopting AI technologies without defensible governance controls.
  • Existing technology risk posture. Overall acumen for technology risk plays a role. Organizations that continually transform their business with advanced technology, such as AI, might have a greater risk appetite in the adoption of emerging technologies and use cases.
  • Nature of AI use cases. Specific AI use cases can be an insightful test of risk exposure. If the acceptable use of a technology is determined to be internal facing, for example, the risk exposure might be lower than that of a technology used for an external or customer-facing use case. A company should make decisions about exposure based on its risk appetite. For example, a business using AI for internal-facing use cases, such as meeting summary reports or internal documentation, might be willing to implement fewer and less stringent processes and controls, as the outputs are only used inside the business.

    Conversely, if a business uses AI for external-facing purposes, such as webpage copy or a request for proposal, it might not be willing to expose itself to as much risk because the information will have direct customer impact, so the organization will implement additional governance and controls (one way to tie a use case’s audience to its controls is sketched after this list). An organization can have different risk appetites depending on the risk pillars it prioritizes, and its willingness for risk exposure will vary with each AI use case.
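
One way to operationalize the internal-versus-external distinction above is to tie a use case’s audience to a baseline control set. The sketch below is a hypothetical illustration; the tier names and control lists are example assumptions, not a recommended control catalog:

```python
# Illustrative sketch: baseline controls tiered by a use case's
# audience. Control names below are invented examples.

BASELINE_CONTROLS = {
    "internal": [
        "acceptable-use policy acknowledgment",
        "outputs labeled as AI-generated",
        "periodic spot checks of outputs",
    ],
    "external": [
        "human review before publication",
        "bias and accuracy testing",
        "privacy and legal sign-off",
        "incident response and rollback plan",
    ],
}

def required_controls(facing: str) -> list[str]:
    """Return the baseline control set for a use case's audience."""
    # External-facing use cases inherit the internal baseline as well.
    controls = list(BASELINE_CONTROLS["internal"])
    if facing == "external":
        controls += BASELINE_CONTROLS["external"]
    return controls

print(required_controls("external"))  # stricter set for customer impact
```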

These determinations all serve as a guiding star for how the business should build its AI governance program, structure, guardrails, and oversight mechanisms as well as how AI initiatives will integrate with existing risk management practices such as cybersecurity, privacy, and third-party risk.

Risk appetite isn’t binary; it exists on a spectrum. At one end are highly regulated companies that are seeking ways to implement AI technology while also mitigating risk. In the middle are organizations taking a balanced approach. They might pilot AI initiatives with more focused use cases like an agent to execute a specific task, implement human oversight, and adapt risk management practices as they learn. At the other end are companies pushing forward with broad external AI applications core to their business. An organization’s position on this spectrum shapes how it responds to evolving AI regulations and its overall strategy for protecting investments in AI capabilities long term.

Determining AI risk appetite in the face of changing regulations

As the AI revolution accelerates, companies across industries are grappling with the complex and rapidly evolving regulatory landscape. With a patchwork of emerging laws and guidelines at the federal, state, and international levels, organizations should carefully assess their risk appetite and develop a comprehensive AI governance strategy and program to mitigate potential exposures. Following are a few key areas of focus that can help companies determine the right risk appetite now, while staying prepared for future regulations.

  • Evaluate existing data governance principles, policies, and guidelines, and expand, supplement, or modify the data governance program where needed. Businesses likely already have a structure in place to monitor for risk in other areas, and this structure should be expanded to include AI initiatives.

    Organizations should track regulatory changes, assessment analysis, project scoping, and project management rigor and oversight as they relate to AI, either through the AI governance program (if one exists) or through a designated person or risk management committees (including those at the management and board levels). Proper risk management and governance isn’t just about protection; it allows the organization to experiment with pilot uses and responsibly deploy the successes into production.

  • Evolve corporate compliance, privacy, and cybersecurity functions, which might involve updating existing policies and procedures, conducting risk assessments, and implementing controls specifically tailored to AI applications. Compliance teams should work closely with other stakeholders, such as legal, privacy, and security teams, to ensure a holistic and consistent approach to AI governance. This collaboration is essential, as AI systems often intersect with various areas of regulatory concern, including data privacy, cybersecurity, and ethical considerations.

    Furthermore, compliance professionals should stay abreast of emerging regulatory developments, such as the ECCP and Operation AI Comply, as well as enforcement actions or other signals from guiding agencies, including state attorneys general, which are increasingly evaluating AI use through privacy and cybersecurity lenses. Documentation (or the lack thereof) of the decisions made to implement a reasonably and defensibly designed and executed compliance program can affect the outcome of a judgment or enforcement action.

  • Holistically integrate AI governance into the overarching ERM framework to assess and manage AI risks in a consistent manner and align with the company’s broader risk management programs. Organizations should consider establishing dedicated AI risk management processes, including risk assessments, control frameworks, monitoring mechanisms, metrics such as key performance indicators (KPIs) and key risk indicators (KRIs), model testing and validation, and performance monitoring.

    Organizations should tailor these processes to the specific AI use cases and data sources involved, considering factors such as the potential for bias, privacy and cybersecurity implications, and the impact of AI decisions on individuals or society. (A simple KRI threshold check is sketched after this list.)

  • Adjust and adapt strategies and investments, which involves allocating resources for ongoing monitoring and analysis of emerging regulations, both domestically and internationally. Organizations with a global footprint or customer base should be particularly mindful of the potential impact of international regulations, such as the EU’s Artificial Intelligence Act, as well as pending legislation in Asia and South America, which could have far-reaching implications for multinational companies.

    Additionally, companies should be prepared to invest in updating their AI systems, processes, and governance frameworks as new regulations and guidelines are introduced. Such updates might involve retraining AI models, implementing new controls, or enhancing transparency and explainability measures to meet evolving regulatory requirements.
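
As a minimal illustration of KRI monitoring for an AI use case, the sketch below (with invented metric names, threshold values, and readings) flags when a monitored value breaches its limit:

```python
# Hypothetical sketch of a KRI threshold check. Metric names,
# thresholds, and readings are invented for illustration only.

KRI_THRESHOLDS = {
    "hallucination_rate": 0.02,  # max share of sampled outputs flagged
    "pii_leak_incidents": 0,     # any incident breaches this KRI
    "drift_score": 0.15,         # max acceptable input-distribution drift
}

def breached_kris(readings: dict[str, float]) -> dict[str, float]:
    """Return the KRIs whose current readings exceed their thresholds."""
    return {
        name: value
        for name, value in readings.items()
        if value > KRI_THRESHOLDS[name]
    }

weekly_readings = {
    "hallucination_rate": 0.035,
    "pii_leak_incidents": 0,
    "drift_score": 0.05,
}

for name, value in breached_kris(weekly_readings).items():
    print(f"KRI breach: {name}={value} exceeds {KRI_THRESHOLDS[name]}")
```

Breaches of this kind would feed the escalation and oversight channels described above, such as a designated risk management committee.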

Preparing for today – and the future 

Determining AI risk appetite and risk management strategies in the face of evolving regulations is a difficult task, but it’s imperative that companies start these conversations now so they can have appropriate controls and structures in place and be ahead of the curve when new regulations go into effect.

For companies that are newer to the development, implementation, and use of AI for internally or externally facing operations, it can be helpful to consult with a third party that offers extensive expertise in AI technology; industry-specific regulations; data governance program assessments, development, and implementation; and incorporation of these elements into the existing risk management structure.

Contact our AI governance team

Combining deep risk management expertise with extensive industry experience, our team is here to help you integrate AI governance into your current risk management posture. Contact us today.
Clayton J. Mitchell
Managing Principal, Fintech

David Moncure
Principal, Forensics & Legal Consulting