AI regulations and guidance are still in their infancy, and the landscape continues to shift in complex ways. Some regulations provide prescriptive rules and others offer principles-based guidance, but all require organizations to understand the impact of AI, adapt to new developments, and implement effective change management strategies. Proposed legislation is pending at the state and national levels, and sectoral guidance is still taking shape, so it can be difficult for companies to model their conduct after established regulatory frameworks.
It is not just regulators, such as the U.S. Department of Justice (DOJ) and the Federal Trade Commission (FTC), that businesses need to consider. In many cases, standard-setters such as the National Institute of Standards and Technology (NIST), the International Organization for Standardization (ISO), and others offer best practices that also set expectations. Additionally, businesses need to consider how non-AI-focused regulations, such as Section 5 of the Federal Trade Commission Act, “Unfair or Deceptive Acts or Practices,” and cybersecurity and privacy laws, affect AI initiatives.
So how can companies operate within their risk appetite and tolerance with such uncertainty in the regulatory landscape? Our AI governance team offers some suggestions that apply to organizations across industries.
Amid the ambiguity, several notable AI-related regulations and initiatives have emerged and are shaping the current regulatory environment.
As the regulatory landscape evolves, companies must stay vigilant and proactive in addressing the implications of emerging regulations, standards, and guidance for their AI governance frameworks and their strategies for developing and deploying AI use cases.
When implementing AI solutions, it’s critical to weigh the aggregated risk exposure of each use case against the organization’s risk appetite. A company’s overall risk appetite is defined by how much risk the organization will accept or take on to achieve its strategy and objectives. Typically, organizations define their risk appetite for each tier one pillar in their risk taxonomy, such as operational, reputational, legal and regulatory, privacy, and financial risk. For example, some organizations might have a low appetite for reputational risk because their brand is one of their most valuable assets, yet they are willing to take on a moderate level of operational or financial risk.
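To make this concrete, the following is a minimal, hypothetical sketch of how a risk team might encode per-pillar appetite and flag AI use cases whose assessed exposure exceeds it. The pillar names, 1-to-5 scoring scale, and example scores are illustrative assumptions, not a prescribed methodology.

```python
# Hypothetical illustration only: pillar names, scale, and thresholds are assumptions.
# Risk appetite defined per tier one risk pillar (1 = very low tolerance, 5 = very high).
RISK_APPETITE = {
    "operational": 3,
    "reputational": 1,      # brand is a priority, so appetite is low
    "legal_regulatory": 2,
    "privacy": 2,
    "financial": 3,
}

def pillars_exceeding_appetite(use_case_scores: dict) -> list:
    """Return the risk pillars where a use case's assessed exposure exceeds appetite."""
    return [
        pillar
        for pillar, score in use_case_scores.items()
        if score > RISK_APPETITE.get(pillar, 0)
    ]

# Example: an external-facing chatbot as scored by the risk team (illustrative values).
chatbot_scores = {
    "operational": 2,
    "reputational": 4,
    "legal_regulatory": 3,
    "privacy": 3,
    "financial": 1,
}

flagged = pillars_exceeding_appetite(chatbot_scores)
if flagged:
    print(f"Escalate for additional governance and controls: {', '.join(flagged)}")
```

In practice, the scores would come from the organization’s own risk assessment process, and any flagged pillars would feed the governance, guardrail, and oversight decisions discussed below.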
When implementing AI, organizations should consider every dimension of their risk appetite, because AI connects many overlapping risk pillars to broader enterprise risk management (ERM) and compliance programs, such as privacy, cybersecurity, and third-party risk management. Several factors influence how organizations evaluate risk exposure and operate within their risk appetite, including whether an AI tool is built in house or purchased from a third party and whether a use case is internal- or external-facing.
For AI tools purchased from a third party, organizations should consider the potential for integration issues and the level of dependency on the vendor for updates and access, as well as data considerations such as exposure of intellectual property, inappropriate use of personal data, and limited transparency. Organizations also should address any security and privacy concerns. For example, is the organization able to transfer certain data to a third party if the tool is cloud hosted, or is the organization subject to additional cyber risks if the vendor does not have adequate controls?
In making the build versus buy decision, organizations should apply the appropriate risk and control considerations to each situation. For example, if certain personal or sensitive data cannot be shared with any outside vendors, the organization would need to build an internal tool rather than work with a third party.
The intended audience of a use case matters as well. If a business uses AI for external-facing purposes, such as webpage copy or a request for proposal, it might not be willing to expose itself to as much risk because the output will have direct customer impact, leading the organization to implement additional governance and controls. An organization can have different risk appetites depending on the risk pillars it prioritizes, and its willingness to accept risk exposure will vary with each AI use case.
These determinations all serve as a guiding star for how the business should build its AI governance program, structure, guardrails, and oversight mechanisms as well as how AI initiatives will integrate with existing risk management practices such as cybersecurity, privacy, and third-party risk.
Risk appetite isn’t binary; it exists on a spectrum. At one end are highly regulated companies cautiously seeking ways to implement AI technology while mitigating risk. In the middle are organizations taking a balanced approach: they might pilot AI initiatives with narrowly focused use cases, such as an agent that executes a specific task, implement human oversight, and adapt risk management practices as they learn. At the other end are companies pushing forward with broad, external-facing AI applications that are core to their business. An organization’s position on this spectrum shapes how it responds to evolving AI regulations and its overall strategy for protecting its long-term investments in AI capabilities.
As the AI revolution accelerates, companies across industries are grappling with the complex and rapidly evolving regulatory landscape. With a patchwork of emerging laws and guidelines at the federal, state, and international levels, organizations should carefully assess their risk appetite and develop a comprehensive AI governance strategy and program to mitigate potential exposures. Following are a few key areas of focus that can help companies determine the right risk appetite now, while staying prepared for future regulations.
Organizations should track regulatory changes, assessment analyses, project scoping, and project management rigor and oversight as they relate to AI, either through the AI governance program (if one exists) or through a designated person or risk management committees (including those at the management and board levels). Proper risk management and governance isn’t just about protection; it allows the organization to experiment with pilot use cases and responsibly move successes into production.
Furthermore, compliance professionals should stay abreast of emerging regulatory developments, such as the DOJ’s updated Evaluation of Corporate Compliance Programs (ECCP) guidance and the FTC’s Operation AI Comply, as well as enforcement actions and other signals from guiding agencies, including state attorneys general, who are increasingly evaluating AI use through privacy and cybersecurity lenses. Documentation (or the lack thereof) of the decisions made in implementing a reasonably and defensibly designed and executed compliance program can affect how a judgment or enforcement action plays out.
Organizations should tailor these processes to the specific AI use cases and data sources involved, considering factors such as the potential for bias, privacy and cybersecurity implications, and the impact of AI decisions on individuals or society.
Additionally, companies should be prepared to invest in updating their AI systems, processes, and governance frameworks as new regulations and guidelines are introduced. Such updates might involve retraining AI models, implementing new controls, or enhancing transparency and explainability measures to meet evolving regulatory requirements.
Determining AI risk appetite and risk management strategies in the face of evolving regulations is a difficult task, but it’s imperative that companies start these conversations now so they can have appropriate controls and structures in place and be ahead of the curve when new regulations go into effect.
For companies that are newer to the development, implementation, and use of AI for internally or externally facing operations, it can be helpful to consult with a third party that offers extensive expertise in AI technology; industry-specific regulations; data governance program assessments, development, and implementation; and incorporation of these elements into the existing risk management structure.