Fighting AI Bias

Challenges and Strategies

Clayton J. Mitchell, Corey Minard
5/22/2025

To make the most of AI, companies need to find ways to mitigate its inherent biases. Our team offers tested strategies and real-world examples.

AI has the potential to transform industries and improve decision-making processes, but it also carries the risk of silently perpetuating and amplifying biases. Bias refers to systematic errors in AI systems that result in unfair outcomes for certain groups, often along lines of race, gender, age, geography, or other characteristics.

Businesses using AI need to recognize these risks early to avoid harm, build trust, and comply with evolving regulatory expectations, including New York City’s Local Law 144 and the Colorado Artificial Intelligence Act. Additionally, several states now require insurers to conduct bias testing and model assessments, and the California Privacy Rights Act requires companies that use models and algorithms to explain to consumers how they work and to confirm they aren’t causing harm. Addressing AI bias begins with understanding its nuances.

Breaking down the nuances of AI bias

Most discussions of AI bias focus narrowly on statistical disparities across protected attributes. However, bias in AI is multifaceted:

  • Contextual bias. A model might perform well overall but underperform in specific environments (for example, rural versus urban data) and introduce geographic-based harm. Models trained on limited data can fail to generalize well across diverse environments.
  • Labeling bias. Training labels often reflect human assumptions that can reinforce historical inequities. For example, fraud or risk labels assigned by people might perpetuate unfair stereotypes.
  • Proxy variables. While models might not explicitly use protected variables such as race, they can rely on proxy variables that indirectly lead to discrimination. Factors like ZIP code or purchase behavior can serve as proxies for protected attributes (a screening sketch follows this list).
  • Bias in data versus decisions. It’s crucial to distinguish data bias (unbalanced training datasets) from decision bias (how model outputs are used). A fair model can still yield unfair outcomes if deployed within a biased process or system.
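One practical way to surface proxy variables is to measure how strongly each candidate feature is associated with a protected attribute before training. Below is a minimal sketch using pandas and SciPy; the column names, the choice of Cramér’s V as the association statistic, and the 0.3 screening threshold are illustrative assumptions, not prescribed values.

```python
# Minimal sketch: screen candidate features for proxy relationships with a
# protected attribute using Cramér's V (0 = no association, 1 = perfect).
# Column names and the threshold are illustrative assumptions.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Association strength between two categorical series."""
    table = pd.crosstab(x, y)
    if min(table.shape) < 2:
        return 0.0  # one variable is constant; no association to measure
    chi2 = chi2_contingency(table)[0]
    n = table.to_numpy().sum()
    return float(np.sqrt((chi2 / n) / (min(table.shape) - 1)))

def screen_proxies(df: pd.DataFrame, protected: str, candidates: list[str],
                   threshold: float = 0.3) -> dict[str, float]:
    """Return candidate features whose association with the protected
    attribute meets or exceeds the policy-defined threshold."""
    scores = {c: cramers_v(df[c], df[protected]) for c in candidates}
    return {c: v for c, v in scores.items() if v >= threshold}

# Usage (hypothetical columns):
# flagged = screen_proxies(df, protected="race",
#                          candidates=["zip_code", "purchase_category"])
```

A flagged feature isn’t automatically disqualifying; it signals the need for a documented business justification and downstream disparate-impact testing.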

Recognizing these nuances is key to developing a comprehensive understanding of AI bias and its potential impacts across different contexts. But what does this look like in practice?

  • Hiring and resume screening tools. Models trained on historical hiring data have unfairly penalized candidates from nontraditional educational backgrounds, candidates with employment gaps, and candidates whose resumes contain gender-associated terms.
  • Credit scoring and lending models. Despite excluding protected variables such as race and gender, bias persists through indirect variables that act as income proxies, such as education levels, mobile phone providers, and geographic data, all of which can disproportionately affect minority applicants. Seemingly innocuous data can serve as proxies for protected attributes and lead to indirect discrimination.
  • Healthcare algorithms. AI-driven risk scoring models have been found to underpredict care needs for patients of certain races and ethnicities by relying on historical spending as a flawed proxy for health status. This bias can perpetuate existing disparities and exacerbate inequities in access to care.
  • Facial recognition systems. These systems deployed in surveillance and identification contexts have demonstrated higher error rates for people of color and women, which raises concerns about real-world harm and civil liberties violations.

The following chart breaks down challenges, solutions, and actions organizations can take, along with the National Institute of Standards and Technology AI Risk Management Framework (NIST AI RMF) and International Organization for Standardization (ISO) standards that apply when evaluating models.

Challenges of – and solutions to – AI bias

Challenge: No standard definition of fairness across domains
Solution: Define fairness objectives.
Action: Engage stakeholders to define and document desired outcomes. Common objectives include equal opportunity, demographic parity, and calibrated fairness. Aligning on these up front supports a consistent understanding of fairness across the organization, and documenting them allows for testing.
Standard reference: NIST AI RMF: Govern (5.1); ISO 42001: Policy & Risk Criteria (6.1.2)

Challenge: Lack of representative training data
Solution: Assess and document data bias.
Action: Perform a thorough analysis of training and test data to identify potential sources of bias. Evaluate representation across groups, label integrity, and the presence of proxy variables. Document findings in a centralized repository for transparency.
Standard reference: ISO 42001: Annex A.8.2 (Input data quality)

Challenge: Difficulty assessing third-party and vendor-provided models
Solution: Test model outputs.
Action: Use fairness metrics to assess model performance and identify disparate outcomes and potential discrimination. One common metric is the disparate impact ratio, which evaluates whether an AI model disproportionately harms protected groups by comparing the rate of favorable outcomes between protected and nonprotected individuals. Another is false positive parity, which evaluates whether a model’s false positive rate remains the same across demographic groups. (Both metrics are illustrated in the sketch following this chart.)
Standard reference: NIST AI RMF: Measure (5.3); ISO 42001: A.9.3 – Monitoring & Validation

Challenge: Lack of action items in the face of challenges
Solution: Implement mitigation strategies.
Action: Based on testing results, apply appropriate bias mitigation techniques, such as reweighting training data, which shifts the importance of samples in a dataset based on identified attributes; adversarial debiasing, which trains the model so that its predictions cannot be used to infer protected attributes; and postprocessing adjustments, which modify the model’s predictions after it has been trained. (A reweighting sketch follows the fairness-metric example below.)
Standard reference: NIST AI RMF: Manage (5.4); ISO 42001: Annex B.9

Challenge: Limited organizational understanding of bias types
Solution: Support explainability.
Action: Evaluate transparency to support interpretation and understanding of model behavior.
Standard reference: NIST AI RMF: Explainable and Interpretable (3.5); ISO 42001: Annex B.11 – Interpretability

Challenge: Inconsistent documentation across the AI life cycle
Solution: Establish governance.
Action: Build review checkpoints and define clear responsibilities for fairness oversight throughout the AI life cycle. Establish and document expectations and processes for continuous monitoring, auditing, and course correction as needed.
Standard reference: NIST AI RMF: Govern (5.1); ISO 42001: Clause 5.3 – Roles & Authorities
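To make the “Test model outputs” row concrete, here is a minimal sketch of the two metrics named above. It assumes NumPy arrays of binary labels, predictions, and group membership; the array names and the four-fifths (0.8) screening threshold mentioned in the comments are illustrative conventions, not legal standards.

```python
# Minimal sketch of the two fairness metrics named in the chart above.
# A disparate impact ratio near 1.0 suggests parity; values below ~0.8
# (the informal "four-fifths rule") often trigger closer review.
import numpy as np

def disparate_impact_ratio(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Favorable-outcome rate of the protected group (group == 1)
    divided by that of the nonprotected group (group == 0)."""
    return y_pred[group == 1].mean() / y_pred[group == 0].mean()

def false_positive_rates(y_true: np.ndarray, y_pred: np.ndarray,
                         group: np.ndarray) -> dict:
    """False positive rate per group; parity means these values are close."""
    rates = {}
    for g in np.unique(group):
        negatives = (group == g) & (y_true == 0)   # actual negatives in group g
        rates[g] = y_pred[negatives].mean()        # share wrongly flagged positive
    return rates

# Toy example: favorable outcome = 1, protected group = 1.
y_true = np.array([0, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 1, 0])
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(disparate_impact_ratio(y_pred, group))       # ~0.67 -> below 0.8, review
print(false_positive_rates(y_true, y_pred, group))
```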

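And to make the “Implement mitigation strategies” row concrete, here is a minimal sketch of one listed technique, reweighting training data, using the classic reweighing idea: weight each sample so that group membership and label are statistically independent in the weighted dataset. Names are illustrative; production-grade implementations are available in open-source toolkits such as AIF360.

```python
# Minimal sketch of reweighting training data: weight each sample by
# w(g, y) = P(g) * P(y) / P(g, y) so that group and label are independent
# in the weighted sample. Array names are illustrative.
import numpy as np

def reweighing_weights(y: np.ndarray, group: np.ndarray) -> np.ndarray:
    """Per-sample weights that balance label rates across groups."""
    weights = np.ones(len(y), dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            cell = (group == g) & (y == label)
            p_joint = cell.mean()                  # P(g, y)
            if p_joint > 0:
                weights[cell] = (group == g).mean() * (y == label).mean() / p_joint
    return weights

# The weights plug into most training APIs that accept sample weights:
# model.fit(X, y, sample_weight=reweighing_weights(y, group))
```

In practice, the weights feed into retraining, and the fairness metrics above are rerun on the reweighted model to confirm that the disparity actually narrowed.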

AI bias testing: Prevention versus detection

Bias mitigation should be a core component of any organization’s AI governance policy and control environment – not an afterthought. At Crowe, it is built into our AI governance framework, and we help clients implement both preventive and detective strategies.

  • Preventive approach
    • Create standardized model documentation artifacts, such as AI model cards, which provide a comprehensive overview of each model, including its fairness goals, known limitations, and bias mitigation strategies (a simple example follows this list).
    • Review and understand AI risk assessments aligned with industry standards, such as NIST AI RMF and ISO 42001, to classify risk levels and regulatory exposure for each model.
  • Detective approach
    • Establish robust vendor assurance review practices to assess and test third-party AI models, especially closed-box systems.
    • Perform model testing for fairness and bias across various use cases to identify potential biases and provide actionable recommendations for mitigation.
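As a starting point, a model card can be as simple as a structured record kept alongside the model. The sketch below uses a Python dataclass serialized to JSON; the field names and example values are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of an AI model card as a structured, versionable artifact.
# Field names and values are illustrative, not a prescribed schema.
import json
from dataclasses import asdict, dataclass, field

@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    fairness_goals: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    bias_mitigations: list = field(default_factory=list)

card = ModelCard(
    name="consumer-credit-screening",          # hypothetical model
    version="2.1.0",
    intended_use="Initial screening of consumer credit applications",
    fairness_goals=["Disparate impact ratio between 0.8 and 1.25"],
    known_limitations=["Sparse training data for thin-file applicants"],
    bias_mitigations=["Reweighted training data", "Quarterly output testing"],
)
print(json.dumps(asdict(card), indent=2))      # store with the model artifacts
```

Keeping the card in version control alongside the model makes fairness goals and known limitations auditable over time.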

Client successes in mitigating AI bias

Crowe specialists’ practical experience has underscored the importance of operational AI governance:

  • We helped a national lender evaluate its underwriting models for fairness, identifying proxy variables that could lead to disparate impacts and strengthening its compliance with fair lending laws.
  • We worked with a financial institution to audit fraud detection models, uncovering performance disparities across demographic segments and recommending corrective rebalancing strategies.

AI bias is evolving just as quickly as AI itself, and addressing it isn’t a one-time fix. It’s a continual effort. Ongoing assessment, transparent documentation, and strong governance are essential to ensure accountability and reduce potential harm. By taking a proactive, preventive approach, organizations can stay ahead of emerging challenges and position themselves for long-term success.

Wondering how to identify – and mitigate – AI bias in the tools your company uses? Contact our team to see how we can put our extensive, industry-specific experience to work for your business.


Clayton J. Mitchell
Managing Principal, Fintech

Corey Minard
Senior Manager, Consulting