AI has the potential to transform industries and improve decision-making processes, but it also carries the risk of silently perpetuating and amplifying biases. Bias refers to systematic errors in AI systems that result in unfair outcomes for certain groups, often along lines of race, gender, age, geography, or other characteristics.
Businesses using AI need to recognize these risks early to avoid harm, build trust, and comply with evolving regulatory expectations, including New York City’s Local Law 144 and the Colorado Artificial Intelligence Act. Additionally, several states now require insurers to conduct bias testing and model assessments, and the California Privacy Rights Act requires companies that use models and algorithms to explain to consumers how they work and to confirm they aren’t causing harm. Addressing AI bias begins with understanding its nuances.
Most discussions of AI bias focus narrowly on statistical disparities across protected attributes. However, bias in AI is multifaceted: it can enter through unrepresentative training data, through proxy variables that stand in for protected attributes, through modeling choices, and through the way outputs are interpreted and acted on.
Recognizing these nuances is key to developing a comprehensive understanding of AI bias and its potential impacts across different contexts. But what does this look like in practice?
The following breakdown pairs each challenge with a solution, the actions organizations can take, and the National Institute of Standards and Technology AI Risk Management Framework (NIST AI RMF) and International Organization for Standardization (ISO) standards that should be met when evaluating models.
**Challenge:** No standard definition of fairness across domains

**Solution:** Define fairness objectives.

**Action:** Engage stakeholders to define and document desired outcomes. Common objectives include equal opportunity, demographic parity, and calibrated fairness. Aligning on these up front supports a consistent understanding of fairness across the organization, and documenting the objectives makes them testable.

**Standard reference:** NIST AI RMF: Govern (5.1); ISO 42001: Policy & Risk Criteria (6.1.2)
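Because these objectives measure different things, one practical way to document them is as testable functions. Below is a minimal sketch, assuming binary labels, binary predictions, and a single binary protected attribute (all names and values are illustrative), showing that demographic parity and equal opportunity can disagree on the same predictions, which is why aligning on an objective up front matters.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true positive rates (recall) between groups."""
    tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr_0 - tpr_1)

# Illustrative labels, predictions, and group membership for six cases
y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 0, 1, 1])
group  = np.array([0, 0, 0, 1, 1, 1])

print(demographic_parity_gap(y_pred, group))         # ~0.33: positive rates differ
print(equal_opportunity_gap(y_true, y_pred, group))  # 0.0: recall is equal
```

Here the model satisfies equal opportunity but not demographic parity, so which objective the organization documented determines whether this model passes its fairness test.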
**Challenge:** Lack of representative training data

**Solution:** Assess and document data bias.

**Action:** Perform a thorough analysis of training and test data to identify potential sources of bias. Evaluate representation across groups, label integrity, and the presence of proxy variables. Document findings in a centralized repository for transparency.

**Standard reference:** ISO 42001: Annex A.8.2 (Input data quality)
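As a starting point for this kind of analysis, the sketch below uses pandas with hypothetical column names and toy values to check the three items named in the action: group representation, label skew, and a candidate proxy variable.

```python
import pandas as pd

# Hypothetical training data; "protected_group" and the feature names are
# placeholders for whatever attributes apply in a given domain.
df = pd.DataFrame({
    "protected_group": ["A", "A", "A", "A", "B", "B"],
    "zip_code_income": [72, 68, 75, 70, 41, 39],
    "label":           [1, 1, 0, 1, 0, 0],
})

# 1. Representation: is each group present in reasonable proportion?
print(df["protected_group"].value_counts(normalize=True))

# 2. Label integrity: do favorable labels skew heavily toward one group?
print(df.groupby("protected_group")["label"].mean())

# 3. Proxy variables: does a seemingly neutral feature track group membership?
group_indicator = (df["protected_group"] == "A").astype(int)
print(df["zip_code_income"].corr(group_indicator))  # near +/-1 suggests a proxy
```

Findings such as a near-perfect correlation between a feature and group membership are exactly what belongs in the centralized documentation repository.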
**Challenge:** Difficulty assessing third-party and vendor-provided models

**Solution:** Test model outputs.

**Action:** Use fairness metrics to assess model performance and help identify disparate outcomes and potential discrimination. One common metric is the disparate impact ratio, which evaluates whether an AI model disproportionately harms protected groups by comparing the rate of favorable outcomes for protected individuals with the rate for nonprotected individuals. Another is false positive parity, which evaluates whether a model’s false positive rate stays the same across demographic groups.

**Standard reference:** NIST AI RMF: Measure (5.3); ISO 42001: A.9.3 (Monitoring & Validation)
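Both metrics can be computed from outputs alone, which is why they work even for opaque vendor models. Below is a minimal sketch, assuming binary outcomes and a single binary protected-class indicator (variable names and data are illustrative); the 0.8 comment reflects the commonly cited four-fifths rule for disparate impact.

```python
import numpy as np

def disparate_impact_ratio(y_pred, protected):
    """Favorable-outcome rate for the protected group divided by the rate
    for the nonprotected group; values below ~0.8 are commonly treated
    as a red flag (the four-fifths rule)."""
    return y_pred[protected == 1].mean() / y_pred[protected == 0].mean()

def false_positive_rates(y_true, y_pred, protected):
    """False positive rate per group; parity means these values match."""
    return {
        g: y_pred[(protected == g) & (y_true == 0)].mean()  # FPs / actual negatives
        for g in (0, 1)
    }

y_true    = np.array([0, 0, 1, 1, 0, 0, 1, 1])
y_pred    = np.array([1, 1, 1, 0, 0, 0, 1, 0])
protected = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(disparate_impact_ratio(y_pred, protected))        # ~0.33, well below 0.8
print(false_positive_rates(y_true, y_pred, protected))  # {0: 1.0, 1: 0.0}
```

In this toy data the protected group receives favorable outcomes at a third of the reference group’s rate, and false positives fall entirely on one group, so both metrics would flag the model for review.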
**Challenge:** Lack of action items in the face of challenges

**Solution:** Implement mitigation strategies.

**Action:** Based on testing results, implement appropriate bias mitigation techniques, such as reweighting training data, which shifts the importance of samples in a dataset based on identified attributes; adversarial debiasing, which trains the model so that its predictions carry no recoverable signal about protected attributes; and postprocessing adjustments, which change the model’s predictions after it has been trained.

**Standard reference:** NIST AI RMF: Manage (5.4); ISO 42001: Annex B.9
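Of the three techniques, reweighting is the most self-contained to illustrate. The sketch below implements the common reweighing scheme of expected over observed frequency for each group-label pair (in the style of Kamiran and Calders); the resulting weights can be passed to any estimator that accepts a sample_weight argument, as scikit-learn estimators do. The data is illustrative.

```python
import numpy as np

def reweighting_weights(group, label):
    """Weight each sample by expected / observed frequency of its
    (group, label) pair so that favorable labels are statistically
    independent of group membership in the weighted data."""
    weights = np.ones(len(group), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            mask = (group == g) & (label == y)
            if mask.any():
                expected = (group == g).mean() * (label == y).mean()
                weights[mask] = expected / mask.mean()
    return weights

group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
label = np.array([1, 1, 1, 0, 1, 0, 0, 0])

w = reweighting_weights(group, label)
print(np.round(w, 2))  # [0.67 0.67 0.67 2.   2.   0.67 0.67 0.67]
# Underrepresented pairs (e.g., group 1 with the favorable label) are
# upweighted; pass these to training, e.g., model.fit(X, y, sample_weight=w)
```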
**Challenge:** Limited organizational understanding of bias types

**Solution:** Support explainability.

**Action:** Evaluate model transparency and apply explainability techniques so that stakeholders can interpret and understand model behavior.

**Standard reference:** NIST AI RMF: Explainable and Interpretable (3.5); ISO 42001: Annex B.11 (Interpretability)
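One lightweight way to put this into practice is to check which features actually drive a model’s predictions. The sketch below uses scikit-learn’s permutation importance on a placeholder dataset and model; in a real review, a protected attribute, or a suspected proxy for one, ranking near the top would warrant investigation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder dataset standing in for a real scoring model's features
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```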
**Challenge:** Inconsistent documentation across the AI life cycle

**Solution:** Establish governance.

**Action:** Build review checkpoints and define clear responsibilities for fairness oversight throughout the AI life cycle. Establish and document expectations and processes for continuous monitoring, auditing, and course correction as needed.

**Standard reference:** NIST AI RMF: Govern (5.1); ISO 42001: Clause 5.3 (Roles & Authorities)
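Checkpoints are easier to enforce when they are recorded as structured data rather than prose. Below is an illustrative sketch of such a record; the stages, roles, metrics, and dates are all placeholders, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FairnessCheckpoint:
    """Illustrative record of one fairness review gate in the AI life cycle."""
    stage: str            # e.g., "data collection", "pre-deployment", "monitoring"
    owner: str            # role accountable for sign-off
    metrics: list[str]    # fairness metrics reviewed at this gate
    review_due: date
    findings: str = ""    # documented outcome; empty until the review runs

# Placeholder checkpoints; a real registry would cover every life cycle stage
checkpoints = [
    FairnessCheckpoint("pre-deployment", "model risk officer",
                       ["disparate impact ratio", "false positive parity"],
                       date(2026, 1, 15)),
    FairnessCheckpoint("monitoring", "ML platform lead",
                       ["drift in group-level outcome rates"],
                       date(2026, 4, 15)),
]
```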
Bias mitigation should be a core component of any organization’s AI governance policy and control environment, not an afterthought. At Crowe, bias mitigation is embedded in our AI governance framework, and we help clients implement both preventive and detective strategies. Our specialists’ practical experience has underscored the importance of operational AI governance.
AI bias is evolving just as quickly as AI itself, and addressing it isn’t a one-time fix. It’s a continual effort. Ongoing assessment, transparent documentation, and strong governance are essential to ensure accountability and reduce potential harm. By taking a proactive, preventive approach, organizations can stay ahead of emerging challenges and position themselves for long-term success.
Wondering how to identify – and mitigate – AI bias in the tools your company uses? Contact our team to see how we can put our extensive, industry-specific experience to work for your business.