
Bias and Fairness in AI Algorithms

Published: January 4, 2023
Editor: Ani Mosinyan
Reviewer: Alek Kotolyan

AI systems often run into problems with bias and fairness. In many industries and applications, machine learning can put certain groups and individuals at a disadvantage. Some of the most common issues involve facial recognition systems that struggle to detect darker-skinned faces and healthcare diagnoses that are less accurate across ethnicities. Keep reading to learn more about bias and fairness in AI algorithms and how to develop AI systems that make decisions leading to fair and equitable outcomes.

What is AI Bias?

AI bias, also called algorithmic bias or machine learning bias, is the tendency of an algorithm to incorporate and reflect human biases. Artificial intelligence models and their predictions are heavily shaped by the data used for training, so many biases are the result of skewed data.


If the data is not high-quality and properly scrubbed, it can include biases. Skewed datasets contain incomplete, inaccurate, or inappropriate training data that leads to poor decisions. The data fed into the AI engine therefore influences the model and can introduce bias at any stage of the machine learning lifecycle.

If the model is then trained on those biases, it can learn to incorporate them into its algorithm and apply them when making predictions or decisions. A fair AI model therefore requires high-quality training sets of accurate, complete, consistent, valid, and uniform data.

Here are some real-life AI bias examples to better understand what AI bias is:

In 2009, Nikon launched Coolpix digital compact cameras equipped with a new blink detection feature. While a picture of a person was being taken, the feature highlighted the face on the screen with a yellow box. After the photo was captured, a warning, “Did someone blink?”, was displayed if someone’s eyes weren’t fully open, so another image could be taken if needed. However, the feature was inaccurate at detecting blinks for Asian consumers, leading many to state that it was biased.

In 2015, Google’s Photos application mistakenly labeled a photo of a Black couple as gorillas. Google’s AI categorization and labeling feature became another example of how AI internalizes biases. Users were, however, able to remove the incorrectly identified photo classification within the application, helping improve its accuracy over time.

Where Does AI Bias Come From?

As mentioned previously, AI models and their predictions are heavily determined by the data used for training. Hence, the largest source of bias in an AI system is the data it was trained on. 

Here are the top six ways bias can be introduced into AI:

Historical Bias

Historical bias is bias that already exists in the world and has seeped into our data. It arises when the data used to train a model no longer reflects current reality. Historical bias tends to show up for groups and individuals that have been historically disadvantaged or excluded.

For example, suppose an AI system in the financial sector is tasked with selecting eligible loan applicants. Based on the data the model was trained on, it could show preferential treatment toward men over women. This bias can result from the model being trained on historical data that included a large income disparity between men and women.

Representation Bias

Representation bias arises from how data scientists define and sample a population to create a dataset. Consider Nikon’s example: the data used to train the facial recognition behind its blink detection feature was based mainly on people with Caucasian features, leading to issues in detecting the facial features of Asian people.

Measurement Bias

Measurement bias occurs when the data collected for training differs from the data collected in the real world. It also occurs when the data collected has inaccurate features or labels due to inconsistent annotation during the data labeling stage of a project.

For example, bias can occur when a medical diagnostic system is trained to forecast the likelihood of illness based on proxy metrics, like the number of doctor visits, instead of actual symptoms.

Evaluation Bias

Evaluation bias arises during model iteration and evaluation. For example, it can appear when the benchmark data used to compare the model with other models performing similar tasks does not represent the population the model will serve.

For example, suppose you want to build a machine learning model to predict voter turnout across the country. You take a series of features like age, profession, income, and political alignment to predict whether a person will vote. However, by evaluating your model only on people in your local area, you have inadvertently designed a system that works only for them, excluding other areas of the country.

Aggregation Bias

Aggregation bias arises when distinct groups or populations are inappropriately combined during model construction, resulting in a model that only performs well for the majority. In other words, one single model is unlikely to suit all groups. Nonetheless, scientists aggregate data to simplify it.

This type of bias most commonly arises in medical applications. For instance, when diagnosing and monitoring diabetes, models have historically used levels of Hemoglobin A1c (HbA1c) to make their predictions. However, these levels differ in complicated ways across ethnicities, and a single model for all populations is bound to exhibit bias. Hence, it is important to make the system sensitive to these ethnic differences by either including ethnicity as a feature in the data or building separate models for different ethnic groups.
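As a rough, illustrative sketch of the “separate models per group” option, the Python snippet below trains one classifier per group and routes new records to the model for their group. The column names (hba1c, age, group, diagnosis) and values are hypothetical, not clinical guidance.

```python
# A minimal sketch of training separate models per group instead of one
# aggregated model. Column names and data are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def fit_per_group_models(df: pd.DataFrame, group_col: str, label_col: str) -> dict:
    """Train one classifier per group to avoid aggregation bias."""
    feature_cols = [c for c in df.columns if c not in (group_col, label_col)]
    models = {}
    for group, subset in df.groupby(group_col):
        models[group] = LogisticRegression().fit(subset[feature_cols], subset[label_col])
    return models

# Hypothetical records: HbA1c level, age, group label, and a diabetes diagnosis.
data = pd.DataFrame({
    "hba1c":     [5.4, 6.9, 7.2, 5.1, 6.5, 7.8, 5.9, 7.0],
    "age":       [44, 61, 58, 39, 52, 67, 48, 59],
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "diagnosis": [0, 1, 1, 0, 0, 1, 0, 1],
})

models = fit_per_group_models(data, group_col="group", label_col="diagnosis")

# At prediction time, each record is routed to the model trained on its own group.
new_patient = pd.DataFrame({"hba1c": [6.8], "age": [55]})
print(models["B"].predict(new_patient)[0])
```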

Another example outside of healthcare involves analyzing salary increases based on how long an employee has been with a company. A model may show a strong correlation: the longer you work in finance, information technology, or education, the more you get paid. However, this may not hold for athletes, who earn high salaries early in their careers while at their physical peak, with pay dropping off once they stop competing. By aggregating athletes with other professions, the AI algorithm may therefore be biased against them.

Bias in Human Review

When deciding whether to accept or disregard a model’s prediction, a human reviewer might override a correct prediction and introduce their own biases. This happens when a person allows their prejudices to influence the evaluation, which can significantly degrade the model’s performance.

How to Mitigate AI Bias?

AI bias

Applying bias mitigation strategies can increase the performance of AI algorithms. That said, bias in AI systems can be addressed at three stages of the modeling workflow:

  1. Pre-processing: data scientists can modify datasets before a model is trained by running tests on the raw data to detect bias. They can then use pre-processing algorithms to mitigate bias, for example by changing the weights applied to the training samples (see the sketch after this list).
  2. During processing: data scientists can modify the learning algorithm during model training by adjusting parameter values to produce optimal results.
  3. Post-processing: at this stage, data scientists can address bias by retraining the model. Retraining is often achieved by introducing new data, rebuilding the model from scratch, or reconfiguring the model parameters.
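Here is a minimal sketch of the pre-processing idea: sample weights are computed so that group membership and the outcome look statistically independent in the training data (a reweighing scheme), and then passed to an ordinary classifier. The column names (gender, income, approved) and values are hypothetical.

```python
# A minimal sketch of pre-processing reweighing: each sample gets the weight
# w(group, label) = P(group) * P(label) / P(group, label), so that group and
# label appear independent during training. Column names and data are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "gender":   [0, 0, 0, 1, 1, 1, 1, 1],   # 0 = unprivileged, 1 = privileged
    "income":   [30, 45, 52, 40, 61, 75, 58, 66],
    "approved": [0, 0, 1, 0, 1, 1, 1, 1],   # historical loan decisions
})

p_group = df["gender"].value_counts(normalize=True)
p_label = df["approved"].value_counts(normalize=True)
p_joint = df.groupby(["gender", "approved"]).size() / len(df)

weights = df.apply(
    lambda r: p_group[r["gender"]] * p_label[r["approved"]]
    / p_joint[(r["gender"], r["approved"])],
    axis=1,
)

# Any estimator that accepts sample_weight can consume the reweighed data.
model = LogisticRegression()
model.fit(df[["income"]], df["approved"], sample_weight=weights)
```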

What is AI Fairness?

Fairness in AI is a growing field that seeks to remove bias and discrimination from algorithms and decision-making models. Machine learning fairness addresses and seeks to eliminate algorithmic bias that arises from sensitive attributes like race and ethnicity, gender, sexual orientation, disability, and socioeconomic class.

When trained mostly on overrepresented groups, a model may produce inaccurate predictions for underrepresented groups. To remedy this, researchers can develop algorithms that impose fairness constraints to reduce performance disparities affecting marginalized groups, without relying on the dataset’s sensitive characteristics. For example, such algorithms can employ calibration techniques to ensure the model produces the same prediction for an input regardless of whether any sensitive attributes are included.
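A simplified way to sanity-check that last property is to train the same classifier with and without the sensitive attribute and measure how often the two versions agree on held-out data. This is only a rough sketch on synthetic data; the feature meanings noted in the comments are assumptions.

```python
# A rough sketch (not a full fairness audit): train the same classifier with and
# without a sensitive attribute and check how much the predictions change.
# All data here is synthetic and the feature meanings are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X_base = rng.normal(size=(n, 3))                  # e.g. income, tenure, debt ratio
sensitive = rng.integers(0, 2, size=n)            # e.g. a protected group indicator
y = (X_base[:, 0] + 0.5 * sensitive + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_with = np.column_stack([X_base, sensitive])

Xw_tr, Xw_te, Xb_tr, Xb_te, y_tr, y_te = train_test_split(
    X_with, X_base, y, random_state=0
)

with_attr = LogisticRegression().fit(Xw_tr, y_tr)
without_attr = LogisticRegression().fit(Xb_tr, y_tr)

agreement = np.mean(with_attr.predict(Xw_te) == without_attr.predict(Xb_te))
print(f"Predictions agree on {agreement:.1%} of held-out cases")
```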

How to Make Machine Learning Fairer?

Here are some ways for data scientists and machine learning experts to improve the fairness of their models:

  • Ensure the use of diverse and high-quality training data in the model.
  • Identify any vulnerabilities in public data sets. Vulnerabilities may result from poor-quality data sets, like misaligned and mislabeled datasets and inconsistent benchmarking.
  • Use less sensitive information during the model training process to avoid privacy issues.
  • Utilize tools that can help prevent and eliminate bias in machine learning, like IBM’s AI Fairness 360, Google’s What-If Tool and Model Card Toolkit, Microsoft’s fairlearn.py, and Deon.

IBM’s AI Fairness 360 is a Python toolkit that provides fairness metrics to help users examine, report, and mitigate discrimination and bias in machine learning models.
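As a minimal sketch, assuming AI Fairness 360 is installed (pip install aif360), the snippet below measures how unevenly favorable outcomes are distributed in a raw dataset before any model is trained. The column names and values are hypothetical.

```python
# A minimal sketch with AI Fairness 360: measure outcome disparities in a dataset
# before training. Column names and values are hypothetical; 1 is the favorable label.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "gender":   [0, 0, 0, 0, 1, 1, 1, 1],   # 0 = unprivileged, 1 = privileged
    "income":   [30.0, 45.0, 52.0, 41.0, 61.0, 75.0, 58.0, 66.0],
    "approved": [0, 0, 1, 0, 1, 1, 0, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"gender": 0}],
    privileged_groups=[{"gender": 1}],
)

# 0 means parity; negative values mean the unprivileged group receives the
# favorable outcome less often than the privileged group.
print(metric.statistical_parity_difference())
print(metric.disparate_impact())
```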

Google’s What-If Tool is a visualization tool that explores a model’s performance on a dataset, assessing it against preset definitions of fairness, like equality of opportunity.

Microsoft’s fairlearn.py is an open-source Python toolkit with an interactive visualization dashboard and unfairness mitigation algorithms to help users analyze trade-offs between fairness and model performance.
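As a minimal sketch, assuming the toolkit is installed as the fairlearn package (pip install fairlearn), the snippet below breaks accuracy and selection rate down by group and summarizes the gap with a single demographic-parity number. The labels, predictions, and group assignments are made-up values.

```python
# A minimal sketch with fairlearn: compare a model's behavior across groups.
# The labels, predictions, and group memberships below are made up.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

# Break accuracy and selection rate down per group to spot performance gaps.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(frame.by_group)

# Single-number summary: the gap in selection rates between groups.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```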

Deon is an ethics checklist that evaluates and reviews applications for potential ethical implications from the early stages of data collection to implementation.

New tools are also being developed to make machine learning fairer. For instance, Facebook is working on an internal tool called Fairness Flow to detect bias in its AI models, while Microsoft formed an internal research group called FATE (Fairness, Accountability, Transparency, and Ethics) to identify algorithmic bias. Google also has its own fairness project, PAIR (People + AI Research), which has been operational for two years.

Final Thoughts

The existence of bias in a machine learning model indicates that the model is unfair. As a result, AI algorithms may discriminate against disadvantaged groups across different industries, including healthcare, e-commerce, cybersecurity, banking, and finance. 

To avoid these pitfalls, data scientists and machine learning experts must examine and correct the algorithmic models to remove potential biases and ensure fairness. Some technical solutions may be implemented to reduce the risk of introducing bias into your AI model, such as testing algorithms and utilizing explainable AI tools that can help users understand and trust the output created by machine learning algorithms.

Try our real-time predictive modeling engine and create your first custom model in five minutes – no coding necessary!



Lory Seraydarian

Staff Writer

Lory Seraydarian is a writer with a background in Journalism. Lory has covered various topics such as politics, healthcare, religion, and arts to fulfill her curious nature. Lory is always up for new adventures that will challenge her and lead her to new discoveries.

