AIMultiple Research

What Is AI Bias in Healthcare and How to Avoid It in 2024

As business leaders around the world recognize the benefits of AI, investment in AI solutions continues to rise, and the global market is projected to reach $126 billion by 2025 (see Figure 1). However, as with any other technology, people have concerns about implementing AI, in particular that a model can inherit bias from its training data.

An AI model is considered biased when it generates erroneous or skewed output as a result of insufficient or unrepresentative training data. To learn more about AI bias, check out this comprehensive article. Implementing AI can be expensive, so it is important for business leaders to understand the technology and the concerns associated with it. This is especially true in the healthcare sector.

This article explores AI bias in the healthcare sector to guide healthcare institutions and facilities in their future investments in the technology.

Figure 1. AI software market growth from 2018 to 2025

a graph showing an increasing trend in the AI software market from 2018 to 2025
Source: Statista

What does AI bias mean for healthcare?

AI is improving various areas of healthcare, including surgery, radiology, and pathology. While AI bias may merely cause inefficiencies in an industry such as manufacturing, it can have dangerous consequences in healthcare. For example, a biased result from an AI-enabled computer vision system for radiology can lead to an incorrect diagnosis, posing a serious risk to patients.

How does AI bias happen in healthcare?

The results created by an AI model may appear impartial or objective. However, they are entirely shaped by the input data used to train the model. This means that the people collecting the training data may inadvertently transfer their own biases into the dataset.

Here is a simple example of AI bias in Alzheimer’s diagnosis (See Figure 2):

  1. Gathering training data: The data gathered for training is biased, covering only a limited range of patients.
  2. Application: When the model is applied to the real population, which includes a wider range of patient types, it erroneously classifies patients who were not represented in the training data as unknown.

Figure 2. AI bias in Alzheimer’s diagnosis

A flow chart showing how issues in the training data of an early Alzheimer's diagnosis system can turn into a biased model.
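The two-step failure above can be sketched in code. The following is a minimal, hypothetical illustration in plain Python (the data, thresholds, and labels are invented for this example, not taken from any real diagnostic system): a trivial "model" learns only from patients aged 62 to 74, so anyone outside that range falls back to an "unknown" label, mirroring the flow in Figure 2.

```python
# Toy illustration of bias from narrow training data (hypothetical numbers).
# The "model" learns score averages only from the patients it saw in training;
# patient types absent from the training data are labeled "unknown".

def train(records):
    """Record: (age, cognitive_score, diagnosis). Returns the age range seen
    in training and the average score per diagnosis as a stand-in model."""
    ages = [age for age, _, _ in records]
    by_label = {}
    for _, score, label in records:
        by_label.setdefault(label, []).append(score)
    centroids = {label: sum(s) / len(s) for label, s in by_label.items()}
    return {"age_range": (min(ages), max(ages)), "centroids": centroids}

def predict(model, age, score):
    lo, hi = model["age_range"]
    if not (lo <= age <= hi):  # patient type never seen during training
        return "unknown"
    # nearest-centroid classification on the cognitive score
    return min(model["centroids"],
               key=lambda lbl: abs(model["centroids"][lbl] - score))

# Training cohort: only 62-74 year-olds (a biased, narrow sample).
training = [(62, 20, "healthy"), (70, 9, "alzheimers"),
            (65, 22, "healthy"), (74, 8, "alzheimers")]
model = train(training)

print(predict(model, 68, 10))  # alzheimers - inside the training range
print(predict(model, 52, 10))  # unknown - younger patients were never seen
```

The point of the sketch is that the model's failure is invisible at training time: it performs well on the cohort it was built from and only breaks when it meets the wider real population.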

What are the types of biases in healthcare?

The initial step is to understand the sources of the data and which biases may be present in them. The following types of bias can be found in training data:

  • Racial bias: This kind of AI bias happens when the training data does not cover all racial groups. For example, a recent study identified bias in pulse oximetry data: the sensors did not accurately detect low blood oxygenation in Black patients, which could leave hypoxemia undetected and increase patient risk.
  • Gender bias: Diseases and illnesses can affect women and men differently. If an AI algorithm does not incorporate gender differences, its output can be inaccurate. Therefore, the training data for such algorithms must represent all genders.
  • Socioeconomic (SES) bias: Studies have shown that clinicians' biases related to patients' SES factors can transfer into the training data for an AI model and result in inequalities in the output. For example, studies show that people with low incomes face higher health risks than people with high incomes. If the data is collected only from an expensive private clinic, the AI model will become biased.
  • Bias in linguistics: Some AI models use audio data to diagnose diseases such as Alzheimer's. If these models are not trained on a wide range of accents, their outputs can be biased. For example, an AI algorithm created in Canada was trained on speech samples only from Canadian English speakers, putting English speakers with other accents in the country at a disadvantage.
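A practical first check against the biases listed above is a representation audit of the training data: count how each subgroup is represented and flag any group that falls below a chosen share. The sketch below is a minimal, hypothetical example in plain Python; the field name, groups, and the 10% threshold are illustrative assumptions, not an established standard.

```python
from collections import Counter

def audit_representation(records, field, min_share=0.10):
    """Flag subgroups whose share of the dataset falls below min_share.
    `records` is a list of dicts; `field` is the demographic attribute
    (e.g. race, gender, accent) whose coverage we want to audit."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()
            if n / total < min_share}

# Hypothetical toy dataset of 100 records, heavily skewed toward one group.
data = ([{"race": "white"}] * 90
        + [{"race": "black"}] * 6
        + [{"race": "asian"}] * 4)

print(audit_representation(data, "race"))
# {'black': 0.06, 'asian': 0.04} - the underrepresented groups are flagged
```

The same audit can be run for each sensitive attribute in turn (gender, SES proxy, accent), which is cheap to do before any model training starts.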

How to avoid AI bias?

A completely unbiased AI is not achievable today. However, steps can be taken to improve the quality and range of the training data in order to reduce bias.

The following points can help reduce AI bias:

  • At the development stage, business leaders should assemble a team of AI developers from diverse backgrounds. This broadens the perspectives brought to the development process.
  • At an industrial level, there is a need for regulatory action by government and academic bodies to ensure that AI development in the healthcare sector is unbiased and fair.
  • In the healthcare sector, the availability of high-quality data is a challenge due to its sensitive nature. Work needs to be done to ensure that enough medical data is available for training a robust and fair AI model.
  • A data-centric approach to AI development can help address biases in AI systems.
  • Check our comprehensive list of actions you can take to reduce biases in your AI systems.

Cem Dilmegani
Principal Analyst

Shehmir Javaid
Shehmir Javaid is an industry analyst at AIMultiple. He has a background in logistics and supply chain technology research, and holds an MSc in logistics and operations management and a Bachelor's in international business administration from Cardiff University, UK.
