Guidehouse

Bias Detection Tools

This repository provides tools for detecting and mitigating social bias in machine learning models used in healthcare settings.

Abstract

The use of Artificial Intelligence and Machine Learning in the healthcare industry has become increasingly popular for clinical decision support. However, AI/ML algorithms and models can become inaccurate and unreliable due to changes in underlying data, user behavior, and data capture and management practices. As a result, biases can become embedded in the data and, if not addressed, can lead to social inequities and suboptimal health outcomes for certain populations. To address this problem, our team has developed a Social Bias Mitigation Script and a Predictive Bias Mitigation Script to detect biases and suggest correction techniques. Our scripts are easy to use, compatible with the machine learning library scikit-learn, and designed for common clinical tasks such as prediction, diagnosis, and treatment recommendation. The scripts are applicable in any healthcare setting and inexpensive to obtain, use, and maintain. We have provided guidelines, frameworks, and process charts on how to address social biases at every step of the process, from invoking nationwide initiatives to mitigating bias at the modeling stage. We believe that by applying our tools, whether in a healthcare setting, in medical and clinical research, or in social and medical policy, users can readily detect biases and reevaluate their methods and policies to build a more equitable system that reduces potential harm to disadvantaged groups.
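As an illustration of the kind of check our scripts automate, the sketch below is a minimal, self-contained example (not the repository's actual API; the synthetic dataset, model, and the choice of a binary sensitive attribute are assumptions for illustration) that uses scikit-learn to train a classifier and then compares positive-prediction rates and true-positive rates across two patient groups:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a clinical dataset: X holds patient features,
# y a binary outcome, and `group` a hypothetical binary sensitive
# attribute (e.g., sex) used only for this illustration.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
group = np.random.default_rng(0).integers(0, 2, size=len(y))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Demographic parity difference: gap in positive-prediction rates
# between the two groups. Values far from 0 flag potential social bias.
rate_0 = pred[g_te == 0].mean()
rate_1 = pred[g_te == 1].mean()
print(f"Positive rate (group 0): {rate_0:.3f}")
print(f"Positive rate (group 1): {rate_1:.3f}")
print(f"Demographic parity difference: {rate_0 - rate_1:.3f}")

def true_positive_rate(pred, true):
    """Share of actual positives the model predicts as positive."""
    return pred[true == 1].mean()

# Equal opportunity difference: gap in true-positive rates across groups,
# i.e., whether patients who truly have the outcome are detected equally.
tpr_0 = true_positive_rate(pred[g_te == 0], y_te[g_te == 0])
tpr_1 = true_positive_rate(pred[g_te == 1], y_te[g_te == 1])
print(f"Equal opportunity difference: {tpr_0 - tpr_1:.3f}")
```

Metrics like these are a starting point, not a verdict: a large gap signals that the model, the data, or the capture process deserves scrutiny, and the documentation linked below discusses which correction techniques are appropriate at each stage.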

GitHub Repository

Please use the link provided below to access the GitHub codebase.

Link to the GitHub repository

Supporting Documentation

Please use the link below to access the documentation.

Link to the PDF document

Video Demonstration

Please use the video link below for a detailed walkthrough of how to use the tool.

Link to the video demonstration