A report published by the Center for Applied Artificial Intelligence at the University of Chicago Booth School of Business has found that algorithms used to inform healthcare delivery and planning across the US are reinforcing racial and economic biases.

The Algorithmic Bias Playbook details how biased algorithms are influencing how patients are treated by hospitals, insurers and other businesses.

The playbook sets out a plan of action healthcare organisations can take: create an inventory of their algorithms, screen them for bias, adjust or scrap them if the bias cannot be fixed, and set up structures to prevent future bias.

Berkeley School of Public Health associate professor Ziad Obermeyer, who co-authored the report, told STAT: “These algorithms are in very widespread use and affecting decisions for millions and millions of people, and nobody is catching it.”

The researchers found that bias was common both in simple clinical calculators and checklists and in more complex algorithms informed by artificial intelligence (AI).

The report flags biases in algorithms that: determine the severity of knee osteoarthritis; measure mobility; predict the onset of serious illness; and identify which patients may fail to attend appointments or may benefit from additional outreach to manage their health.

The researchers also found that the Emergency Severity Index, which groups patients based on the urgency of their medical needs in emergency departments, performed poorly in assessing black patients. It is used in about 80% of US hospitals.

Work started on the Algorithmic Bias Playbook after a high-profile 2019 study found that a prominent AI-driven algorithm determining which patients get access to high-risk healthcare management programmes routinely prioritised healthier white people over less healthy black people.

This was because the algorithm factored historic healthcare costs into its final decision. Because of structural inequalities in the US healthcare system, black people at a given level of health tend to generate lower costs than white people at equivalent levels of health. This meant black patients were much sicker than white ones for a given level of predicted risk.
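The mechanism can be sketched in a few lines of code. This is an illustrative toy, not the audited algorithm: the patients, illness scores and cost figures below are all hypothetical, chosen only to show how ranking by historical cost disadvantages a group whose costs understate its health needs.

```python
# Hypothetical patients: (group, true_illness_burden, historical_cost).
# For group "B", cost trails illness burden, mirroring the unequal
# access to care described in the 2019 study.
patients = [
    ("A", 3, 3000), ("A", 5, 5000), ("A", 7, 7000),
    ("B", 4, 2400), ("B", 6, 3600), ("B", 8, 4800),
]

# A cost-based "risk score": past spending stands in for health need.
def cost_risk(patient):
    return patient[2]

# Enrol the top half of patients ranked by the cost proxy.
ranked = sorted(patients, key=cost_risk, reverse=True)
enrolled = ranked[: len(ranked) // 2]

print([(group, burden) for group, burden, _ in enrolled])
# → [('A', 7), ('A', 5), ('B', 8)]
```

A group-A patient with illness burden 5 is enrolled while a group-B patient with burden 6 is not: at any cut-off on the cost proxy, the B patients who make it through are sicker than the A patients beside them, which is exactly the disparity the researchers observed. Ranking on a direct measure of health need instead of cost removes the effect in this toy.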

By correcting for the algorithmic disparities between black and white patients, the percentage of black people enrolled in the programmes leapt from 18% to 47%.