An artificially intelligent (AI) software system that determines which patients get access to high-risk healthcare management programmes routinely lets healthier white people into the programmes ahead of less healthy black people, a new study has found.
The study was carried out by researchers at the University of California, Berkeley, the University of Chicago Booth School of Business and Boston-based non-profit Partners HealthCare.
The researchers obtained the algorithm-predicted risk score for 43,539 white patients and 6,079 black patients enrolled at an academic hospital that uses an AI-based system to determine which patients will be given access to a high-risk care management programme. They then compared the risk score to more direct measures of a patient’s health, such as chronic illnesses. They found that, for a given risk score, black patients had significantly poorer health than white patients.
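The comparison described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the study's actual analysis: the patient records, field names and bucket scheme are invented. Patients are grouped into risk-score buckets, and the average chronic-condition count is compared by race within a bucket.

```python
# Minimal sketch (hypothetical data): bucket patients by algorithm risk
# score, then compare mean chronic-condition counts by race per bucket.
from collections import defaultdict

def mean_conditions_by_race(patients, n_buckets=10):
    """Map (score bucket, race) -> mean number of chronic conditions."""
    sums = defaultdict(lambda: [0, 0])  # (bucket, race) -> [total, count]
    for p in patients:
        bucket = min(int(p["risk_score"] * n_buckets), n_buckets - 1)
        entry = sums[(bucket, p["race"])]
        entry[0] += p["chronic_conditions"]
        entry[1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

# Invented example records, all falling in the same score bucket.
patients = [
    {"race": "white", "risk_score": 0.55, "chronic_conditions": 2},
    {"race": "black", "risk_score": 0.55, "chronic_conditions": 4},
    {"race": "white", "risk_score": 0.58, "chronic_conditions": 3},
    {"race": "black", "risk_score": 0.52, "chronic_conditions": 5},
]

means = mean_conditions_by_race(patients)
# Same score bucket (5), but different average health by race:
print(means[(5, "white")], means[(5, "black")])  # 2.5 4.5
```

A gap like the one in this toy output, appearing at the same predicted risk score, is the signature of bias the researchers reported.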
UC Berkeley associate professor of health policy and management Ziad Obermeyer said: “The algorithms encode racial bias by using health care costs to determine patient ‘risk’, or who was most likely to benefit from care management programs.
“Because of the structural inequalities in our health care system, blacks at a given level of health end up generating lower costs than whites. As a result, black patients were much sicker at a given level of the algorithm’s predicted risk.”
Fixing the bias in the algorithm could more than double the number of black patients automatically admitted to these programmes.
University of Chicago Booth School of Business Roman Family University Professor of Computation and Behavioural Science Sendhil Mullainathan said: “Instead of being trained to find the sickest, in a physiological sense, these algorithms ended up being trained to find the sickest in the sense of those whom we spend the most money on. And there are systemic racial differences in health care in who we spend money on.”
Patients whose risk scores placed them at or above the 97th percentile were automatically selected for enrolment in a care management programme. By correcting the algorithmic disparities between black and white patients, the researchers found the percentage of black people in the automatic enrolee group leapt from 18% to 47%.
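A cut-off rule of this kind can be sketched as follows. This is a hypothetical simplification, assuming only that the top roughly 3% of patients by risk score are auto-enrolled; the function name and data are invented.

```python
# Sketch of threshold-based auto-enrolment (hypothetical rule): select
# the top fraction of patients by risk score, i.e. those at or above
# roughly the 97th percentile when top_fraction is 0.03.
def auto_enrol(scores, top_fraction=0.03):
    """Return indices of the top `top_fraction` of patients by risk score."""
    k = max(1, round(len(scores) * top_fraction))
    # Rank patient indices from highest to lowest score, keep the top k.
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:k])

scores = [0.1, 0.4, 0.2, 0.95, 0.3, 0.97, 0.5, 0.6, 0.05, 0.99]
print(auto_enrol(scores))  # [9]
```

Because enrolment hinges entirely on the score, any bias in the score flows directly into who crosses the threshold.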
By retraining the software on other target variables, such as avoidable costs or the number of chronic conditions needing treatment in a year, the researchers were able to correct much of the bias initially built into the algorithm.
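The effect of swapping the target variable can be illustrated with a toy example. This is not the study's actual model: the patient records, field names and scoring functions below are invented to show how a cost label and a direct health label can rank the same patients differently.

```python
# Hypothetical sketch of the relabelling fix: rank patients on a direct
# health measure (chronic conditions) rather than on predicted spending.

def cost_based_score(patient):
    """Toy 'risk' score that simply tracks observed annual spending."""
    return patient["annual_cost_usd"] / 1000.0

def condition_based_score(patient):
    """Toy score based on a direct health measure instead of cost."""
    return float(patient["chronic_conditions"])

patients = [
    {"id": "A", "chronic_conditions": 5, "annual_cost_usd": 6000},
    {"id": "B", "chronic_conditions": 2, "annual_cost_usd": 9000},
]

# Under the cost label, the less sick but higher-spending patient B ranks
# first; under the condition label, the sicker patient A ranks first.
by_cost = sorted(patients, key=cost_based_score, reverse=True)
by_health = sorted(patients, key=condition_based_score, reverse=True)
print([p["id"] for p in by_cost])    # ['B', 'A']
print([p["id"] for p in by_health])  # ['A', 'B']
```

The relabelling changes who reaches the top of the ranking, which is why it shifts who clears the enrolment threshold.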
Once the manufacturer of the software was alerted to the flaws in its algorithm, the company was “very motivated to address the issue”, Obermeyer said.
Obermeyer said: “Algorithms can do terrible things, or algorithms can do wonderful things. Which one of those things they do is basically up to us.”