Experts have touted the important role AI can play in shining a light on women's health challenges, as research warns of the biases exhibited by the algorithms that underpin the technology.
Speakers on a panel about addressing gender inequalities in healthcare said that the data fed into AI-driven analytic systems remains a key concern in avoiding gender bias in healthcare, but that the biases AI systems reveal about their underlying data can also prove beneficial.
Dr Antonella Santuccione Chadha, CEO of Swiss non-profit the Women’s Brain Foundation, highlighted a recent paper that shed light on the underlying biases within AI-driven systems. Chadha was speaking on a panel session at the Medica convention, taking place in Düsseldorf, Germany, from 17 to 20 November.
The paper demonstrated that Apple’s voice assistant, Siri, which relies on automatic speech recognition (ASR), displayed gender bias and limitations in offering guidance around health concerns specific to women. For example, when asked for guidance on menstrual pain, Siri, drawing on the data fed into its underlying deep learning models, was unable to offer any meaningful guidance.
However, in a sense, Chadha views the current deficiencies with AI as beneficial. “It's thanks to AI that we start to become more aware of the bias. Without AI, I don’t believe we would be as aware of it as we are today,” she explained during the panel.
AI is central to digitisation efforts across industries and continues to advance in the healthcare space. GlobalData analysis forecasts that the market for AI in healthcare will reach a valuation of $19bn by 2027.
Ambient AI tools for transcribing patient consultations, such as Microsoft’s recently launched Dragon Copilot, and AI triaging systems are all intended to aid practitioners and optimise workflows.
However, research has highlighted that biases in the algorithms driving AI systems can reinforce discriminatory conclusions surrounding women’s health and lead to flawed care decisions being made.
For example, recent research by the London School of Economics’ (LSE) Care Policy & Evaluation Centre (CPEC) found that Google’s ‘Gemma’ AI model downplayed women’s physical and mental health issues compared with men’s when used to generate case note summaries for social workers.
Chadha added that knowing what biases exist in data analysis allows evaluations to be made of which gender-specific healthcare issues require more focus and sensitivity.
Also on the panel, Professor Helen Crimlisk, associate medical director for innovation, research and development at the UK’s Sheffield Health and Social Care NHS Foundation Trust, highlighted that the data collected and synthesised with AI tools determines what biases may, even inadvertently, be brought into clinical practice.
However, Crimlisk acknowledged that collecting diversity data, from gender to other metrics such as ethnicity, can be a challenge, and that deciding what data to include and how to classify it for beneficial application in healthcare requires close consideration.
She said: “People can feel uncomfortable about disclosing their diversity data.
“While we want to have the maximum amount of data in that respect, we have to acknowledge that, for pragmatic reasons, we probably have to limit exactly how much data we collect in a systematised way.”


