The US Food and Drug Administration (FDA) is seeking industry comment on practical approaches to measuring and evaluating the performance of AI-enabled medical devices in the real world.
With a feedback window open until 1 December, the agency is particularly interested in public comments outlining strategies to detect, assess, and mitigate performance changes over time to ensure that medical devices with an AI component on the US market remain safe and effective throughout their lifecycle.
Many AI-enabled medical devices marketed in the US are primarily evaluated through retrospective testing or static benchmarking. The FDA noted that while these approaches can help in establishing a baseline understanding of a given device’s performance, such measures are not designed to predict behaviour in dynamic, real-world environments.
The FDA added that ongoing, systematic performance monitoring is increasingly recognised as relevant to maintaining safe and effective AI use by observing how systems actually behave during clinical deployment.
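To make the distinction between static benchmarking and ongoing monitoring concrete, a minimal monitoring layer might compare a device's live accuracy against the baseline established in pre-market testing and flag a potential performance drift when the gap grows too large. The Python sketch below is purely illustrative; the class name, window size, and tolerance are assumptions for demonstration, not anything the FDA prescribes.

```python
from collections import deque


class PerformanceMonitor:
    """Illustrative sketch of post-deployment performance monitoring.

    Compares a rolling window of real-world outcomes against the
    accuracy established during retrospective (pre-market) testing
    and flags potential drift when the gap exceeds a tolerance.
    All names and thresholds here are assumptions, not an
    FDA-specified method.
    """

    def __init__(self, baseline_accuracy: float,
                 window: int = 500, tolerance: float = 0.05):
        self.baseline = baseline_accuracy      # from static benchmarking
        self.outcomes = deque(maxlen=window)   # 1 = correct, 0 = incorrect
        self.tolerance = tolerance

    def record(self, prediction, ground_truth) -> None:
        """Log one real-world case once ground truth becomes available."""
        self.outcomes.append(1 if prediction == ground_truth else 0)

    def check_drift(self) -> bool:
        """Return True if live accuracy falls below baseline minus tolerance."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough real-world data yet
        live_accuracy = sum(self.outcomes) / len(self.outcomes)
        return live_accuracy < self.baseline - self.tolerance
```

In practice a manufacturer would feed such a monitor from confirmed clinical outcomes and route any drift flag into its quality system for investigation; the point of the sketch is simply that detection requires continuous real-world data rather than a one-off validation study.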
As of this month, the agency has approved 141 AI-enabled medical devices in 2025, bringing the total number of approved devices to 1,250.
The FDA has already implemented several initiatives, including Predetermined Change Control Plans (PCCPs), which streamline the regulatory process for medical device manufacturers. Under a PCCP, the agency pre-authorises certain post-approval changes to AI algorithms, such as retraining or updates, so that manufacturers can make them without filing a new submission.
The FDA also applies its Total Product Life Cycle (TPLC) approach, a policy that encourages agency staff to develop a longitudinal, integrated, broader, and deeper view of device safety, effectiveness, and quality.
Despite these policies being in place, the call for further public feedback suggests the FDA is keen to keep evolving its approach so that AI medical devices remain fit for purpose across real-world applications.
Europe’s take on AI medical device regulation
While it is unclear at this time whether the FDA will move to shore up protections for AI medical devices before they reach the market, the EU has recently passed measures to assess conformity pre-market. Passed in August 2024, the EU AI Act's requirements for AI systems deemed "high risk" come into effect in 2026.
Under the regulation, high-risk AI systems must undergo conformity assessment by independent third parties and meet stringent data governance, transparency, and risk management standards.
In a paper published in Nature Medicine in February, a research team comprising authors from the Else Kröner Fresenius Center (EKFZ) for Digital Health and the Nuffield Department of Surgical Sciences at the University of Oxford proposed that integrating transparent, mandatory feedback collection mechanisms directly into the user interfaces of AI-based digital health tools (DHTs) could be a viable way to track the safety and suitability of AI devices over time.
According to the paper, such measures could significantly improve user experience, strengthen patient safety through the early identification of problems, reduce the administrative burden of monitoring these tools, and heighten public confidence in AI-based devices.
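As a rough illustration of what an in-app feedback mechanism could look like in software, the sketch below captures structured feedback from users and raises a review flag when negative reports cluster. Every field name, rating scale, and threshold is an assumption chosen for illustration, not the design proposed by the paper's authors.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class FeedbackEntry:
    """One in-app report from a clinician or patient.

    The schema is an illustrative assumption, not the one
    proposed in the Nature Medicine paper.
    """
    device_output_id: str   # which AI result the feedback concerns
    rating: int             # e.g. 1 (unsafe/wrong) to 5 (helpful)
    comment: str = ""
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


class FeedbackLog:
    """Collects structured feedback and surfaces a simple safety signal."""

    def __init__(self, alert_threshold: float = 0.2):
        self.entries: list[FeedbackEntry] = []
        # Share of low ratings that triggers a manual review.
        self.alert_threshold = alert_threshold

    def submit(self, entry: FeedbackEntry) -> None:
        self.entries.append(entry)

    def needs_review(self, last_n: int = 100) -> bool:
        """True when low ratings (<= 2) in the most recent feedback
        exceed the threshold, prompting manual investigation."""
        recent = self.entries[-last_n:]
        if not recent:
            return False
        low = sum(1 for e in recent if e.rating <= 2)
        return low / len(recent) > self.alert_threshold
```

The appeal of this kind of design, as the paper argues, is that the monitoring signal is generated as a by-product of normal clinical use, rather than through a separate and administratively costly surveillance exercise.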