As the use of artificial intelligence (AI) in healthcare ramps up, stakeholders need to be cognizant that, while not all US legislation has kept pace, existing laws often still apply, a legal expert has said.
AI is being used in healthcare in numerous ways, from performing basic functions such as transcribing patient communications, to evaluating radiologic image reports and assisting in remote patient monitoring.
According to GlobalData analysis, AI in healthcare is forecast to reach a $19bn valuation by 2027.
That valuation considers a wide range of AI applications in the health industry. In the US, even simple AI use cases may be subject to the Health Insurance Portability and Accountability Act (HIPAA) and additional state privacy laws.
Shannon Hartsfield, a partner at law firm Holland & Knight, told Medical Device Network: “For example, if an AI agent is engaged in listening in the exam room and carrying out tasks based on the information recorded, the physician using that tool needs to obtain the patient’s consent if they are practicing in a state like Florida that requires all parties to consent to a recording.
“There are also state laws prohibiting deceptive and unfair trade practices. Users should be made aware that they are interacting with an AI agent and not a real human. While such laws regarding consent to record predate AI, they could still affect how a tool is used,” Hartsfield added.
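To make the compliance point concrete, the sketch below shows how a vendor might gate an AI scribe session on state-specific recording consent and on AI disclosure. Everything here is a hypothetical illustration, not legal advice or any real product’s logic: the state list, `start_scribe_session`, and its parameters are assumptions for the example, and the all-party-consent states shown are commonly cited but should be verified against current state law.

```python
# Hypothetical sketch: gate an AI scribe session on consent and disclosure.
# State list and function names are illustrative assumptions, not legal advice.

# States commonly cited as requiring all-party consent to record
# (illustrative, not exhaustive -- verify against current state law).
TWO_PARTY_CONSENT_STATES = {"FL", "CA", "WA", "PA", "IL", "MD", "MA"}

def start_scribe_session(state: str, patient_consented: bool,
                         ai_disclosure_shown: bool) -> bool:
    """Return True only if recording may begin under these assumptions."""
    # Deceptive/unfair trade practice concern: the patient must know
    # they are interacting with an AI agent, not a human.
    if not ai_disclosure_shown:
        return False
    # All-party consent states: recording requires the patient's consent.
    if state.upper() in TWO_PARTY_CONSENT_STATES and not patient_consented:
        return False
    return True

if __name__ == "__main__":
    # In a state like Florida, recording without consent is blocked.
    assert not start_scribe_session("FL", patient_consented=False,
                                    ai_disclosure_shown=True)
    assert start_scribe_session("FL", patient_consented=True,
                                ai_disclosure_shown=True)
```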
HIPAA prohibits covered entities, such as health plans or certain healthcare providers (HCPs), from using or disclosing protected health information (PHI) for purposes unrelated to treatment, payment, or healthcare operations. In addition, Hartsfield notes that vendors serving covered entities must be aware that they are not allowed to use or disclose PHI for their own purposes.
“For example, using PHI in order for the vendor to develop its own agentic AI tools could potentially run afoul of HIPAA and state privacy laws,” Hartsfield explained. “If PHI is used to improve AI tools, it should be done as needed for the health plan or HCP’s own treatment or health care operations purposes, such as providing quality care.”
To reduce the potential for legal challenges related to AI, Hartsfield advises that vendors pay careful attention to data privacy and security.
“HIPAA requires covered entities to conduct a risk analysis to identify potential threats or hazards to the security of PHI. This likely requires a detailed assessment of how data flows into and out of the AI tool, how the tool will avoid sharing PHI with unauthorised third parties, and how the vendor developing the tool will ensure compliance with its own HIPAA obligations,” Hartsfield said.
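As a rough illustration of the data-flow concern Hartsfield describes, the sketch below shows one way a vendor might gate outbound traffic from an AI tool: an allowlist of authorised recipients plus a crude PHI check before anything leaves the system. The endpoint set, regex patterns, and function names are all hypothetical assumptions for this example; a real HIPAA risk analysis and safeguard set would be far more comprehensive.

```python
import re

# Hypothetical "egress guard" for an AI tool that handles PHI. Endpoints
# and patterns are illustrative assumptions only, not a complete control.

AUTHORISED_ENDPOINTS = {
    "https://ehr.example-hospital.org/api",   # covered entity's own system
    "https://claims.example-payer.com/api",   # payment purposes
}

# Crude detectors for a few PHI-like identifiers (illustrative, not complete).
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # US SSN format
    re.compile(r"\bMRN[:\s]*\d{6,}\b", re.I),  # medical record number
]

def contains_phi(payload: str) -> bool:
    """Return True if the payload matches any known PHI-like pattern."""
    return any(p.search(payload) for p in PHI_PATTERNS)

def send_allowed(endpoint: str, payload: str) -> bool:
    """Permit outbound data to authorised endpoints; block anything that
    looks like PHI bound for an unauthorised third party."""
    if endpoint in AUTHORISED_ENDPOINTS:
        return True
    return not contains_phi(payload)

if __name__ == "__main__":
    # PHI bound for an unknown analytics endpoint is blocked.
    assert not send_allowed("https://analytics.third-party.io",
                            "Patient MRN: 0012345, SSN 123-45-6789")
    # Non-PHI telemetry to the same endpoint passes this (crude) check.
    assert send_allowed("https://analytics.third-party.io", "latency_ms=42")
```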
According to Hartsfield, with AI’s rise in healthcare, there is ongoing tension between the desire to advance AI capabilities quickly and the need to regulate against potential negative effects.
Hartsfield concluded: “It’s challenging to create a balance between protecting the public from potential dangers of AI, while avoiding stifling innovation unnecessarily.”
