Google has announced that its artificial intelligence (AI) system, Med-PaLM 2, will be available to a select group of Google Cloud customers for testing and feedback. The system, a large language model (LLM), was developed for the healthcare sector to answer medical questions accurately and safely.

The technology has been evaluated against clinician-backed parameters including medical reasoning, scientific consensus, bias, and likelihood of possible harm. Google reports that Med-PaLM 2 scored impressively on medical exam-style questions, but work continues on how to use the technology to its full potential.

Multiple avenues are being explored to benefit healthcare workers, researchers, and patients. Granting clients limited access to the system is an important step on its route to widespread implementation.

The potential for AI is evident – GlobalData predicts the global market for specialised AI applications will be worth $146 billion by 2030. AI in the healthcare sector is a key driver of the market. With advantages such as freeing up clinician time, shortening patient waits, and reducing overall costs, AI technologies are already being widely employed in health facilities.

However, as AI becomes an ever more frequent fixture in healthcare systems, with providers looking to ease the pressures exerted by ageing populations, the security of digital patient interactions and data storage is gaining importance.

In March 2023, the U.S. Food and Drug Administration (FDA) issued guidance on cybersecurity requirements for medical device submissions and digital healthcare systems, in light of new US law. Google states that all technologies it builds follow its AI Principles guidelines; testing and feedback from clients will show the company how efficiently, and how safely, the AI platform is working.