A student team at USC Viterbi School of Engineering in the US has developed an artificial intelligence (AI)-based tool to detect the early onset of Alzheimer’s disease.

The tool's machine learning algorithms analyse an individual's speech patterns to detect the onset of the disease.

USC Viterbi School of Engineering computer science undergraduate students Leena Mathur and Nisha Chatwani led the research.

The team undertook machine learning research to analyse speech patterns, as well as word choice, in ways that could help automated systems detect the disease.

According to the Alzheimer’s Association, around six million people in the US have Alzheimer’s disease, and it is said to be the sixth-leading cause of death in the country.

Usually, doctors conduct tests such as the Cookie Theft picture test to assess memory loss and other cognitive abilities.

In this test, patients are shown a picture and asked to describe what they see. Doctors then analyse the patient's speech patterns to determine whether they show signs of Alzheimer’s disease.

However, this process of detection can be expensive and take months to finish.

Furthermore, around 58% of the 44 million people suffering from Alzheimer’s across the world live in less developed countries, where such testing methods are not easily accessible.

Mathur said: “We were inspired to start this project because we found the problem of dementia diagnosis compelling, specifically the development of low-cost, non-invasive and scalable systems that can do this effectively.”

With the new low-cost, AI-based tool, the student team has automated this diagnostic process through speech-pattern analysis.

The team collected a dataset of audio clips and transcripts from 293 patients describing a stimulus image. The dataset came from a National Institutes of Health study conducted at the University of Pittsburgh.

The team integrated this dataset into the machine learning model.
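
The article does not describe the model itself. As a rough, hypothetical sketch, a baseline for this kind of task could be a standard classifier trained on per-recording feature vectors, for example with scikit-learn; the file name and feature columns below are illustrative, not drawn from the study.

```python
# Hypothetical baseline: a classifier over per-recording feature vectors.
# The CSV file and its columns are illustrative, not the USC team's data.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# One row per recording: linguistic/acoustic features plus a diagnosis label.
data = pd.read_csv("features.csv")        # hypothetical feature table
X = data.drop(columns=["diagnosis"])      # e.g. preposition rate, pause length, ...
y = data["diagnosis"]                     # 1 = Alzheimer's, 0 = control

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Mean cross-validated AUC: {scores.mean():.2f}")
```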

The tool analysed the speech patterns in the clips, drawing on cues from both the verbal and audio modalities to inform its diagnosis of Alzheimer’s disease.

Mathur continued: “For example, our feature extraction captures aspects of verbal structure that psychologists have linked to analytic thinking, such as the structure and use of prepositions and conjunctions.

“People with Alzheimer’s dementia, while responding to the stimulus photo, leveraged language that was significantly less indicative of analytic thinking. In addition, participants with Alzheimer’s tended to use the past tense significantly more than the control group, which informed our models.”
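
The team's exact feature extractor is not published here. The sketch below shows one way such verbal-structure cues (prepositions, conjunctions and past-tense verbs) could be counted from a transcript, using spaCy as an assumed tool rather than the team's own pipeline.

```python
# Sketch of counting verbal-structure cues in a transcript with spaCy.
# Illustrative only; not the feature extraction described by the USC team.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def verbal_structure_features(transcript: str) -> dict:
    doc = nlp(transcript)
    n_tokens = max(len(doc), 1)
    prepositions = sum(1 for t in doc if t.pos_ == "ADP")
    conjunctions = sum(1 for t in doc if t.pos_ in ("CCONJ", "SCONJ"))
    past_tense = sum(1 for t in doc if t.tag_ == "VBD")  # simple past verbs
    return {
        "preposition_rate": prepositions / n_tokens,
        "conjunction_rate": conjunctions / n_tokens,
        "past_tense_rate": past_tense / n_tokens,
    }

print(verbal_structure_features(
    "The boy reached for the cookie jar while the sink overflowed."))
```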

The team now plans to explore multimodal methods that integrate and sync information drawn from both modalities for a better diagnosis of the disease.
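
As an illustration of what such a multimodal approach might involve, the hypothetical sketch below performs simple early fusion, concatenating summary acoustic features (MFCC statistics computed with librosa) with a vector of linguistic features before classification. This is an assumption about one possible design, not the team's planned method.

```python
# Hypothetical early-fusion sketch: combine audio and text features per recording.
import numpy as np
import librosa

def acoustic_features(wav_path: str) -> np.ndarray:
    # Mean and standard deviation of 13 MFCCs over the whole clip.
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def fuse(wav_path: str, text_features: np.ndarray) -> np.ndarray:
    # Early fusion: stack acoustic and linguistic features into one vector,
    # which can then be fed to a classifier like the one sketched earlier.
    return np.concatenate([acoustic_features(wav_path), text_features])
```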
