The Trevor Project, supported by the Google AI Impact Challenge, released the Crisis Contact Simulator earlier this year, a groundbreaking tool for training mental health counsellors. However, questions remain about how adept AI technology is at recognising context, which is vital to providing adequate and appropriate mental health care.

The Trevor Project, a leading American non-profit organisation (NPO) specialising in suicide prevention among LGBTQ youths, launched its AI-led counsellor training tool earlier this year. Trainees can now take part in sophisticated conversations with the simulator, supplementing their broader training in helping young people in crisis.

The realistic simulator, named Riley, was taught syntax by a machine learning (ML) model using natural language processing (NLP). Riley was then given a social backstory and trained to speak in an emotionally consistent, human-like manner using transcripts from previous role-play training sessions.
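The Trevor Project has not published Riley's implementation details, but the general recipe the article describes, conditioning a language model on a persona and fine-tuning it on past role-play transcripts, can be sketched in a few lines. The sketch below assumes the Hugging Face transformers and datasets libraries, a generic GPT-2 base model and a hypothetical roleplay_transcripts.jsonl file; it is illustrative only, not the project's actual code.

```python
# Illustrative sketch only: Riley's real architecture is not public.
# This shows one common way a persona-conditioned chat simulator could
# be fine-tuned on role-play transcripts. File name and persona text
# are hypothetical.
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          Trainer, TrainingArguments)
from datasets import load_dataset

PERSONA = ("Riley is a teenager in crisis with a fixed social backstory; "
           "replies should stay emotionally consistent with that persona.")

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical JSONL file of past role-play sessions, one
# {"trainee": ..., "riley": ...} exchange per line.
dataset = load_dataset("json", data_files="roleplay_transcripts.jsonl")["train"]

def to_features(example):
    # Prepend the persona so every training example is conditioned on it.
    text = f"{PERSONA}\nTrainee: {example['trainee']}\nRiley: {example['riley']}"
    tokens = tokenizer(text, truncation=True, max_length=512, padding="max_length")
    tokens["labels"] = tokens["input_ids"].copy()  # causal LM target: next token
    return tokens

tokenized = dataset.map(to_features, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="riley-sim", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
)
trainer.train()
```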

The growing mental health crisis

There is a clear need for improvement in the provision of mental health services. The Covid-19 pandemic exacerbated the mental health epidemic in countries such as the UK, where, according to the Office for National Statistics, the number of adults experiencing depression in early 2021 was more than double the pre-pandemic level.

The use of AI in mental health service training is nothing new. Counsellors can use NLP to listen in on their sessions and identify what was discussed, how often they used certain words and how much they spoke during the session. Since 2019, Ieso Digital Health has been using data-led insights to help clinicians make accurate diagnoses. Similarly, Ginger uses NLP to provide insights to clinicians about their conversations with patients.
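The kind of session analysis described above can be illustrated with a minimal sketch: counting how often a counsellor used particular words and what share of the conversation they spoke. The transcript, keyword list and speaker labels below are hypothetical, and commercial tools such as those from Ieso or Ginger use far richer NLP pipelines than this.

```python
# Minimal sketch of post-session conversation analysis: keyword usage
# and the counsellor's share of words spoken. Transcript and keywords
# are hypothetical examples.
from collections import Counter

transcript = [
    ("counsellor", "How have you been sleeping this week?"),
    ("patient", "Not well. I keep waking up worrying about work."),
    ("counsellor", "That sounds exhausting. What kinds of worries come up?"),
    ("patient", "Mostly that I'll be let go if I make a mistake."),
]

keywords = {"sleeping", "worries", "work"}

counsellor_words, patient_words = [], []
for speaker, utterance in transcript:
    words = utterance.lower().replace("?", "").replace(".", "").split()
    (counsellor_words if speaker == "counsellor" else patient_words).extend(words)

keyword_counts = Counter(w for w in counsellor_words if w in keywords)
talk_share = len(counsellor_words) / (len(counsellor_words) + len(patient_words))

print(f"Counsellor keyword usage: {dict(keyword_counts)}")
print(f"Counsellor share of words spoken: {talk_share:.0%}")
```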

AI can make elements of role-play training more efficient and less time-consuming, freeing up time for clinicians to speak to patients. “About 68% of our digital volunteer counsellors do shifts on nights and weekends,” Kendra Gaunt, data and AI product manager at The Trevor Project, told Health IT Analytics. “So now they can be trained on nights and weekends as well.”

AI can also help uncover insights and patterns across patients and trainees that human teams may overlook, and suggest appropriate performance targets. Furthermore, researchers are investigating whether facial expressions and tone of speech can be helpful indicators of mental disorders, and how such signals could be built into these tools.

Removing the human element brings its own risks

However, there are significant risks to removing the human element from counselling. Humans respond to nuances in speech and expression at a more sophisticated level than technology, which calls into question the effectiveness of role-playing with a machine trained on limited data. When dealing with crises, these risks must be mitigated.

In addition, developing safe and reliable AI-led technology in this space requires data from an exceedingly large sample that represents multiple cross-sections of society. This raises more general concerns over the use of AI, such as the invasive nature of the surveillance involved and the logistical difficulty of obtaining enough data. Patient-facing AI must also be built around a standard or ideal way of responding to situations, something that remains subjective and needs to be thoroughly researched and clarified before it is used to programme conversational platforms.

AI-led approaches should thus supplement, not substitute, human-to-human channels of care. When it comes to mental health, it is better to be safe than sorry.