
The Trevor Project’s mental health bot should be used with caution

By GlobalData Thematic Research 16 Sep 2021 (Last Updated September 16th, 2021 14:28)

The Crisis Contact Simulator tool released by The Trevor Project could help revolutionise mental health care, but it should be used carefully.

The Trevor Project, supported by the Google AI Impact Challenge, released the Crisis Contact Simulator earlier this year, a groundbreaking tool for training mental health counsellors. However, questions remain about how adept AI technology is at recognising context, which is vital to providing adequate and appropriate mental health care.

The Trevor Project, a leading American non-profit organisation (NPO) specialising in suicide prevention among LGBTQ youths, launched its AI-led counsellor training tool earlier this year. Trainees can now take part in sophisticated conversations with the simulator, supplementing their broader training in helping young people in crisis.

The simulator, named Riley, was taught syntax by a machine learning (ML) model using natural language processing (NLP). Riley was then given a social backstory and, using transcripts from previous role-play training activities, trained to speak in an emotional register consistent with a human.

The growing mental health crisis

There is a clear need for improvement in the provision of mental health services. The Covid-19 pandemic exacerbated the mental health epidemic in countries such as the UK, where, according to the Office for National Statistics, the number of adults experiencing depression in early 2021 was more than double the pre-pandemic level.

The use of AI in mental health service training is nothing new. Counsellors can use NLP to listen in on their sessions and identify what was discussed, how often they used certain words and how much they spoke during the session. Since 2019, Ieso Digital Health has been using data-led insights to help clinicians make accurate diagnoses. Similarly, Ginger uses NLP to provide insights to clinicians about their conversations with patients.
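The kind of transcript analysis described above, surfacing which terms came up, how often, and how much each party spoke, can be sketched in a few lines of Python. This is an illustrative example only, not any vendor's actual tooling; the `session_stats` helper and its inputs are invented for this sketch:

```python
from collections import Counter

def session_stats(transcript):
    """Summarise a counselling session transcript.

    transcript: list of (speaker, utterance) tuples.
    Returns per-speaker word counts, overall term frequencies,
    and each speaker's share of the total words spoken.
    """
    words_by_speaker = Counter()
    term_freq = Counter()
    for speaker, utterance in transcript:
        tokens = utterance.lower().split()
        words_by_speaker[speaker] += len(tokens)
        term_freq.update(tokens)
    total = sum(words_by_speaker.values())
    talk_share = {s: n / total for s, n in words_by_speaker.items()}
    return words_by_speaker, term_freq, talk_share

# Example: a short mock exchange.
session = [
    ("counsellor", "how are you feeling today"),
    ("client", "i have been feeling anxious all week"),
    ("counsellor", "tell me more about the anxiety"),
]
words, terms, share = session_stats(session)
```

Production systems layer far more sophisticated NLP on top (topic modelling, sentiment analysis), but even simple counts like these can show a clinician how much of a session they spent talking versus listening.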

AI can make elements of role-play training more efficient and less time-consuming, freeing up time for clinicians to speak to patients. “About 68% of our digital volunteer counsellors do shifts on night and weekends,” Kendra Gaunt, data and AI product manager at The Trevor Project, told Health IT Analytics. “So now they can be trained on nights and weekends as well.”

AI can also help uncover insights and patterns across patients and trainees that human teams may overlook and suggest appropriate performance targets. Furthermore, researchers are investigating whether facial expressions and tone of speech can be helpful indicators of mental disorders and how this can be deployed in technology.

Removing the human element brings its own risks

However, there are significant risks to removing the human element from counselling. Humans respond to nuances in speech and expression at a more sophisticated level than technology can, which calls into question the effectiveness of role-playing with a machine trained on limited data. When dealing with crises, these risks must be mitigated.

In addition, developing safe and reliable AI-led technology in this space requires data from an exceedingly large sample that represents multiple cross-sections of society. This raises broader concerns over the use of AI, such as the invasive nature of the surveillance involved and the logistical difficulty of obtaining enough data. Patient-facing AI must also draw on a standard or ideal way of responding to situations, something that is currently subjective and should be thoroughly researched and defined before being used to programme conversational platforms.

AI-led approaches should thus supplement, not substitute, human-to-human channels of care. When it comes to mental health, it is better to be safe than sorry.
