How AI Is Helping to Detect Suicide Risk in LGBTQ Youth, Veterans

'LGBTQ youth in crisis deserve the best care that can be provided to them.'

This is part of a series of stories to bring awareness to the issue of suicide, in honor of National Suicide Prevention Month. If you or someone you know needs support, contact the 24-hour National Suicide Prevention Lifeline at 1-800-273-8255.

Artificial intelligence can suggest songs you might like, fly drones and power Twitter trolls.

But it’s also expanding in the medical world, including suicide prevention services. Earlier this year, The Trevor Project, a nonprofit that supports at-risk LGBTQ youth all over the country, won a $1.5 million AI Impact grant from Google to expand its suicide prevention services.

With suicide the 10th leading cause of death in the United States, claiming the lives of more than 47,000 people in 2017, according to the CDC, it’s no wonder organizations are looking to get ahead of the problem. Artificial intelligence can help.

“We know there are over 1.8 million LGBTQ youth in this country who seriously consider suicide every year,” said Sam Dorison, chief of staff at The Trevor Project. “We are not serving nearly 1.8 million of them.”

Helping the most vulnerable

One of the big questions the organization had to answer to win the grant was how it would change the world with the money.

While The Trevor Project is still in the early stages of figuring out exactly how it will do that, Dorison envisions that the organization’s artificial intelligence tool will scan messages sent through its live chat and text platforms for signs that the person on the other end is at risk of suicide.

“Are there things embedded in those messages that if our counselors were aware of them could allow them to serve them better?” he asked. “Are there certain resources that are likely going to be more useful to them?”

Sometimes, he said, people who contact the organization don’t say outright that they’re having suicidal thoughts. But the way they talk about their situation can reveal they’re at risk. Artificial intelligence can help counselors read between the lines.
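To make the idea concrete, here is a minimal sketch of the kind of counselor-assist text flagging described above. The Trevor Project has not published how its tool will work, so the example phrases, labels, threshold and model choice below are illustrative assumptions, not the organization’s actual system.

```python
# Hypothetical sketch: flagging chat messages that may signal elevated risk.
# The phrases, labels and pipeline are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: messages paired with a counselor-assigned risk label.
messages = [
    "I don't see the point in anything anymore",
    "My parents kicked me out and I have nowhere to go",
    "I had a rough day at school but I'm okay",
    "Can you help me find an LGBTQ support group near me?",
]
labels = [1, 1, 0, 0]  # 1 = flag for counselor attention, 0 = routine

# TF-IDF features feeding a simple logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score an incoming message; the result is surfaced to a human counselor,
# never acted on automatically.
incoming = "lately I just feel like a burden to everyone"
risk_probability = model.predict_proba([incoming])[0][1]
if risk_probability > 0.5:
    print(f"Flag for counselor review (score={risk_probability:.2f})")
```

The point of a sketch like this is only to surface a score alongside the conversation; as Dorison notes below, the decision always stays with a person.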

But this AI tool won’t replace the Trevor Project counselors who talk to youth every day. It’s not a chatbot, Dorison said. It will help those counselors connect youth to the best resources for their situation.

“No decision should be made purely on an AI algorithm,” he said. “It’s in no way replacing what we do.”


Connecting thousands with support

The Trevor Project isn’t the only organization aiming to use AI for suicide prevention. The Department of Veterans Affairs uses it in its REACH VET program, launched in 2016. The following year, Facebook started using AI to predict suicide risk. The Crisis Text Line began using AI in 2018.

As more organizations look to implement suicide prevention tools powered by artificial intelligence, it’s not as simple as duplicating another company’s model. They all need to work differently.

The Trevor Project will have to wait for people to reach out before it can use its AI tool. The VA, on the other hand, already has a wealth of information on veterans in the form of medical records.

Through the REACH VET program, AI searches medical records and develops a suicide risk score for each veteran based on 60 factors, including previous suicide attempts, medications, chronic pain, location and age. Every month, an updated risk group is generated and that information is passed on to the veterans’ health care providers.
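As a rough illustration of what a monthly risk pass over structured records could look like, here is a small sketch. The VA’s actual REACH VET model, its roughly 60 predictors and their weights are not detailed here, so the features, coefficients and cutoff below are made-up placeholders.

```python
# Illustrative sketch of a REACH VET-style monthly risk pass.
# Features, weights and the tier cutoff are placeholder assumptions.
import math

# Hypothetical per-veteran records drawn from the medical record.
veterans = [
    {"id": "A", "prior_attempt": 1, "chronic_pain": 1, "age": 28, "meds_flag": 1},
    {"id": "B", "prior_attempt": 0, "chronic_pain": 1, "age": 63, "meds_flag": 0},
    {"id": "C", "prior_attempt": 0, "chronic_pain": 0, "age": 45, "meds_flag": 1},
]

# Placeholder coefficients standing in for a fitted statistical model.
WEIGHTS = {"prior_attempt": 2.0, "chronic_pain": 0.6, "meds_flag": 0.8}
INTERCEPT = -3.0

def risk_score(veteran):
    """Logistic score from a weighted sum of record-based factors."""
    z = INTERCEPT + sum(WEIGHTS[k] * veteran[k] for k in WEIGHTS)
    z += 0.01 * (40 - veteran["age"])  # toy age adjustment
    return 1 / (1 + math.exp(-z))

# Each month, rank everyone and pass the highest-scoring tier to providers.
ranked = sorted(veterans, key=risk_score, reverse=True)
top_tier = ranked[: max(1, len(ranked) // 10)]  # cutoff chosen arbitrarily here
for v in top_tier:
    print(f"Notify provider for veteran {v['id']} (score={risk_score(v):.2f})")
```

The output of such a pass is a list handed to clinicians, not an automatic intervention, which matches how Eagan describes the program below.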

“There’s lots of ways to do predictive risk, but you have to do something with (the information),” said Aaron Eagan, deputy director for innovation and program development at the Veterans Health Administration Office of Mental Health and Suicide Prevention.

This system isn’t meant to stand alone either. It’s a way to alert a provider who perhaps didn’t realize a patient was at risk, Eagan said.

“Where it really helps is identifying those we don’t think of as actively suicidal but we know now statistically they’re at an increased risk,” he said.


A warm reception

“There was concern early on that veterans wouldn’t like us calling them and talking to them as being identified by some model,” Eagan said. Especially about something as sensitive as suicide.

But that wasn’t the case.

REACH VET connects with 30,000 veterans a year and, for the most part, Eagan said, they’ve been really receptive. He said that’s partly because the program is built around getting the information to a provider the veteran already has a relationship with.

The VA has also partnered with CompanionMX to create a mental health app for veterans.

The app collects data on calls, texts and location. It also asks veterans to record a weekly audio diary about how they’re feeling.

But it’s not the content of the voice message that the app is interested in. It’s monitoring vocal energy, pitch and harmonics.

“We’re not measuring what people are saying but how they’re saying it,” CompanionMX Chief Medical Officer Carl Marci said.
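For readers curious what “how they’re saying it” can mean in practice, here is a brief sketch of extracting energy and pitch statistics from a recorded diary entry with the open-source librosa library. CompanionMX’s actual processing is proprietary; the file name, feature set and any interpretation below are assumptions for illustration only.

```python
# Illustrative sketch: acoustic features about how something is said,
# not what is said. Features and file name are assumptions.
import numpy as np
import librosa

def vocal_features(path):
    """Extract simple energy and pitch statistics from an audio diary entry."""
    y, sr = librosa.load(path, sr=16000)           # mono audio at 16 kHz
    rms = librosa.feature.rms(y=y)[0]              # frame-level vocal energy
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)  # frame-level pitch estimate (Hz)
    voiced = f0[np.isfinite(f0)]
    return {
        "mean_energy": float(np.mean(rms)),
        "energy_variability": float(np.std(rms)),
        "mean_pitch_hz": float(np.mean(voiced)),
        "pitch_range_hz": float(np.ptp(voiced)),
    }

# A week-over-week drop in energy or pitch variability is the kind of signal
# that could be surfaced to a veteran and their provider.
features = vocal_features("weekly_diary.wav")
print(features)
```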

Once the information is gathered, it goes to health care providers and veterans alike. The goal is to keep veterans informed on their own mental health.

Marci said this has led to more fruitful and meaningful conversations between veterans and their doctors.

“One of the core features of depression is that patients are not always aware of how depressed they are,” he said.

Cautionary tales?

AI tools can help people. But, in the case of suicide prevention, real lives are on the line. It’s important that organizations are thoughtful and thorough when developing this technology, said John Torous, chair of the American Psychiatric Association’s Committee on Mental Health Information Technology.

Facebook turned to AI prevention tools after live-streamed suicides became a problem on the platform. Now Facebook’s AI monitors what people post and their friends’ reactions. If necessary, Facebook calls the police.

But one of the main problems with this, Torous said, is that Facebook sends police to people’s houses without verifying that there’s a suicide risk.

Facebook hasn’t released any information about how accurate these AI detections have been. Mark Zuckerberg wrote in 2018 that Facebook’s suicide prevention AI has helped 3,500 people globally. However, reports obtained by The New York Times showed “Facebook’s approach has had mixed results.”

And where all this information about suicide risk is going has Mason Marks, a health law scholar, worried.

“Facebook says it never shares suicide prediction data with advertisers or data brokers,” Marks wrote for The Washington Post. “Still, the public must take Facebook’s word for it at a time when trust in the company is waning.”

Mental health technology projects are often happening without a greater discussion of ethics and best practices, Torous said.

“What happens if we go too fast?” he said. “Do we end up causing more harm? I guess there’s just a lot of unknowns.”

For now, organizations are trying to figure it out on their own. The VA and CompanionMX are both HIPAA compliant, and The Trevor Project treats confidentiality as a top priority. But nobody is quite sure where Facebook stands.

Still, it’s not a reason to shy away from technology, Torous said. He just cautions people and businesses to take it slow.

The Trevor Project hopes to roll out its AI tool by the end of the year, but it won’t until everything is up to standard, Dorison said.

“LGBTQ youth in crisis deserve the best care that can be provided to them,” Dorison said. “What we do here will hopefully apply to other vulnerable populations in health care more broadly.”

Heather Adams

Heather Adams is a freelance reporter based in Los Angeles. She often reports on religion, foster care and disability rights. Follow her on Facebook, Twitter and Instagram for more on these topics, plus photos of her two dogs.