Machine Learning & Mental Health: How an AI-based app is changing the way we ask about suicide


This interview is part of HORAN Campus Health’s series “Moving Mental Health Forward,” a collection of stories that spotlight leaders in the mental health arena and examine how innovation and new approaches are changing the way we perceive and treat mental health conditions.

Mental health conditions are tricky to diagnose. Not everyone wants to admit they’re feeling anxious or depressed or suicidal. Stigma makes talking about these conditions tough and, as a result, people in crisis slip through the cracks. 

Clarigent Health, a health tech startup based in Mason, Ohio, is working to change that. Their latest software, Clairity, is an AI-based tool designed to analyze a person’s speech and identify whether they are at risk of serious mental health conditions, including suicidal ideation. The application can be used in a number of clinical and non-clinical settings, with a simple verbal assessment that takes just five minutes to complete.

This is a personal mission for Clarigent Health co-founder and CEO Don Wright, whose son died by suicide in 2017. Here, Don talks to HORAN about this breakthrough software, the need for new ways to address mental health conditions, and how technology can improve outcomes.


Your software is built on technology researched and developed over decades by a team at Cincinnati Children’s Hospital. Can you tell us more about the foundational research and how it evolved?

The primary inventor is Dr. John Pestian. His research is very well known in the suicide prevention world. He had a strong interest in natural language processing [the field of study around teaching computers to understand and interpret language the way humans can], so he started looking at suicide notes and built an algorithm that could determine whether a suicide note was legitimate. He then had some professional writers write suicide notes, and he tested physicians, graduate students studying behavioral health, and the algorithm to see who would do better at figuring out which notes were legitimate and which were made up. And the software won. Through natural language processing, the AI was able to discern whether something was a real suicide note.
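
For readers curious about the mechanics, the sketch below shows the general shape of that kind of experiment: a natural language processing model trained to label documents as genuine or simulated. It is only an illustration under assumptions; the tiny placeholder corpus, the TF-IDF features, and the logistic regression model are stand-ins, not the actual system Dr. Pestian’s team built.

```python
# A minimal sketch of a note-classification experiment, assuming a
# bag-of-words approach. This is not Clarigent's or Dr. Pestian's actual
# model; the tiny corpus and labels below are placeholders for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical training data: each text is paired with a label
# (1 = genuine note, 0 = note written by a volunteer for the study).
texts = [
    "placeholder text standing in for a genuine note",
    "another placeholder standing in for a genuine note",
    "placeholder text standing in for a simulated note",
    "another placeholder standing in for a simulated note",
]
labels = [1, 1, 0, 0]

# TF-IDF turns each document into word-frequency features;
# logistic regression learns which features separate the two classes.
model = Pipeline([
    ("features", TfidfVectorizer(ngram_range=(1, 2))),
    ("classifier", LogisticRegression()),
])
model.fit(texts, labels)

# Probability that a new, unseen document belongs to the "genuine" class.
print(model.predict_proba(["a new placeholder document to score"])[0][1])
```

The comparison Don describes, in which the algorithm, physicians, and graduate students all judged the same notes, is how a model like this is measured against human raters.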


The technology behind your software is complex, but the concept is simple: practitioners ask their patients a series of questions and record their responses, which are run through an algorithm that looks for signs of anxiety, depression or suicidal ideation. It does this in part by identifying suicidal “thought markers.” What is the AI looking for?

There are 4,000 things it’s looking for, like how people are constructing sentences, or whether certain words appear a given number of times or within three words of another word. The easier ones to explain are things like personal pronoun use, which changes as somebody becomes more suicidal. You stop saying “I” and say “we” or “you.” People also use end words and final words more, which seems obvious, but it’s sometimes very hard to detect unless you’re really looking for it. What’s interesting is that even though we’ve known some of this from research 15 years ago, we didn’t tell our AI to look for that. The AI found it.
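
To make the idea of these “thought markers” concrete, here is a small sketch of how a couple of the features Don mentions, personal-pronoun frequency and one word appearing within three words of another, could be computed from a transcript. The specific word pair, feature names, and implementation are illustrative assumptions, not part of Clarigent’s actual 4,000-feature set.

```python
# An illustrative sketch of "thought marker"-style linguistic features:
# pronoun usage and word co-occurrence within a small window. The feature
# names and example word pair are assumptions, not Clarigent's feature set.
import re

def extract_features(transcript: str) -> dict:
    words = re.findall(r"[a-z']+", transcript.lower())
    total = max(len(words), 1)

    # Share of first-person singular pronouns among all words.
    first_person = sum(w in {"i", "me", "my", "mine"} for w in words)

    # Does word_a appear within `window` words of word_b anywhere in the text?
    def near(word_a: str, word_b: str, window: int = 3) -> bool:
        pos_a = [i for i, w in enumerate(words) if w == word_a]
        pos_b = [i for i, w in enumerate(words) if w == word_b]
        return any(abs(i - j) <= window for i in pos_a for j in pos_b)

    return {
        "first_person_ratio": first_person / total,
        "word_count": total,
        "never_near_sleep": near("never", "sleep"),  # illustrative word pair
    }

print(extract_features("I never really sleep anymore and my days all feel the same."))
```

A production system would feed thousands of such features into a trained model rather than reading them off by hand, which is why Don describes the AI surfacing patterns no one told it to look for.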


AI is at the forefront of the innovation conversation and the tone tends to straddle excitement and apprehension. As you promote this technology, are you ever met with uncertainty about AI’s use in addressing mental health?

We have seen a few professionals who ask, how could a computer know better than me? And our answer is: look at the statistics. Research shows that providers can detect whether someone is acutely suicidal only 50 percent of the time. Our trials have shown our software can tell 70 to 90 percent of the time. It’s a tool to help that professional; it’s not meant to replace them. If you were trying to do all the things the machine is doing in your head while you were talking to someone, you wouldn’t hear what they’re saying. This allows practitioners to spend quality time with their patients while the system is listening.


This is an innovative way for mental health providers to screen their patients for mental health risk. Can it also be used outside that setting?

What’s interesting is we now know at least two-thirds of people who die by suicide have been in a doctor’s office of some sort within weeks of when they die, and almost half within days. What if there were a technology that could have helped the doctor figure out that their patient actually needed to see somebody right away? The software is built so the person interpreting the results doesn’t have to be there. And there’s an automated version where an avatar pops up and asks the questions. Anyone can be taught to use it: primary care doctors, eye doctors, dentists, school nurses, school counselors.


Historically, practitioners have relied on written questionnaires to screen for anxiety, depression and suicidal ideation. Why is AI more reliable?

For one, it’s an easier conversation. Even though mental health professionals are trained to talk about these concerns, it can sound confrontational. If I say I think you’re suicidal after you’ve already said you aren’t, I’m also saying I think you’re lying. It’s also harder to trick. There are all kinds of reasons people lie on those screenings. If you say you’re having suicidal thoughts, they might admit you [to a hospital].


The questions you use are different, too. Current screening methods ask things like “Over the past month, have you wished you could go to sleep and not wake up?” and “Over the last two weeks, how often have you been feeling down, depressed or hopeless?” Yours are more open-ended. Why is this important?

If someone asks you, how many times in the last two weeks have you thought about harming yourself, do you even know how to answer that question? For kids, this is especially difficult. Go ask any kid on the street how often they feel hopeless, which is one of the standard questions on the paper scales. I don’t even know how to answer that question! You have to be pretty insightful about yourself to answer those questions properly.


The numbers regarding mental health in America are alarming: 50 million Americans suffer from depression and related conditions, 500 million prescriptions are written for these conditions every year, and more than 45,000 people die by suicide every year. Clearly, we need a new approach. Does a need as pressing as this one spark innovation?

It absolutely does. For a very long time, there was a lot of stigma around mental health conditions. The world didn’t understand mental health issues. They’re difficult to diagnose and difficult to treat. For a long time, you didn’t have mental health options in health care plans provided by employers. It was easier to go to the chiropractor than to see a psychiatrist and get it paid for. Today, only 10 percent of psychiatrists still do talk therapy. It’s mostly medication management.


That explains those 500 million prescriptions.

And the other thing about those 500 million prescriptions is that 75 percent of antidepressants are prescribed by non-behavioral health professionals. We did everything wrong forever, but it’s getting better. And money drives innovation. A lot of the time, research is going on but nobody turns it into something that helps the masses, because there’s no way to do it if there’s no way to pay for it. It’s not about greed or getting rich. How can you build a company that can distribute your product if you can’t get the money? What I have seen recently, though, is a lot of grant money going toward practical applications of mental health and suicide prevention programs instead of research. They’re pushing a lot of money at the federal and state level into getting these technologies and treatment options out into the population, and I think that’s wonderful.


Access to care is a major barrier to treatment. How can this technology help address that? 

One of the huge problems in mental health is a lack of resources. Two-thirds of kids who are taken to a clinic for behavioral health issues are seen by non-behavioral health professionals because there aren’t enough clinicians. Especially in poorer communities, there just aren’t resources. But now we have the ability to use technology to screen people and find out who needs what kind of help. Instead of sending everyone to the emergency department, you can get people tested earlier and more often, and find treatment options that are a lot less invasive. To have something that can objectively figure out whether someone needs help is a good first step.

This interview has been edited and condensed for clarity.


At HORAN Campus Health, we champion bold innovations and offer customized health plans, resources, and tools designed to improve the overall health and wellbeing of your students. Together, we can offer the holistic support students need to thrive on campus and succeed in life.

To get started, visit our Campus Health webpage to fill out a simple contact form or get in touch with Phillip Arrington, Vice President of HORAN Campus Health, at PhillipA@horanassoc.com.