Kai Koerber was a junior at Marjory Stoneman Douglas High School when a gunman killed 14 students and three staff members there on Valentine’s Day in 2018.
Seeing his peers — and himself — struggle with returning to normal, he wanted to do something to help people manage their emotions on their own terms.
While some of his classmates at the Parkland, Florida, school have worked on advocating for gun control, entered politics or simply taken a step back to heal and focus on their studies, Koerber’s background in technology — he’d originally wanted to be a rocket scientist — led him in a different direction: to build a smartphone app.
The result was Joy: AI Wellness Platform, which uses artificial intelligence to suggest bite-sized mindfulness activities for people based on how they are feeling. The algorithm Koerber’s team built is designed to recognize how people feel from the sounds of their voices — regardless of the words or language they speak.
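Koerber hasn't disclosed how the model works under the hood, but a minimal sketch of language-independent voice-emotion recognition, which typically classifies acoustic qualities such as pitch and timbre rather than words, might look like the following. The label set, features and training data here are placeholders, not Joy's actual design:

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["neutral", "happy", "sad", "angry"]  # hypothetical label set

def acoustic_features(wav_path: str) -> np.ndarray:
    """Summarize a clip with features that carry no lexical content."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)   # timbre
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)        # pitch contour
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [np.nanmean(f0), np.nanstd(f0)]])

# Placeholder training data; a real system needs labeled voice recordings.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 42))   # 20 MFCC means + 20 stds + 2 pitch stats
y_train = rng.choice(EMOTIONS, size=200)
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

def predicted_mood(wav_path: str) -> str:
    """Classify a voice clip into one of the hypothetical mood labels."""
    return clf.predict(acoustic_features(wav_path).reshape(1, -1))[0]
```

Production systems generally use far richer features and neural models trained on large labeled speech corpora, but the principle is the same: the classifier hears how something is said, not what is said.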
“In the immediate aftermath of the tragedy, the first thing that came to mind after we’ve experienced this horrible, traumatic event — how are we going to personally recover?” he said. “It’s great to say OK, we’re going to build a better legal infrastructure to prevent gun sales, increased background checks, all the legislative things. But people really weren’t thinking about … the mental health side of things.”
Like many of his peers, Koerber said he suffered from post-traumatic stress disorder for a “very long time,” and that it has only recently gotten a little better.
“So, when I came to Cal, I was like, ‘Let me just start a research team that builds some groundbreaking AI and see if that’s possible,’” said the 23-year-old, who graduated from the University of California, Berkeley earlier this year. “The idea was to provide a platform to people who were struggling with, let’s say sadness, grief, anger … to be able to get a mindfulness practice or wellness practice on the go that meets our emotional needs on the go.”
He said it was important to offer activities that can be done quickly, sometimes lasting just a few seconds, wherever the user might be.
Mohammed Zareef-Mustafa, a former classmate of Koerber’s who’s been using the app for a few months, said the voice-emotion recognition part is “different than anything I’ve ever seen before.”
“I use the app about three times a week, because the practices are short and easy to get into. It really helps me quickly de-stress before I have to do things, like job interviews,” he said.
To use Joy, you simply speak into the app. The AI is supposed to recognize how you are feeling from your voice, then suggest short activities.
It doesn’t always get your mood right, so it’s possible to manually pick your disposition. Let’s say you are feeling “neutral” at the moment. The app suggests several activities, such as a 15-second exercise called “mindful consumption” that encourages you to “think about all the lives and beings involved in producing what you eat or use that day.”
Another activity helps you practice making an effective apology. Feeling sad? A suggestion pops up asking you to track how many times you’ve laughed over a seven-day period and tally it up at the end of the week to see what moments gave you a sense of joy, purpose or satisfaction.
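Taken together, that flow (listen, guess the mood, let the user correct it, then serve a short practice) can be sketched as a simple lookup. This reuses the hypothetical predicted_mood() from the earlier sketch, and any activity not named in this article is invented for illustration:

```python
from typing import Optional

# Activities marked "article" are described above; the rest are invented.
SUGGESTIONS = {
    "neutral": ["mindful consumption (15 seconds)",         # article
                "practice an effective apology"],           # article
    "sad":     ["track your laughs over a seven-day period"],  # article
    "happy":   ["jot down one thing you're grateful for"],  # invented
    "angry":   ["30-second breathing exercise"],            # invented
}

def suggest(wav_path: str, override: Optional[str] = None) -> list[str]:
    """Guess the mood from the voice, unless the user picks one manually."""
    mood = override or predicted_mood(wav_path)
    return SUGGESTIONS.get(mood, SUGGESTIONS["neutral"])
```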
The iPhone app is available for an $8 monthly subscription, with a discount if you subscribe for a whole year. It’s a work in progress and, as with many AI systems, it’s designed to become more accurate as more people use it.
A plethora of wellness apps on the market claim to help people with mental health issues, but it’s not always clear whether they work, said Colin Walsh, a professor of biomedical informatics at Vanderbilt University who has studied the use of AI in suicide prevention. According to Walsh, it is feasible to take someone’s voice and glean some aspects of their emotional state.
“The challenge is if you as a user feel like it’s not really representing what you think your current state is like, that’s an issue,” he said. “There should be some mechanism by which that feedback can go back.”
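One plausible way to build the feedback mechanism Walsh describes is to log each manual correction alongside the clip’s features so a later training run can learn from it. This again leans on the hypothetical acoustic_features() above and assumes nothing about Joy’s internals:

```python
import csv

def record_correction(wav_path: str, predicted: str, corrected: str,
                      log_path: str = "feedback_log.csv") -> None:
    """Append the user's corrected label, with the clip's features, for retraining."""
    feats = acoustic_features(wav_path)  # from the first sketch
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([corrected, predicted, *feats.tolist()])

# On some schedule, logged corrections would be merged into the training set
# and the classifier refit, so user feedback "goes back" into the model.
```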
The stakes also matter. Facebook, for instance, faced criticism for its suicide prevention tool, which used AI (as well as humans) to flag users who may be contemplating suicide, and — in some serious cases — contact law enforcement to check on the person. But when the stakes are lower, Walsh said, such as when the technology simply directs someone to spend some time outside, it’s unlikely to cause harm.
Koerber said people tend to forget, after mass shootings, that survivors don’t just “bounce back right away” from the trauma they experienced. It takes years to recover.
“This is something that people carry with them, in some way, shape or form, for the rest of their lives,” he said.