Mental health chatbots powered by artificial intelligence developed as a therapy support tool
Artificial intelligence has found its way into nearly every part of our lives – forecasting weather, diagnosing diseases, writing term papers. And now, AI is probing that most human of places, our psyches — offering mental health support, just you and a chatbot, available 24/7 on your smartphone. There’s a critical shortage of human therapists and a growing number of potential patients. AI-driven chatbots are designed to help fill that gap by giving therapists a new tool. But as you’re about to see, like human therapists, not all chatbots are equal. Some can help heal; some can be ineffective or worse. One pioneer in the field who has had notable success joining tech with treatment is Alison Darcy. She believes the future of mental health care may be right in our hands.
Alison Darcy: We know the majority of people who need care are not getting it. There’s never been a greater need, and the tools available have never been as sophisticated as they are now. And it’s not about how can we get people in the clinic. It’s how can we actually get some of these tools out of the clinic and into the hands of– of people.
Alison Darcy … a research psychologist and entrepreneur … decided to use her background in coding and therapy to build something she believes can help people in need: a mental health chatbot she named Woebot.
Dr. Jon LaPook: Like woe is me?
Alison Darcy: Woe is me.
Dr. Jon LaPook: MmmHmm.
Woebot is an app on your phone… kind of a pocket therapist that uses the text function to help manage problems like depression, anxiety, addiction, and loneliness… and do it on the run.
Dr. Jon LaPook: I think a lot of people out there watching this are gonna be thinking, “Really? Computer psychiatry? (laugh) Come on.”
Alison Darcy: Well, I think it’s so interesting that our field hasn’t, you know, had a great deal of innovation since the basic architecture was sort of laid down by Freud in the 1890s, right? That– that’s really that sort of idea of, like, two people in a room. But that’s not how we live our lives today. We have to modernize psychotherapy.
Woebot is trained on large amounts of specialized data to help it recognize words, phrases, and emojis associated with dysfunctional thoughts … and challenge that thinking, in part mimicking a type of in-person talk therapy called cognitive behavioral therapy – or CBT.
Alison Darcy: It’s actually hard to find a CBT practitioner. And also, if you’re actually not by the side of your patient when they are struggling to get out of bed in the morning or at 2:00 a.m. when they can’t sleep and they’re feeling panicked, then we’re actually leaving clinical value on the table.
Dr. Jon LaPook: And even for people who want to go to a therapist there are barriers, right?
Alison Darcy: Sadly, the biggest barrier we have is stigma. But there’s, you know, insurance. There’s cost and there’s wait lists. I mean, and this problem has only grown significantly since the pandemic. And it doesn’t appear to be going away.
Since Woebot went live in 2017, the company reports one and a half million people have used it, which you can now do only through an employer benefit plan or with access from a health professional. At Virtua Health, a nonprofit health care company in New Jersey, patients can use it free of charge.
We downloaded Woebot, entered a unique code that can only be provided by the company … then tried it out.
Alison Darcy: We found that for people to sorta connect with their mood, we offer, like, those emojis, which allows people to sort of connect in a nonverbal way.
I posed as someone who is depressed. After several prompts, Woebot wanted to dig deeper into why I was sad. So I came up with a scenario – that I feared the day my child would leave home.
Dr. Jon LaPook: “Imagine what your negative emotions would be saying if they had a voice. Can you do that?” “Write one of those negative thoughts here.” “I can’t do anything about it now. I guess I’ll just jump that bridge when I come to it.”
The normal expression is “cross that bridge,” and the chatbot detected that something might be seriously wrong.
Dr. Jon LaPook: But, let’s see. “Jon, I’m hearing you say, ‘I can’t do anything about it. I guess I’ll just jump that bridge when I come to it.’ And I think you might need more support than I can offer. A trained listener will be able to help you in ways that I can’t. Would you like to take a look at some specialized helplines?”
Alison Darcy: Now it’s not our job to say this– you are in crisis or you’re not, because AI can’t really do that in this context very well yet. But what it has caught is, “Huh, there is something concerning about the way that Jon just phrased that.”
Saying only “jump that bridge,” without combining it with “I can’t do anything about it now,” did not trigger a suggestion to consider getting further help. Like a human therapist, Woebot is not foolproof and should not be counted on to detect whether someone might be suicidal.
Dr. Jon LaPook: And how would it know that, “Jump that bridge,” where is it getting that knowledge that, “Jump that–“
Alison Darcy: Well, it has been– it has been trained on a lot of data and a lot of us, you know, humans labeling the phrases and things that we see. And so it’s picking up on kind of the sentiment.
Computer scientist Lance Eliot, who writes about artificial intelligence and mental health, says AI has the ability to pick up on nuances of conversation.
Dr. Jon LaPook: How does it know how to do that?
Lance Eliot: The system is able to in a sense mathematically and computationally figure out the nature of words and how words associate with each other. So what it does is it draws upon a vast array of data. And then it responds to you based on prompts or in some way that you instruct or ask questions of the system.
To do its job, the system must go somewhere to come up with appropriate responses.
Systems using what’s called rules-based AI are usually closed, meaning programmed to respond only with information stored in their own databases. Then there’s generative AI, in which the system can generate original responses based on information from the internet.
Lance Eliot: If you look at ChatGPT, that’s a type of generative AI. It’s very conversational, very fluent. But it also means that it tends to make it open-ended, that it can say things that you might not necessarily want it to say. It’s not as predictable. While a rules-based system is very predictable. Woebot is a system based on rules that’s been very kind of controlled, so that way it doesn’t say the wrong things.
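To make that distinction concrete, here is a minimal, hypothetical sketch of how a rules-based (closed) chatbot works: it answers only from a fixed, pre-written script and pattern-matches for concerning phrases. This is not Woebot’s actual code; the phrases, responses, and function names are invented for illustration.

```python
# Hypothetical illustration of a rules-based (closed) chatbot.
# The scripted responses and safety phrases below are invented for this
# example; they are not Woebot's actual scripts or detection rules.

SCRIPTED_RESPONSES = {
    "sad": "I'm sorry you're feeling down. What thought is behind that feeling?",
    "anxious": "Let's slow down together. What worry keeps coming back?",
}

# Phrases a clinical team might label as concerning and route to human help.
SAFETY_PHRASES = ["jump that bridge", "end it all"]

FALLBACK = "I'm not sure I understood. Could you say that another way?"
ESCALATION = ("I think you might need more support than I can offer. "
              "Would you like to look at some specialized helplines?")


def rules_based_reply(user_text: str) -> str:
    """Return a reply drawn only from the pre-written script."""
    text = user_text.lower()

    # Safety check first: match against clinician-labeled phrases.
    if any(phrase in text for phrase in SAFETY_PHRASES):
        return ESCALATION

    # Otherwise, respond only with content stored in the closed database.
    for keyword, reply in SCRIPTED_RESPONSES.items():
        if keyword in text:
            return reply
    return FALLBACK


if __name__ == "__main__":
    print(rules_based_reply("I guess I'll just jump that bridge when I come to it."))
    print(rules_based_reply("I've been feeling sad lately."))
```

A generative chatbot, by contrast, would pass the user’s message to a large language model and return whatever text the model produces, which is why its answers are fluent but not fully predictable.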
Woebot aims to use AI to bond with users and keep them engaged.
Its team of staff psychologists, medical doctors, and computer scientists constructs and refines a database of research drawn from medical literature, user experience, and other sources.
Then, writers build questions and answers, and revise them in weekly remote video sessions.
Woebot’s programmers engineer those conversations into code. Because Woebot is rules-based, it’s mostly predictable. But chatbots using generative AI, which draws on the internet, are not.
Lance Eliot: Some people sometimes refer to it as an AI hallucination. AI can in a sense make mistakes or make things up or be fictitious.
Sharon Maxwell discovered that last spring after hearing there might be a problem with advice offered by Tessa, a chatbot designed to help prevent eating disorders, which, left untreated, can be fatal. Maxwell, who had been in treatment for an eating disorder of her own and advocates for others … challenged the chatbot.
Sharon Maxwell: So I asked it, “How do you help folks with eating disorders?” and it told me that it could give folks coping skills. Fantastic. It could give folks resources to find professionals in the eating disorder space. Amazing.
But the more she persisted, the more Tessa gave her advice that ran counter to usual guidance for someone with an eating disorder. For example, it suggested, among other things, lowering calorie intake and using tools like a skinfold caliper to measure body composition.
Sharon Maxwell: The general public might look at it and think that’s normal tips. Like, don’t eat as much sugar. Or eat whole foods, things like that. But to someone with an eating disorder, that’s a quick spiral into a lot more disordered behaviors and can be really damaging.
Maxwell reported her experience to the National Eating Disorders Association, which had featured Tessa on its website at the time. Shortly after, the association took Tessa down.
Ellen Fitzsimmons-Craft, a psychologist specializing in eating disorders at Washington University School of Medicine in St. Louis, helped lead the team that developed Tessa.
Ellen Fitzsimmons-Craft: That was never the content that our team wrote or programmed into the bot that we deployed.
Dr. Jon LaPook: So initially, there was no possibility of something unexpected happening?
Ellen Fitzsimmons-Craft: Correct.
Dr. Jon LaPook: You developed something that was a closed system. You knew exactly for this question, I’m gonna get this answer.
Ellen Fitzsimmons-Craft: Yep.
The problem began, she told us, after a health care technology company she and her team had partnered with, named Cass, took over the programming. She says Cass explained the harmful messages appeared when people were pushing Tessa’s question and answer feature.
Dr. Jon LaPook: What’s your understanding of what went wrong?
Ellen Fitzsimmons-Craft: My understanding of what went wrong is that, at some point, and you’d really have to talk to Cass about this, but that there may have been generative AI features that were built into their platform. And so my best estimation is that these features were added into this program as well.
Cass did not respond to multiple requests for comment.
Dr. Jon LaPook: Does your negative experience with Tessa, you know, being used in a way you didn’t design, does that sour you towards using AI at all to address mental health issues?
Ellen Fitzsimmons-Craft: I wouldn’t say that it turns me off to the idea completely because the reality is that 80% of people with these concerns never get access to any kind of help. And technology offers a solution, not the only solution, but a solution.
Social worker Monika Ostroff, who runs a nonprofit eating disorders organization, was in the early stages of developing her own chatbot when patients told her about problems they had with Tessa. She told us it made her question using AI for mental health care.
Monika Ostroff: I want nothing more than to help solve the problem of access, because people are dying. Like, this isn’t just somebody sad for a week; this is people are dying. And at the same time, any chatbot could be in some ways a ticking time bomb, right, for a smaller percentage of people.
Especially for those patients who are really struggling, Ostroff is concerned about losing something fundamental about therapy: being in a room with another person.
Monika Ostroff: The way people heal is in connection. And they talk about this one moment where, you know, when you’re– as a human you’ve gone through something. And as you’re describing that, you’re looking at the person, sitting across from you, and there’s a moment where that person just gets it.
Dr. Jon LaPook: A moment of empathy.
Monika Ostroff: MmHmm. You just get it. Like, you really understand it. I don’t think a computer can do that.
Unlike therapists, who are licensed in the state where they practice, most mental health apps are largely unregulated.
Dr. Jon LaPook: Are there lessons to be learned from what happened?
Monika Ostroff: So many lessons to be learned. Chatbots, especially specialty area chatbots need to have guardrails. It can’t be a chatbot that is based in the internet.
Dr. Jon LaPook: It’s tough, right? Because the closed systems are kind of constrained. And they may be right most of the time, but they’re boring eventually. Right? People stop using them?
Monika Ostroff: Yeah, they’re predictive. Because if you keep typing in the same thing and it keeps giving you the exact same answer with the exact same language, I mean, who wants to (laugh) do that?
Protecting people from harmful advice while safely harnessing the power of AI is the challenge now facing companies like Woebot Health and its founder, Alison Darcy.
Alison Darcy: There are going to be missteps if we try and move too quickly. And my big fear is that those missteps ultimately undermine public confidence in the ability of this tech to help at all. But here’s the thing. We have an opportunity to develop these technologies more thoughtfully. And—and so, you know, I hope we —I hope we take it.
Produced by Andrew Wolff. Associate producer, Tadd J. Lascari. Broadcast associate, Grace Conley. Edited by Craig Crawford.
Woman linked to 14 cyanide murders is convicted and sentenced to death in Thailand
A Thai woman believed to be among the worst serial killers in the kingdom’s history was convicted and sentenced to death Wednesday for poisoning a friend with cyanide, in the first of her 14 murder trials.
Sararat Rangsiwuthaporn, 36, an online gambling addict, is accused of swindling thousands of dollars from her victims before killing them with the chemical.
A court in Bangkok convicted her Wednesday for fatally poisoning her friend Siriporn Kanwong.
The two met up near Bangkok in April last year to release fish into the Mae Klong river as part of a Buddhist ritual.
Siriporn collapsed and died shortly afterwards and investigators found traces of cyanide in her body. Last year, police said they collected fingerprints and other evidence from Sararat’s Toyota Forerunner.
Police were then able to link Sararat to previously unsolved cyanide poisonings going back as far as 2015, officers said.
“The court’s decision is just,” Siriporn’s mother, Tongpin Kiatchanasiri, told reporters following the verdict. “I want to tell my daughter that I miss her deeply, and justice has been done for her today.”
Police said Sararat funded her gambling addiction by borrowing money from her victims — in one case as much as 300,000 baht (nearly $9,000) — before killing them and stealing their jewelry and mobile phones.
She lured 15 people — one of whom survived — to take poisoned “herb capsules,” they said.
Sararat faces 13 more separate murder trials, and has been charged with around 80 offenses in total.
Her ex-husband, Vitoon Rangsiwuthaporn — a police lieutenant-colonel — was given 16 months in prison and her former lawyer two years for complicity in Siriporn’s killing, the lawyer for the victim’s family said.
The couple, while divorced, had still been living together, the BBC reported. Police said Vitoon was likely involved in Sararat’s alleged murder of an ex-boyfriend, Suthisak Poonkwan, according to the BBC; after she killed him, police said, Vitoon picked her up in her car and helped her extort money from Suthisak’s friends.
Thailand has been the scene of several sordid and high-profile criminal cases.
Earlier this year, six foreigners were found dead in a luxury Bangkok hotel after a cyanide poisoning believed to be connected to debts worth millions of baht.
Endangered fin whale measuring nearly 50 feet found dead near Anchorage, drawing curious onlookers to beach
An endangered fin whale that washed up near a coastal trail in Alaska’s largest city has attracted curious onlookers while biologists seek answers as to what caused the animal’s death.
The carcass found over the weekend near Anchorage was 47 feet (14.3 meters) long – comparable to the width of a college basketball court – and female, according to National Oceanic and Atmospheric Administration biologists.
Barbara Mahoney, a NOAA biologist examining the whale, told the Anchorage Daily News the whale was likely 1 to 3 years old.
Fin whales are the second-largest whale species, according to NOAA Fisheries, and fully grown can reach up to 85 feet long and weigh between 40 tons and 80 tons. Strikes by ships, entanglements in fishing gear, underwater noise and the effects of climate change are among the threats that fin whales face, according to the agency.
Mandy Keogh, a NOAA marine mammal stranding coordinator, said fin whales generally aren’t seen this close to Anchorage and that recent high tides may have pushed the animal further into the Knik Arm.
People trekked across the mudflats to see the whale, which NOAA biologists and staff from Alaska Veterinary Pathology Services had anchored to the shore Sunday so they would be able to gather samples from the animal. But even after samples are analyzed, it can be difficult to determine a cause of death because of decomposition or a lack of obvious injuries, Keogh said.
Daisy Grandlinard was among the parents who accompanied a group of children to see the whale Monday. As they drew closer, they could smell it, she said.
“It was really interesting for the kids to be able to feel it, touch the bottom because it kind of had tracks on it, like a sled almost. And just to see the size of it, that was pretty cool,” she said. “We had already studied whales a couple of weeks ago, so it was fun to see one in person and say, ‘Oh, that’s what the baleen looks like in real life,’ and ‘Where is the blow hole?’ “
Biologists hoped to complete their work Tuesday, untether the carcass and “let the tide push it or move it,” Mahoney said. “Whatever it does or doesn’t do – we don’t know.”
According to NOAA, the whaling industry killed nearly 725,000 fin whales in the mid-1900s in the Southern Hemisphere alone. Today, the major threat to the species comes from vessel strikes.
Other fin whales have washed ashore along the Western U.S. this year. In August, a large fin whale washed ashore in Southern California and died before rescuers could get to the scene, CBS Los Angeles reported. Officials said the whale, which was not fully grown, was believed to be in poor health due to visible bumps on its skin and a thin build.
In February, a 46-foot-long whale was found washed up on an Oregon shore — emaciated, entangled and covered in what appeared to be wounds from killer whales.
How Laken Riley’s death sent “a reality shock” through the college town of Athens, Georgia
Just two blocks from the University of Georgia campus, in a downtown courtroom in Athens, Georgia, Jose Ibarra is on trial for the murder of 22-year-old Laken Riley, a former UGA student who transferred to the nursing program at Augusta University’s Athens campus.
In late February, Riley was attacked during her morning jog on a trail near the University of Georgia’s intramural fields. As the investigation and trial unfolded, members of the Athens community grappled with a shaken sense of security.
“Just because we’re on campus doesn’t mean, necessarily, that the bad parts of the world can’t get in,” said Allison Mawn, a fourth-year student. “She did everything right. She told friends where she was going, she went on a popular trail during the day. She had her tracking location on. She even managed to call for help, and still it wasn’t enough.”
The case was thrust into the national spotlight when authorities arrested Ibarra, an undocumented Venezuelan immigrant who entered the country two years ago, and charged him with murdering Riley. In the midst of the election cycle, her death quickly became a flashpoint in the immigration debate, with former President Donald Trump and his supporters raising it at rallies and President Biden responding to heckling about it in his State of the Union address.
“In an instant, all the eyes across the country are on us,” said Mawn. “Now you can’t say the name Laken Riley without thinking about undocumented immigrants and illegal immigration.”
Students have organized vigils and prayer groups, and participated in protests, rallies and runs in memory of Riley. A GoFundMe set up by her family amassed over $250,000 in donations that will go to the Laken Hope Foundation, an organization that will “promote safety awareness for women, aid and tuition assistance for nursing students, and children’s healthcare… all causes that Laken felt strongly about.”
For many students, Athens no longer feels like the safe haven they once thought it was. To ease fears, University of Georgia President Jere Morehead announced a $7.3 million campus safety initiative. The campus has been fortified by an expanded university police force, hundreds of additional lights and security cameras and multiple emergency call stations and license plate readers.
Over the past months, “there was an expectation that we would magnify those efforts,” said P. Daniel Silk, the University of Georgia’s associate vice president for public safety. “We want to be more safe and more secure tomorrow than we were yesterday.”
While the Athens community waits for justice to be delivered, students bustle around campus, walking under the newly installed street lights, passing by additional emergency call systems and easing into a heightened security presence. Classes may go on, but the campus and community are changed.
“Regardless of what the verdict is, unfortunately we still lost a student. We still lost a life and nothing’s gonna change that,” said Mawn. “Things are never going to be a hundred percent the same for any of us here.”