CBS News

Can AI help fill the therapist shortage? Mental health apps show promise and pitfalls


Providers of mental health services are turning to AI-powered chatbots designed to help fill the gaps amid a shortage of therapists and growing demand from patients. 

But not all chatbots are equal: some offer helpful advice, while others can be ineffective or even harmful. Woebot Health uses AI to power its mental health chatbot, called Woebot. The challenge is to protect people from harmful advice while safely harnessing the power of artificial intelligence.

Woebot founder Alison Darcy sees her chatbot as a tool that could help people when therapists are unavailable. Therapists can be hard to reach during panic attacks at 2 a.m. or when someone is struggling to get out of bed in the morning, Darcy said. 

But phones are right there. “We have to modernize psychotherapy,” Darcy said.

Darcy says most people who need help aren’t getting it, with stigma, insurance, cost and wait lists keeping many from mental health services. And the problem has gotten worse since the COVID-19 pandemic. 

“It’s not about how can we get people in the clinic?” Darcy said. “It’s how can we actually get some of these tools out of the clinic and into the hands of people?”

How AI-powered chatbots work to support therapy

Woebot acts as a kind of pocket therapist. It uses a chat function to help manage problems such as depression, anxiety, addiction and loneliness.

The app is trained on large amounts of specialized data to help it understand words, phrases and emojis associated with dysfunctional thoughts. Woebot challenges that thinking, in part mimicking a type of in-person talk therapy called cognitive behavioral therapy, or CBT.

Woebot Health founder Alison Darcy shows Dr. Jon LaPook how Woebot works. (60 Minutes)


Woebot Health reports 1.5 million people have used the app since it went live in 2017. For now, the app is available only through an employer benefit plan or with access provided by a health care professional. At Virtua Health, a nonprofit health care company in New Jersey, patients can use it free of charge.

Dr. Jon LaPook, chief medical correspondent for CBS News, downloaded Woebot and used a unique access code provided by the company. Then, he tried out the app, posing as someone dealing with depression. After several prompts, Woebot wanted to dig deeper into why he was so sad. Dr. LaPook came up with a scenario, telling Woebot he feared the day his child would leave home. 

He answered one prompt by writing: “I can’t do anything about it now. I guess I’ll just jump that bridge when I come to it,” purposefully using “jump that bridge” instead of “cross that bridge.” 

Based on Dr. LaPook’s choice of words, Woebot detected that something might be seriously wrong and offered him the option of contacting specialized helplines.

Typing “jump that bridge” on its own, without “I can’t do anything about it now,” did not trigger a suggestion to seek further help. Like a human therapist, Woebot is not foolproof and should not be counted on to detect whether someone might be suicidal.
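Rules-based systems typically catch combinations like this with explicit pattern matching. The Python sketch below is a hypothetical illustration of the idea, not Woebot’s actual code; the phrase lists are invented for the example.

    # Hypothetical rules-based risk check; the phrase lists are invented
    # for illustration and far simpler than any real clinical system.
    RISK_PHRASES = {"jump that bridge", "end it all"}
    HOPELESSNESS_CUES = {"can't do anything", "no way out"}

    def should_offer_helplines(message: str) -> bool:
        """Escalate only when a risk phrase co-occurs with a hopelessness cue."""
        text = message.lower()
        has_risk = any(phrase in text for phrase in RISK_PHRASES)
        has_cue = any(cue in text for cue in HOPELESSNESS_CUES)
        return has_risk and has_cue

    # Mirrors the experiment above: the combined sentence escalates,
    # while the phrase on its own does not.
    print(should_offer_helplines(
        "I can't do anything about it now. I guess I'll just jump that bridge."))  # True
    print(should_offer_helplines("I'll jump that bridge when I come to it."))      # False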

Computer scientist Lance Eliot, who writes about artificial intelligence and mental health, said AI has the ability to pick up on nuances of conversation.

“[It’s] able to in a sense mathematically and computationally figure out the nature of words and how words associate with each other. So what it does is it draws upon a vast array of data,” Eliot said. “And then it responds to you based on prompts or in some way that you instruct or ask questions of the system.”
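One common way a system can “figure out the nature of words” mathematically is to map each word to a vector of numbers and measure how close the vectors lie. The toy Python example below uses made-up three-dimensional vectors; real systems learn embeddings with hundreds of dimensions from vast text corpora.

    import math

    # Made-up toy vectors purely for illustration; real word embeddings
    # are learned from large text corpora, not written by hand.
    EMBEDDINGS = {
        "sad":      [0.9, 0.1, 0.0],
        "hopeless": [0.8, 0.2, 0.1],
        "bridge":   [0.1, 0.9, 0.3],
    }

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norms

    # Words with related meanings score high; unrelated words score low.
    print(cosine_similarity(EMBEDDINGS["sad"], EMBEDDINGS["hopeless"]))  # ~0.98
    print(cosine_similarity(EMBEDDINGS["sad"], EMBEDDINGS["bridge"]))    # ~0.21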

Computer scientist Lance Eliot (60 Minutes)


To do its job, the system has to get its responses from somewhere. Systems like Woebot, which use rules-based AI, are usually closed: they’re programmed to respond only with information stored in their own databases.

Woebot’s team of staff psychologists, medical doctors, and computer scientists construct and refine a database of research from medical literature, user experience, and other sources. Writers build questions and answers, which they revise in weekly remote video sessions. Woebot’s programmers engineer those conversations into code.

Generative AI, by contrast, can produce original responses based on information drawn from across the internet, which makes it less predictable.
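The difference is easy to see in miniature. A closed, rules-based responder can only return replies a human wrote in advance, as in the hypothetical Python sketch below (the keywords and replies are invented for the example); a generative model composes new text on the fly, which is where the unpredictability comes in.

    # Hypothetical closed, rules-based responder: every reply was written
    # in advance by a human; nothing is generated on the fly.
    CURATED_REPLIES = {
        "anxious": "Let's try a grounding exercise. Can you name five things you can see?",
        "lonely": "Feeling lonely is hard. Want to explore what might be behind it?",
    }

    def rules_based_reply(message: str) -> str:
        for keyword, reply in CURATED_REPLIES.items():
            if keyword in message.lower():
                return reply
        # A closed system falls back to a safe, pre-written default rather
        # than improvising an answer the way a generative model might.
        return "I'm not sure I follow. Can you tell me more about how you're feeling?"

    print(rules_based_reply("I've been feeling really anxious lately"))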

Pitfalls of AI mental health chatbots

The National Eating Disorders Association’s AI-powered chatbot, Tessa, was taken down after it provided potentially harmful advice to people seeking help.

Ellen Fitzsimmons-Craft, a psychologist specializing in eating disorders at Washington University School of Medicine in St. Louis, helped lead the team that developed Tessa, a chatbot designed to help prevent eating disorders.

She said what she helped develop was a closed system, one that could not offer advice the programmers had not anticipated. But that’s not what happened when Sharon Maxwell tried it out.

Maxwell, who had been in treatment for an eating disorder and now advocates for others, asked Tessa how it helps people with eating disorders. Tessa started out well, saying it could share coping skills and get people needed resources.

But as Maxwell persisted, Tessa began giving advice that ran counter to standard guidance for someone with an eating disorder. Among other things, it suggested lowering calorie intake and using tools like a skinfold caliper to measure body composition.

“The general public might look at it and think that’s normal tips. Like, don’t eat as much sugar. Or eat whole foods, things like that,” Maxwell said. “But to someone with an eating disorder, that’s a quick spiral into a lot more disordered behaviors and can be really damaging.”

Sharon Maxwell (60 Minutes)


She reported her experience to the National Eating Disorders Association, which featured Tessa on its website at the time. Shortly afterward, the association took Tessa down.

Fitzsimmons-Craft said the problem with Tessa began after Cass, the tech company she had partnered with, took over the programming. She said Cass explained that the harmful messages appeared after people were pushing Tessa’s question-and-answer feature.

“My understanding of what went wrong is that, at some point, and you’d really have to talk to Cass about this, but that there may have been generative AI features that were built into their platform,” Fitzsimmons-Craft said. “And so my best estimation is that these features were added into this program as well.”

Cass did not respond to multiple requests for comment.

Some rules-based chatbots have their own shortcomings. 

“Yeah, they’re predictive,” said social worker Monika Ostroff, who runs a nonprofit eating disorders organization. “Because if you keep typing in the same thing and it keeps giving you the exact same answer with the exact same language, I mean, who wants to do that?”

Ostroff had been in the early stages of developing her own chatbot when she heard from patients about what happened with Tessa. It made her question using AI for mental health care. She said she’s concerned about losing something fundamental about therapy: being in a room with another person. 

“The way people heal is in connection,” she said. Ostroff doesn’t think a computer can do that.

The future of AI’s use in therapy

Unlike therapists, who are licensed in the state where they practice, mental health apps are largely unregulated.

Ostroff said AI-powered mental health tools, especially chatbots, need to have guardrails. “It can’t be a chatbot that is based in the internet,” Ostroff said.

Even with the potential issues, Fitzsimmons-Craft hasn’t given up on the idea of using AI chatbots for therapy.

“The reality is that 80% of people with these concerns never get access to any kind of help,” Fitzsimmons-Craft said. “And technology offers a solution – not the only solution, but a solution.”




CBS News

American Airlines’ new system cracks down on passengers trying to board plane early






American Airlines has been testing a new boarding system in Tucson and two other airports that prevents passengers from boarding before their group is called. American will add the system to 100 airports ahead of the Thanksgiving holiday, with more in the coming months.






CBS News

Putin just approved a new nuclear weapons doctrine for Russia. Here’s what it means.



Russian President Vladimir Putin approved changes to his country’s nuclear doctrine this week, formally amending the conditions — and lowering the threshold — under which Russia would consider using its nuclear weapons. Moscow announced Tuesday that Putin had signed off on the changes to the doctrine, formally known as “The basics of state policy in the field of nuclear deterrence,” as Ukraine launched its first strike deeper into Russia using U.S.-supplied missiles.

The updated doctrine states that Russia will treat an attack by a non-nuclear state that is supported by a country with nuclear capabilities as a joint attack by both. That means any attack on Russia by a country that’s part of a coalition could be seen as an attack by the entire group. 

Under the doctrine, Russia could theoretically consider any major attack on its territory, even with conventional weapons, by non-nuclear-armed Ukraine sufficient to trigger a nuclear response, because Ukraine is backed by the nuclear-armed United States.

Putin has threatened to use nuclear weapons in Ukraine multiple times since he ordered the full-scale invasion of the country on Feb. 24, 2022, and Russia has repeatedly warned the West that if Washington allowed Ukraine to fire Western-made missiles deep into its territory, it would consider the U.S. and its NATO allies to be directly involved in the war. 

U.S. officials said Ukraine fired eight U.S.-made ATACMS missiles into Russia’s Bryansk region early Tuesday, just a couple of days after President Biden gave Ukraine permission to fire the weapons deeper into Russian territory. ATACMS are powerful weapons with a maximum range of almost 190 miles.


(Video: Ukraine strikes Russia with U.S.-supplied missiles, 02:24)

“This is the latest instance of a long string of nuclear rhetoric and signaling that has been coming out of Moscow since the beginning of this full-scale invasion,” Mariana Budjeryn, a senior research associate at Harvard’s Belfer Center, told German broadcaster Deutsche Welle when the change to Russia’s nuclear doctrine was first proposed last month.

“The previous version of the Russian doctrine adopted in 2020 allowed also a nuclear response to a large-scale conventional attack, but only in extreme circumstances where the very survival of the state was at stake,” Budjeryn noted. “This formulation has changed to say, well, extreme circumstances that jeopardize the sovereignty of Russia. Well, what does that really mean and who defines what serious threats to sovereignty might constitute?”

Budjeryn said Russia had already used weapons against Ukraine that could carry a nuclear payload.

“Russia has been using a number of delivery systems of missiles that [can] also come with a nuclear warhead. So these are dual capable systems. For example, Iskander M short range ballistic missiles. Those have been used extensively in this war by Russia. So when we have an incoming from Russia to Ukraine and we see that it’s an Iskander missile, we don’t know if it’s nuclear tipped or conventionally tipped,” Budjeryn said.

Ukrainian parliamentarian Oleksandra Ustinova, who says she helped lobby the Biden administration for permission for Ukraine to fire the ATACMS deeper inside Russia, told CBS News she didn’t believe Putin would actually carry out a nuclear strike.

“He keeps playing and pretending like he’s going to do something,” Ustinova said. “I’ve been saying since day one that he’s a bully, and he’s not going to do that.”





CBS News

In 1967, Louisa Dunne was found murdered in her U.K. home by a neighbor. A suspect has just been arrested.



A 92-year-old man has been charged in the U.K. with the murder and rape of a woman almost six decades ago, British police said Wednesday.

Louisa Dunne, 75, was found dead by a neighbor inside her home in the southwestern English city of Bristol on June 28, 1967.

Her cause of death was recorded as strangulation and asphyxiation.

The case remained cold for 57 years until nonagenarian Ryland Headley, of Ipswich in eastern England, was arrested on Tuesday and subsequently charged.

The arrest came after Avon and Somerset Police began a review of the case last year that included further forensic examination of items relating to it.

“This development marks a hugely significant moment in this investigation,” the force’s detective inspector Dave Marchant said in a statement. “We’ve updated Louisa’s family about this charging decision and a specialist liaison officer will continue to support them in the coming days, weeks and months.”

Marchant said the public may see “operational police activity in the Ipswich area” as a result of the arrest, the BBC reported.

“We recognise this will also come as a shock to the community in Easton,” Marchant said.

Headley appeared in court in Bristol via video-link on Wednesday and was remanded in custody. He was not asked to enter pleas to the two charges. Headley spoke only to confirm his name, date of birth and address, according to the BBC.

ITV News noted that it is believed to be the oldest arrest in a cold case murder investigation in British history.

Police did not give details about the new forensic analysis in the case, but DNA and genetic genealogy tests are often key to solving decades-old cold cases. Just last week, investigators in the U.S. announced that they used DNA evidence to solve a 65-year-old cold case involving a 7-year-old boy whose body was found in a culvert.





