Friday, September 30, 2022

Can Smartphones Help Predict Suicide? - The New York Times

A unique research project is tracking hundreds of people at risk for suicide, using data from smartphones and wearable biosensors to identify periods of high danger — and intervene.

CAMBRIDGE, Mass. — In March, Katelin Cruz left her latest psychiatric hospitalization with a familiar mix of feelings. She was, on the one hand, relieved to leave the ward, where aides took away her shoelaces and sometimes followed her into the shower to ensure that she would not harm herself.

But her life on the outside was as unsettled as ever, she said in an interview, with a stack of unpaid bills and no permanent home. It was easy to slide back into suicidal thoughts. For fragile patients, the weeks after discharge from a psychiatric facility are a notoriously difficult period, with a suicide rate around 15 times the national rate, according to one study.

This time, however, Ms. Cruz, 29, left the hospital as part of a vast research project that attempts to use advances in artificial intelligence to do something that has eluded psychiatrists for centuries: to predict who is likely to attempt suicide and when, and then to intervene.

On her wrist, she wore a Fitbit programmed to track her sleep and physical activity. On her smartphone, an app was collecting data about her moods, her movement and her social interactions. Each device was providing a continuous stream of information to a team of researchers on the 12th floor of the William James Building, which houses Harvard’s psychology department.

In the field of mental health, few new areas generate as much excitement as machine learning, which uses computer algorithms to better predict human behavior. There is, at the same time, exploding interest in biosensors that can track a person’s mood in real time, factoring in music choices, social media posts, and facial and vocal expression.

Matthew K. Nock, a Harvard psychologist who is one of the nation’s top suicide researchers, hopes to knit these technologies together into a kind of early-warning system that could be used when an at-risk patient is released from the hospital.

He offers this example of how it could work: The sensor reports that a patient’s sleep is disturbed, she reports a low mood on questionnaires and GPS shows she is not leaving the house. But an accelerometer on her phone shows that she is moving around a lot, suggesting agitation. The algorithm flags the patient. A ping sounds on a dashboard. And, at just the right time, a clinician reaches out with a phone call or a message.
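The article does not describe the lab’s actual software, but the rule Dr. Nock sketches amounts to a simple check over a day’s combined signals. The following is a minimal illustration only; every field name and threshold is hypothetical.

```python
# A minimal sketch, not the lab's system: all field names and thresholds
# below are hypothetical, chosen only to illustrate the kind of rule
# Dr. Nock describes.

def should_flag(day):
    """Return True if a day's combined signals suggest elevated risk."""
    disturbed_sleep = day["sleep_hours"] < 5           # from the wearable
    low_mood = day["mood_score"] <= 3                  # from questionnaires (1-10)
    housebound = day["km_from_home"] < 0.2             # from GPS
    agitated = day["accelerometer_activity"] > 0.8     # from the phone, normalized 0-1
    return disturbed_sleep and low_mood and housebound and agitated

# Example: this combination would ping the researchers' dashboard.
sample_day = {"sleep_hours": 3.5, "mood_score": 2,
              "km_from_home": 0.0, "accelerometer_activity": 0.9}
if should_flag(sample_day):
    print("Flag patient for clinician outreach")
```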

There are plenty of reasons to doubt that an algorithm can ever achieve this level of accuracy. Suicide is such a rare event, even among those at highest risk, that any effort to predict it is bound to result in false positives, forcing interventions on people who may not need them. False negatives could thrust legal responsibility onto clinicians.
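A back-of-the-envelope calculation shows why rarity alone guarantees false alarms. With illustrative numbers that do not come from the study, even a model that catches most true cases and clears most non-cases would still be wrong about the vast majority of the people it flags:

```python
# Illustrative only: none of these numbers come from the study. They show
# how a rare outcome drives down the share of correct alarms, even for a
# seemingly accurate model.
base_rate = 0.005      # assume 0.5% of a high-risk group attempts in a given window
sensitivity = 0.80     # assume the model catches 80% of true cases
specificity = 0.90     # assume it correctly clears 90% of non-cases

true_alarms = sensitivity * base_rate
false_alarms = (1 - specificity) * (1 - base_rate)
ppv = true_alarms / (true_alarms + false_alarms)

print(f"Share of alarms that are true positives: {ppv:.1%}")  # roughly 4%
```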

Algorithms require granular, long-term data from a large number of people, and it’s nearly impossible to observe large numbers of people who die by suicide. Finally, the data needed for this kind of monitoring raises red flags about invading the privacy of some of society’s most vulnerable people.

Dr. Nock is familiar with all these arguments but has persisted, in part out of sheer frustration. “With all due respect to people who’ve been doing this work for decades, for a century, we haven’t learned a great deal about how to identify people at risk and how to intervene,” he said. “The suicide rate now is the same it was literally 100 years ago. So just if we’re being honest, we’re not getting better.”

[Photo: Kayana Szymczak for The New York Times]

For psychiatrists, few tasks are more nerve-racking than caring for patients they know to be at risk for suicide while they are at home and unsupervised.

Dr. Karen L. Swartz, a professor of psychiatry at Johns Hopkins University, calls it “the gray zone.” She was fresh out of training when she first wrestled with this problem, caring for a prickly, intelligent woman who admitted she had suicidal thoughts, and even alluded to a plan, but dreaded the thought of being hospitalized.

Dr. Swartz turned to the woman’s husband for advice. If you force her into the hospital, he said, she will fire you.

So Dr. Swartz decided to take the risk, allowing the woman to remain at home, tweaking her medications and waiting. She spent the next weeks on tenterhooks, and, slowly, the patient improved. “It was one of those things where I just genuinely hoped I was right,” she said. It never gets easier, said Dr. Swartz, who now trains young psychiatrists: With experience, it only becomes clearer that suicidal thoughts can come and go without warning.

“We are asked to predict something that is highly unpredictable,” she said.

Increasingly, health care systems are turning to machine learning to make this call. Algorithms based on vast data sets — drawn from electronic medical records as well as scores of other factors — are used to assign patients a risk score, so that individuals at exceptionally high risk can be provided with extra attention.

Algorithms have proven more accurate than traditional methods, which, according to a 2017 review of published research, had not improved in 50 years and were only slightly better than chance at predicting an outcome. Such algorithms are already used in some clinical settings. Since 2017, the Department of Veterans Affairs has used an algorithm to flag the 0.1 percent of veterans at the highest risk for suicide, a few thousand patients in a population of six million.
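The flagging step itself is a percentile cut. As a toy illustration (the scores below are random numbers standing in for model output, not real VA data), selecting the top 0.1 percent of six million scores yields roughly 6,000 patients:

```python
# A toy illustration of flagging the top 0.1 percent of a population by
# risk score; the scores here are random and stand in for model output.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random(6_000_000)          # hypothetical risk scores for 6 million veterans

cutoff = np.quantile(scores, 0.999)     # the 99.9th percentile
flagged = scores >= cutoff
print(f"Patients flagged: {flagged.sum()}")   # about 6,000 people
```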

This approach has yielded some success. A study published last year in JAMA Network Open found that veterans enrolled in REACH VET, a program for at-risk patients, were 5 percent less likely to have a documented suicide attempt, and less likely to be admitted to a psychiatric facility or visit the emergency room. But the study found no significant change in the rate of suicide.

The expectations that have built up around this research are so high that experts take pains to temper them. Michael Schoenbaum, a senior adviser at the National Institute of Mental Health, compared it to the excitement, 25 years ago, around the search for biological markers for mental illnesses — a case in which, he pointed out, “the optimists were wrong.”

“We are waiting to see when and where and maybe even whether signals like that are valid and reliable,” he said. “The evidence so far, it’s exciting in the sense that any signal is promising. This is something that we couldn’t do before at all.” But, he warned, “we are looking for something we haven’t found yet.”

And some of Dr. Nock’s colleagues say they doubt algorithmic predictions will ever be precise enough to intervene in the narrow window that precedes a suicide attempt.

“It’s certainly not an easily solvable problem,” said Nick Allen, the director of the Center for Digital Mental Health at the University of Oregon, who helped develop EARS, an app that tracks mood based on factors like music choice, facial expression and the use of language.

“It’s probably, in some senses, not a solvable problem, for the same reason that we have school shootings and the same reason that we can’t predict a lot of this kind of stuff,” Dr. Allen said. “You know, the math is just really daunting.”

[Photo: Kayana Szymczak for The New York Times]

On an August afternoon in the William James Building, a lanky data scientist named Adam Bear sat in front of a monitor in Dr. Nock’s lab, wearing flip-flops and baggy shorts, staring at the zigzagging graphs of a subject’s stress levels over the course of a week.

When moods are mapped as data, patterns emerge, and it’s Mr. Bear’s job to look for them. He spent his summer poring through the days and hours of 571 subjects who, after seeking medical care for suicidal thoughts, agreed to be tracked continuously for six months. While they were being tracked, two died by suicide and between 50 and 100 made attempts.

It is, Dr. Nock believes, the largest reservoir of information ever collected about the daily lives of people struggling with suicidal thoughts.

The team is most interested in the days preceding suicide attempts, which would allow time for intervention. Already, some signs have emerged: Although suicidal urges often do not change in the period before an attempt, the ability to resist those urges does seem to diminish. Something simple — sleep deprivation — seems to contribute to that.

Dr. Nock has been looking for ways to study these patients since 1994, when he had an experience that shocked him profoundly. During an undergraduate internship in the United Kingdom, he was assigned to a locked unit for violent and self-injurious patients. There, he saw things he had never encountered: Patients had cuts up and down their arms. One of them pulled out his own eyeball. A young man he befriended, who seemed to be improving, was later found in the Thames.

Another shock came when he began to pepper the clinicians with questions about treating these patients and realized how little they knew: He recalls being told, “We give them some medicine, we talk to them and we hope they get better.”

One reason, he concluded, was that it had never been possible to study a large number of people with suicidal ideation in the same way that we are able to observe patients with heart disease or tuberculosis. “Psychology hasn’t advanced as much as other sciences because we’ve been largely doing it wrong,” he said. “We haven’t gone out and found some behavior that is important in nature, and gone out and observed it.”

But with the advent of phone-based apps and wearable sensors, he added, “we have data from so many different channels, and we have, increasingly, the ability to analyze those data, and observe people as they’re out living their lives.”

One dilemma in designing the study was what to do when participants expressed a strong desire to hurt themselves. Dr. Nock decided the researchers should intervene.

“There’s a downside to this because you get fewer attempts and fewer suicides, because, scientifically, we’re now decreasing our likelihood of finding a signal,” he said. But, he added, “I keep coming back to the issue of, what if it was my child?”

Interventions have become a routine part of life in the lab. If, in a routine questionnaire, a subject reports a strong desire to harm themselves, and it is between the hours of 9 a.m. and 9 p.m., they receive a call within 15 minutes from one of the researchers, who asks whether they have made an attempt.
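As a rough sketch of that outreach rule (the threshold, field name and 0-to-10 scale here are assumptions, not the lab’s actual criteria), the trigger amounts to a score check plus a time-window check:

```python
# A rough sketch of the outreach rule described above; the threshold,
# field name, and 0-10 scale are hypothetical.
from datetime import datetime

def needs_outreach_call(response, now=None):
    """Trigger a researcher call if a subject reports a strong desire to
    self-harm and the local time is between 9 a.m. and 9 p.m."""
    now = now or datetime.now()
    strong_desire = response.get("desire_to_harm", 0) >= 8   # on a 0-10 scale
    in_calling_hours = 9 <= now.hour < 21
    return strong_desire and in_calling_hours

# Example: a score of 10 reported at 8:45 p.m. would prompt a call within 15 minutes.
print(needs_outreach_call({"desire_to_harm": 10},
                          now=datetime(2022, 3, 14, 20, 45)))
```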

“We’re kind of this faceless person, so there’s less discomfort,” said Narise Ramlal, a research assistant in the lab. But Dr. Nock wonders — and hopes to test — whether digital interventions may prove to be more effective.

“Many people don’t want a human to contact them when they’re a high risk,” he said. “Not to say that we’re going to replace humans with machines, but they can probably be a lot more efficient than we are now.”

[Photo: Kayana Szymczak for The New York Times]

It was around 9 p.m., a few weeks into the six-month study, when the question popped up on Ms. Cruz’s phone: “Right now how strong is your desire to kill yourself?”

Without stopping to think, she dragged her finger all the way to the end of the bar: 10. A few seconds later, she was asked to choose between two statements: “I am definitely not going to kill myself today” and “I am definitely going to kill myself today.” She scrolled to the second.

Fifteen minutes later, her phone rang. It was a member of the research team. The woman called 911 and kept Ms. Cruz on the line until the police knocked on her door, and she passed out. Later, when she regained consciousness, a medical team was giving her a sternum rub, a painful procedure used to revive people after overdoses.

Ms. Cruz has a pale, seraphic face and a fringe of dark curls. She had been studying for a nursing degree when a cascade of mental health crises sent her life swerving in a different direction. She maintains an A-student’s nerdy interest in science, joking that the rib cage on her T-shirt is “totally anatomically correct.”

Right away, she had been intrigued by the trial, and she responded dutifully six times a day, when the apps on her phone surveyed her about her suicidal thoughts. The pings were intrusive, but also comforting. “It felt like I wasn’t being ignored,” she said. “To have somebody know how I feel, that takes some of the weight off.”

On the night of her attempt, she was alone in a hotel room in Concord. She didn’t have enough money for another night there, and her possessions were mounded in trash bags on the floor. She was tired, she said, “of feeling like I had nobody and nothing.” Looking back, Ms. Cruz said she thought the technology — its anonymity and lack of judgment — made it easier to ask for help.

“I think it’s almost easier to tell the truth to a computer,” she said.

But many in the field are wary of the idea that technology can ever substitute for a clinician’s care. One reason is that patients in a crisis become skilled at deception, said Justin Melnick, 24, a doctoral student who survived a suicide attempt in 2019 and is now an advocate for people with mental illness.

He recalled cutting short telephone conversations with his mother, the person best able to pull him off “the precipice,” and then switching his phone off. “And it was like, OK, that door has been closed,” he said. He described these evasions as “an act of defiance.” Why, he asked, would a person in that frame of mind agree to wear a sensor?

In the end, he said, what helped him turn the corner was people — a support group, which met weekly in a circle of chairs for sessions of dialectical behavioral therapy, and a network of friends, family and clinicians who know him well enough to recognize the warning signs in his behavior. When they do, he said, “we can generally ride that wave together.”

Ms. Cruz does not have a network like that. Last month, as temperatures in Massachusetts were dipping into the 40s, she was living in a tent with her boyfriend, huddling together under a blanket for warmth. In the morning, they waited until McDonald’s opened so they could dry out their sweatshirts and shoes and charge their devices.

She was faithful about taking her medications — five of them — but was scrambling to find a new therapist: The only one in her area who accepts Medicaid has an eight-month waiting list.

Last week, as the six-month clinical trial came to an end, she filled out her final questionnaire with a twinge of sorrow. She would miss the $1 she received for each response. And she would miss the sense that someone was watching her, even if it was someone faceless, at a distance, through a device.

“Honestly, it makes me feel a little bit safer to know that somebody cares enough to read that data every day, you know?” she said. “I’ll be kind of sad when it’s over.”

If you are having thoughts of suicide, call or text 988 to reach the National Suicide Prevention Lifeline, or go to SpeakingOfSuicide.com/resources for a list of additional resources.

