Doctors have failed them, say those with transgender regret
In a unique Zoom conference, individuals who detransitioned after gender-affirming medical care shared their experiences and said the medical community had failed them.
The forum was convened on what was dubbed #DetransitionAwarenessDay by Genspect, a parent-based organization that seeks to put the brakes on medical transitions for children and adolescents. The group has doubts about the gender-affirming care model supported by the World Professional Association for Transgender Health, the American Medical Association, the American Academy of Pediatrics, and other medical groups.
“Affirmative” medical care is defined as treatment with puberty blockers and cross-sex hormones for those with gender dysphoria to transition to the opposite sex and is often followed by gender reassignment surgery. However, there is growing concern among many doctors and other health care professionals as to whether this is, in fact, the best way to proceed, in particular for those under age 18, with several countries pulling back on medical treatment and instead emphasizing psychotherapy first.
The purpose of the second annual Genspect meeting was to shed light on the experiences of individuals who have detransitioned – those who identified as transgender and transitioned, but then decided to end their medical transition. People logged on from all over the United States, Canada, New Zealand, Australia, the United Kingdom, Germany, Spain, Chile, and Brazil, among other countries.
“This is a minority within a minority,” said Genspect advisor Stella O’Malley, adding that the first meeting in 2021 was held because “too many people were dismissing the stories of the detransitioners.” Ms. O’Malley is a psychotherapist, a clinical advisor to the Society for Evidence-Based Gender Medicine, and a founding member of the International Association of Therapists for Desisters and Detransitioners.
“It’s become blindingly obvious over the last year that ... ‘detrans’ is a huge part of the trans phenomenon,” said Ms. O’Malley, adding that detransitioners have been “undermined and dismissed.”
Laura Edwards-Leeper, PhD (@DrLauraEL), a prominent gender therapist who has recently expressed concern regarding adequate gatekeeping when treating youth with gender dysphoria, agreed.
She tweeted: “You simply can’t call yourself a legit gender provider if you don’t believe that detransitioners exist. As part of the informed consent process for transitioning, it is unethical to not discuss this possibility with young people.” Dr. Edwards-Leeper is professor emeritus at Pacific University in Hillsboro, Ore.
Speakers in the forum largely offered experiences, not data. They pointed out that there has been little to no study of detransition, but all testified that it is less rare than the transgender community has portrayed it to be.
Struggles with going back
“There are so many reasons why people detransition,” said Sinead Watson, aged 30, a Genspect advisor who transitioned from female to male, starting in 2015, and who decided to detransition in 2019. Citing a study by Lisa Littman, MD, MPH, published in 2021, Ms. Watson said the most common reasons for detransitioning were realizing that gender dysphoria was caused by other issues; internalized homophobia; and the unbearable nature of transphobia.
Ms. Watson said the hardest part of detransitioning was admitting to herself that her transition had been a mistake. “It’s embarrassing and you feel ashamed and guilty,” she said, adding that it may mean losing friends who now regard you as a “bigot,” while you’re also dealing with transition regret.
“It’s a living hell, especially when none of your therapists or counselors will listen to you,” she said. “Detransitioning isn’t fun.”
Carol (@sourpatches2077) said she knew for a year that her transition had been a mistake.
“The biggest part was I couldn’t tell my family,” said Carol, who identifies as a lesbian. “I put them through so much. It seems ridiculous to go: ‘Oops, I made this huge [expletive] mistake,’ ” she said, describing the moment she did tell them as “devastating.”
Grace (@hormonehangover) said she remembers finally hitting a moment of “undeniability” some years after transitioning. “I accept it, I’ve ruined my life, this is wrong,” she remembers thinking. “It was devastating, but I couldn’t deny it anymore.”
Don’t trust therapists
People experiencing feelings of unease “need a therapist who will listen to them,” said Ms. Watson. When she first detransitioned, her therapists treated her badly. “They just didn’t want to speak about detransition,” she said, adding that “it was like a kick in the stomach.”
Ms. Watson said she’d like to see more training about detransition, but also on “preventative techniques,” adding that many people transition who should not. “I don’t want more detransitioners – I want less.
“In order for that to happen, we need to treat people with gender dysphoria properly,” said Ms. Watson, adding that the affirmative model is “disgusting, and that’s what needs to change.”
“I would tell somebody to not go to a therapist,” said Carol. Identifying as a butch lesbian, she felt like her therapists had pushed her into transitioning to male. “The No. 1 thing not understood by the mental health professionals is that the vast majority of homosexuals were gender-nonconforming children.” She added that this is especially true of butch lesbians.
Therapists – and doctors – also need to acknowledge both the trauma of transition and detransition, she said.
Kaiser, where she had transitioned, offered her breast reconstruction. Carol said it felt demeaning. “Like you’re Mr. Potatohead: ‘Here, we can just ... put on some new parts and you’re good to go.’ ”
“Doctors are concretizing transient obsessions,” said Helena Kerschner (@lacroicsz), quoting a chatroom user.
Ms. Kerschner gave a presentation on “fandom”: becoming obsessed with a movie, book, TV show, musician, or celebrity, spending every waking hour chatting online or writing fan fiction, or attempting to interact with the celebrity online. It’s a fantasy-dominated world and “the vast majority” of participants are teenage girls who are “identifying as trans,” in part, because they are fed a community-reinforced message that it’s better to be a boy.
Therapists and physicians who help them transition “are harming them for life based on something they would have grown out of or overcome without the permanent damage,” Ms. Kerschner added.
Doctors ‘gaslighting’ people into believing that transition is the answer
A pervasive theme during the webinar was that many people are being misdiagnosed with gender dysphoria, which may not be resolved by medical transition.
Allie, a 22-year-old who stopped taking testosterone after 1½ years, said she initially started the transition to male when she gave up trying to figure out why she could not identify with, or befriend, women, and after a childhood and adolescence spent mostly in the company of boys and being more interested in traditionally male activities.
She endured sexual abuse as a teenager and her parents divorced while she was in high school. Allie also had multiple suicide attempts and many incidents of self-harm. When she decided to transition, at age 18, she went to a private clinic and received cross-sex hormones within a few months of her first and only 30-minute consultation. “There was no explorative therapy,” she said, adding that she was never given a formal diagnosis of gender dysphoria.
For the first year, she said she was “over the freaking moon” because she felt like it was the answer. But things started to unravel while she attended university, and she attempted suicide at age 20. A social worker at the school identified her symptoms – which had been the same since childhood – as autism. She then decided to cease her transition.
Another detransitioner, Laura Becker, said it took 5 years after her transition to recognize that she had undiagnosed PTSD from emotional and psychiatric abuse. Despite a history of substance abuse, self-harm, suicidal ideation, and other mental health issues, she was given testosterone and had a double mastectomy at age 20. She became fixated on gay men, which devolved into a methamphetamine- and crack-fueled relationship with a man she met on the gay dating platform Grindr.
“No one around me knew any better or knew how to help, including the medical professionals who performed the mastectomy and who casually signed off and administered my medical transition,” she said.
Once she was aware of her PTSD she started to detransition, which itself was traumatic, said Laura.
Limpida, aged 24, said he felt pushed into transitioning after seeking help at a Planned Parenthood clinic. He identified as trans at age 15 and spent years attempting to be a woman socially, but every step made him feel more miserable, he said. When he went to the clinic at age 21 to get estrogen, he said he felt like the staff was dismissive of his mental health concerns – including that he was suicidal, had substance abuse issues, and was severely depressed. He was told he was the “perfect candidate” for transitioning.
A year later, he said he felt worse. The nurse suggested he seek out surgery. After Limpida researched what was involved, he decided to detransition. He has since received an autism diagnosis.
Robin, also aged 24, said the idea of surgery had helped push him into detransitioning, which began in 2020 after 4 years of estrogen. He said he had always been gender nonconforming and knew he was gay at an early age. He believes that gender-nonconforming people are “gaslighted” into thinking that transitioning is the answer.
Lack of evidence-based, informed consent
Michelle Alleva, who stopped identifying as transgender in 2020 but had ceased testosterone 4 years earlier because of side effects, cited what she called a lack of evidence base for the effectiveness and safety of medical transitions.
“You need to have a really, really good evidence base in place if you’re going straight to an invasive treatment that is going to cause permanent changes to your body,” she said.
Access to medical transition used to involve more “gatekeeping” through mental health evaluations and other interventions, she said, but there has been a shift from treating what was considered a psychiatric issue to essentially affirming an identity.
“This shift was activist driven, not evidence based,” she emphasized.
Most studies showing satisfaction with transition involve only a few years of follow-up, she said. She added that the longest follow-up study of transition, published in 2011 and spanning 30 years, showed that the suicide rate 10-15 years after surgery was 20 times higher than that of the general population.
Studies of regret were primarily conducted before the rapid increase in the number of trans-identifying individuals, she said, which makes it hard to draw conclusions about pediatric transition. Getting estimates on this population is difficult because so many who detransition do not tell their clinicians, and many studies have short follow-up times or a high loss to follow-up.
Ms. Alleva also took issue with the notion that physicians were offering true informed consent. It is not possible to know whether someone is psychologically sound without a thorough mental health evaluation, she noted, and there are many unknowns with medical transition, including that many of the therapies are not approved for the uses being employed.
With regret on the rise, “we need professionals that are prepared for detransitioners,” said Ms. Alleva. “Some of us have lost trust in health care professionals as a result of our experience.”
“It’s a huge feeling of institutional betrayal,” said Grace.
A version of this article first appeared on Medscape.com.
In a unique Zoom conference,
The forum was convened on what was dubbed #DetransitionAwarenessDay by Genspect, a parent-based organization that seeks to put the brakes on medical transitions for children and adolescents. The group has doubts about the gender-affirming care model supported by the World Professional Association for Transgender Health, the American Medical Association, the American Academy of Pediatrics, and other medical groups.
“Affirmative” medical care is defined as treatment with puberty blockers and cross-sex hormones for those with gender dysphoria to transition to the opposite sex and is often followed by gender reassignment surgery. However, there is growing concern among many doctors and other health care professionals as to whether this is, in fact, the best way to proceed for those under aged 18, in particular, with several countries pulling back on medical treatment and instead emphasizing psychotherapy first.
The purpose of the second annual Genspect meeting was to shed light on the experiences of individuals who have detransitioned – those that identified as transgender and transitioned, but then decided to end their medical transition. People logged on from all over the United States, Canada, New Zealand, Australia, the United Kingdom, Germany, Spain, Chile, and Brazil, among other countries.
“This is a minority within a minority,” said Genspect advisor Stella O’Malley, adding that the first meeting in 2021 was held because “too many people were dismissing the stories of the detransitioners.” Ms. O’Malley is a psychotherapist, a clinical advisor to the Society for Evidence-Based Gender Medicine, and a founding member of the International Association of Therapists for Desisters and Detransitioners.
“It’s become blindingly obvious over the last year that ... ‘detrans’ is a huge part of the trans phenomenon,” said Ms. O’Malley, adding that detransitioners have been “undermined and dismissed.”
Laura Edwards-Leeper, PhD (@DrLauraEL), a prominent gender therapist who has recently expressed concern regarding adequate gatekeeping when treating youth with gender dysphoria, agreed.
She tweeted: “You simply can’t call yourself a legit gender provider if you don’t believe that detransitioners exist. As part of the informed consent process for transitioning, it is unethical to not discuss this possibility with young people.” Dr. Edwards-Leeper is professor emeritus at Pacific University in Hillsboro, Ore.
Speakers in the forum largely offered experiences, not data. They pointed out that there has been little to no study of detransition, but all testified that it was less rare than it has been portrayed by the transgender community.
Struggles with going back
“There are so many reasons why people detransition,” said Sinead Watson, aged 30, a Genspect advisor who transitioned from female to male, starting in 2015, and who decided to detransition in 2019. Citing a study by Lisa Littman, MD, MPH, published in 2021, Ms. Watson said the most common reasons for detransitioning were realizing that gender dysphoria was caused by other issues; internal homophobia; and the unbearable nature of transphobia.
Ms. Watson said the hardest part of detransitioning was admitting to herself that her transition had been a mistake. “It’s embarrassing and you feel ashamed and guilty,” she said, adding that it may mean losing friends who now regard you as a “bigot, while you’re also dealing with transition regret.”
“It’s a living hell, especially when none of your therapists or counselors will listen to you,” she said. “Detransitioning isn’t fun.”
Carol (@sourpatches2077) said she knew for a year that her transition had been a mistake.
“The biggest part was I couldn’t tell my family,” said Carol, who identifies as a lesbian. “I put them through so much. It seems ridiculous to go: ‘Oops, I made this huge [expletive] mistake,’ ” she said, describing the moment she did tell them as “devastating.”
Grace (@hormonehangover) said she remembers finally hitting a moment of “undeniability” some years after transitioning. “I accept it, I’ve ruined my life, this is wrong,” she remembers thinking. “It was devastating, but I couldn’t deny it anymore.”
Don’t trust therapists
People experiencing feelings of unease “need a therapist who will listen to them,” said Ms. Watson. When she first detransitioned, her therapists treated her badly. “They just didn’t want to speak about detransition,” she said, adding that “it was like a kick in the stomach.”
Ms. Watson said she’d like to see more training about detransition, but also on “preventative techniques,” adding that many people transition who should not. “I don’t want more detransitioners – I want less.
“In order for that to happen, we need to treat people with gender dysphoria properly,” said Ms. Watson, adding that the affirmative model is “disgusting, and that’s what needs to change.”
“I would tell somebody to not go to a therapist,” said Carol. Identifying as a butch lesbian, she felt like her therapists had pushed her into transitioning to male. “The No. 1 thing not understood by the mental health professionals is that the vast majority of homosexuals were gender-nonconforming children.” She added that this is especially true of butch lesbians.
Therapists – and doctors – also need to acknowledge both the trauma of transition and detransition, she said.
Kaiser, where she had transitioned, offered her breast reconstruction. Carol said it felt demeaning. “Like you’re Mr. Potatohead: ‘Here, we can just ... put on some new parts and you’re good to go.’ ”
“Doctors are concretizing transient obsessions,” said Helena Kerschner (@lacroicsz), quoting a chatroom user.
Ms. Kerschner gave a presentation on “fandom”: becoming obsessed with a movie, book, TV show, musician, or celebrity, spending every waking hour chatting online or writing fan fiction, or attempting to interact with the celebrity online. It’s a fantasy-dominated world and “the vast majority” of participants are teenage girls who are “identifying as trans,” in part, because they are fed a community-reinforced message that it’s better to be a boy.
Therapists and physicians who help them transition “are harming them for life based on something they would have grown out of or overcome without the permanent damage,” Ms. Kerschner added.
Doctors ‘gaslighting’ people into believing that transition is the answer
A pervasive theme during the webinar was that many people are being misdiagnosed with gender dysphoria, which may not be resolved by medical transition.
Allie, a 22-year-old who stopped taking testosterone after 1½ years, said she initially started the transition to male when she gave up trying to figure out why she could not identify with, or befriend, women, and after a childhood and adolescence spent mostly in the company of boys and being more interested in traditionally male activities.
She endured sexual abuse as a teenager and her parents divorced while she was in high school. Allie also had multiple suicide attempts and many incidents of self-harm. When she decided to transition, at age 18, she went to a private clinic and received cross-sex hormones within a few months of her first and only 30-minute consultation. “There was no explorative therapy,” she said, adding that she was never given a formal diagnosis of gender dysphoria.
For the first year, she said she was “over the freaking moon” because she felt like it was the answer. But things started to unravel while she attended university, and she attempted suicide attempt at age 20. A social worker at the school identified her symptoms – which had been the same since childhood – as autism. She then decided to cease her transition.
Another detransitioner, Laura Becker, said it took 5 years after her transition to recognize that she had undiagnosed PTSD from emotional and psychiatric abuse. Despite a history of substance abuse, self-harm, suicidal ideation, and other mental health issues, she was given testosterone and had a double mastectomy at age 20. She became fixated on gay men, which devolved into a methamphetamine- and crack-fueled relationship with a man she met on the gay dating platform Grindr.
“No one around me knew any better or knew how to help, including the medical professionals who performed the mastectomy and who casually signed off and administered my medical transition,” she said.
Once she was aware of her PTSD she started to detransition, which itself was traumatic, said Laura.
Limpida, aged 24, said he felt pushed into transitioning after seeking help at a Planned Parenthood clinic. He identified as trans at age 15 and spent years attempting to be a woman socially, but every step made him feel more miserable, he said. When he went to the clinic at age 21 to get estrogen, he said he felt like the staff was dismissive of his mental health concerns – including that he was suicidal, had substance abuse, and was severely depressed. He was told he was the “perfect candidate” for transitioning.
A year later, he said he felt worse. The nurse suggested he seek out surgery. After Limpida researched what was involved, he decided to detransition. He has since received an autism diagnosis.
Robin, also aged 24, said the idea of surgery had helped push him into detransitioning, which began in 2020 after 4 years of estrogen. He said he had always been gender nonconforming and knew he was gay at an early age. He believes that gender-nonconforming people are “gaslighted” into thinking that transitioning is the answer.
Lack of evidence-based, informed consent
Michelle Alleva, who stopped identifying as transgender in 2020 but had ceased testosterone 4 years earlier because of side effects, cited what she called a lack of evidence base for the effectiveness and safety of medical transitions.
“You need to have a really, really good evidence base in place if you’re going straight to an invasive treatment that is going to cause permanent changes to your body,” she said.
Access to medical transition used to involve more “gatekeeping” through mental health evaluations and other interventions, she said, but there has been a shift from treating what was considered a psychiatric issue to essentially affirming an identity.
“This shift was activist driven, not evidence based,” she emphasized.
Most studies showing satisfaction with transition only involve a few years of follow-up, she said. She added that the longest follow-up study of transition, published in 2011 and spanning 30 years, showed that the suicide rate 10-15 years post surgery was 20 times higher than the general population.
Studies of regret were primarily conducted before the rapid increase in the number of trans-identifying individuals, she said, which makes it hard to draw conclusions about pediatric transition. Getting estimates on this population is difficult because so many who detransition do not tell their clinicians, and many studies have short follow-up times or a high loss to follow-up.
Ms. Alleva also took issue with the notion that physicians were offering true informed consent, noting that it’s not possible to know if someone is psychologically sound if they haven’t had a thorough mental health evaluation and that there are so many unknowns with medical transition, including that many of the therapies are not approved for the uses being employed.
With regret on the rise, “we need professionals that are prepared for detransitioners,” said Ms. Alleva. “Some of us have lost trust in health care professionals as a result of our experience.”
“It’s a huge feeling of institutional betrayal,” said Grace.
A version of this article first appeared on Medscape.com.
In a unique Zoom conference,
The forum was convened on what was dubbed #DetransitionAwarenessDay by Genspect, a parent-based organization that seeks to put the brakes on medical transitions for children and adolescents. The group has doubts about the gender-affirming care model supported by the World Professional Association for Transgender Health, the American Medical Association, the American Academy of Pediatrics, and other medical groups.
“Affirmative” medical care is defined as treatment with puberty blockers and cross-sex hormones for those with gender dysphoria to transition to the opposite sex and is often followed by gender reassignment surgery. However, there is growing concern among many doctors and other health care professionals as to whether this is, in fact, the best way to proceed for those under aged 18, in particular, with several countries pulling back on medical treatment and instead emphasizing psychotherapy first.
The purpose of the second annual Genspect meeting was to shed light on the experiences of individuals who have detransitioned – those that identified as transgender and transitioned, but then decided to end their medical transition. People logged on from all over the United States, Canada, New Zealand, Australia, the United Kingdom, Germany, Spain, Chile, and Brazil, among other countries.
“This is a minority within a minority,” said Genspect advisor Stella O’Malley, adding that the first meeting in 2021 was held because “too many people were dismissing the stories of the detransitioners.” Ms. O’Malley is a psychotherapist, a clinical advisor to the Society for Evidence-Based Gender Medicine, and a founding member of the International Association of Therapists for Desisters and Detransitioners.
“It’s become blindingly obvious over the last year that ... ‘detrans’ is a huge part of the trans phenomenon,” said Ms. O’Malley, adding that detransitioners have been “undermined and dismissed.”
Laura Edwards-Leeper, PhD (@DrLauraEL), a prominent gender therapist who has recently expressed concern regarding adequate gatekeeping when treating youth with gender dysphoria, agreed.
She tweeted: “You simply can’t call yourself a legit gender provider if you don’t believe that detransitioners exist. As part of the informed consent process for transitioning, it is unethical to not discuss this possibility with young people.” Dr. Edwards-Leeper is professor emeritus at Pacific University in Hillsboro, Ore.
Speakers in the forum largely offered experiences, not data. They pointed out that there has been little to no study of detransition, but all testified that it was less rare than it has been portrayed by the transgender community.
Struggles with going back
“There are so many reasons why people detransition,” said Sinead Watson, aged 30, a Genspect advisor who transitioned from female to male, starting in 2015, and who decided to detransition in 2019. Citing a study by Lisa Littman, MD, MPH, published in 2021, Ms. Watson said the most common reasons for detransitioning were realizing that gender dysphoria was caused by other issues; internal homophobia; and the unbearable nature of transphobia.
Ms. Watson said the hardest part of detransitioning was admitting to herself that her transition had been a mistake. “It’s embarrassing and you feel ashamed and guilty,” she said, adding that it may mean losing friends who now regard you as a “bigot, while you’re also dealing with transition regret.”
“It’s a living hell, especially when none of your therapists or counselors will listen to you,” she said. “Detransitioning isn’t fun.”
Carol (@sourpatches2077) said she knew for a year that her transition had been a mistake.
“The biggest part was I couldn’t tell my family,” said Carol, who identifies as a lesbian. “I put them through so much. It seems ridiculous to go: ‘Oops, I made this huge [expletive] mistake,’ ” she said, describing the moment she did tell them as “devastating.”
Grace (@hormonehangover) said she remembers finally hitting a moment of “undeniability” some years after transitioning. “I accept it, I’ve ruined my life, this is wrong,” she remembers thinking. “It was devastating, but I couldn’t deny it anymore.”
Don’t trust therapists
People experiencing feelings of unease “need a therapist who will listen to them,” said Ms. Watson. When she first detransitioned, her therapists treated her badly. “They just didn’t want to speak about detransition,” she said, adding that “it was like a kick in the stomach.”
Ms. Watson said she’d like to see more training about detransition, but also on “preventative techniques,” adding that many people transition who should not. “I don’t want more detransitioners – I want less.
“In order for that to happen, we need to treat people with gender dysphoria properly,” said Ms. Watson, adding that the affirmative model is “disgusting, and that’s what needs to change.”
“I would tell somebody to not go to a therapist,” said Carol. Identifying as a butch lesbian, she felt like her therapists had pushed her into transitioning to male. “The No. 1 thing not understood by the mental health professionals is that the vast majority of homosexuals were gender-nonconforming children.” She added that this is especially true of butch lesbians.
Therapists – and doctors – also need to acknowledge both the trauma of transition and detransition, she said.
Kaiser, where she had transitioned, offered her breast reconstruction. Carol said it felt demeaning. “Like you’re Mr. Potatohead: ‘Here, we can just ... put on some new parts and you’re good to go.’ ”
“Doctors are concretizing transient obsessions,” said Helena Kerschner (@lacroicsz), quoting a chatroom user.
Ms. Kerschner gave a presentation on “fandom”: becoming obsessed with a movie, book, TV show, musician, or celebrity, spending every waking hour chatting online or writing fan fiction, or attempting to interact with the celebrity online. It’s a fantasy-dominated world and “the vast majority” of participants are teenage girls who are “identifying as trans,” in part, because they are fed a community-reinforced message that it’s better to be a boy.
Therapists and physicians who help them transition “are harming them for life based on something they would have grown out of or overcome without the permanent damage,” Ms. Kerschner added.
Doctors ‘gaslighting’ people into believing that transition is the answer
A pervasive theme during the webinar was that many people are being misdiagnosed with gender dysphoria, which may not be resolved by medical transition.
Allie, a 22-year-old who stopped taking testosterone after 1½ years, said she initially started the transition to male when she gave up trying to figure out why she could not identify with, or befriend, women, and after a childhood and adolescence spent mostly in the company of boys and being more interested in traditionally male activities.
She endured sexual abuse as a teenager and her parents divorced while she was in high school. Allie also had multiple suicide attempts and many incidents of self-harm. When she decided to transition, at age 18, she went to a private clinic and received cross-sex hormones within a few months of her first and only 30-minute consultation. “There was no explorative therapy,” she said, adding that she was never given a formal diagnosis of gender dysphoria.
For the first year, she said she was “over the freaking moon” because she felt like it was the answer. But things started to unravel while she attended university, and she attempted suicide attempt at age 20. A social worker at the school identified her symptoms – which had been the same since childhood – as autism. She then decided to cease her transition.
Another detransitioner, Laura Becker, said it took 5 years after her transition to recognize that she had undiagnosed PTSD from emotional and psychiatric abuse. Despite a history of substance abuse, self-harm, suicidal ideation, and other mental health issues, she was given testosterone and had a double mastectomy at age 20. She became fixated on gay men, which devolved into a methamphetamine- and crack-fueled relationship with a man she met on the gay dating platform Grindr.
“No one around me knew any better or knew how to help, including the medical professionals who performed the mastectomy and who casually signed off and administered my medical transition,” she said.
Once she was aware of her PTSD she started to detransition, which itself was traumatic, said Laura.
Limpida, aged 24, said he felt pushed into transitioning after seeking help at a Planned Parenthood clinic. He identified as trans at age 15 and spent years attempting to be a woman socially, but every step made him feel more miserable, he said. When he went to the clinic at age 21 to get estrogen, he said he felt like the staff was dismissive of his mental health concerns – including that he was suicidal, had substance abuse, and was severely depressed. He was told he was the “perfect candidate” for transitioning.
A year later, he said he felt worse. The nurse suggested he seek out surgery. After Limpida researched what was involved, he decided to detransition. He has since received an autism diagnosis.
Robin, also aged 24, said the idea of surgery had helped push him into detransitioning, which began in 2020 after 4 years of estrogen. He said he had always been gender nonconforming and knew he was gay at an early age. He believes that gender-nonconforming people are “gaslighted” into thinking that transitioning is the answer.
Lack of evidence-based, informed consent
Michelle Alleva, who stopped identifying as transgender in 2020 but had ceased testosterone 4 years earlier because of side effects, cited what she called a lack of evidence base for the effectiveness and safety of medical transitions.
“You need to have a really, really good evidence base in place if you’re going straight to an invasive treatment that is going to cause permanent changes to your body,” she said.
Access to medical transition used to involve more “gatekeeping” through mental health evaluations and other interventions, she said, but there has been a shift from treating what was considered a psychiatric issue to essentially affirming an identity.
“This shift was activist driven, not evidence based,” she emphasized.
Most studies showing satisfaction with transition involve only a few years of follow-up, she said. She added that the longest follow-up study of transition, published in 2011 and spanning 30 years, showed that the suicide rate 10-15 years after surgery was 20 times that of the general population.
Studies of regret were primarily conducted before the rapid increase in the number of trans-identifying individuals, she said, which makes it hard to draw conclusions about pediatric transition. Getting estimates on this population is difficult because so many who detransition do not tell their clinicians, and many studies have short follow-up times or a high loss to follow-up.
Ms. Alleva also took issue with the notion that physicians were offering true informed consent. It is not possible to know whether someone is psychologically sound without a thorough mental health evaluation, she said, and medical transition carries many unknowns, including the fact that many of the therapies are not approved for the uses being employed.
With regret on the rise, “we need professionals that are prepared for detransitioners,” said Ms. Alleva. “Some of us have lost trust in health care professionals as a result of our experience.”
“It’s a huge feeling of institutional betrayal,” said Grace.
A version of this article first appeared on Medscape.com.
French fries vs. almonds every day for a month: What changes?
Eat french fries every day for a month? Sure, as long as it’s for science.
That’s exactly what 107 people did in a scientific study, while 58 others ate a daily serving of almonds with the same number of calories.
At the end of the study, the researchers found no significant differences between the groups in people’s total amount of fat or their fasting glucose measures, according to the study, published Feb. 18 in the American Journal of Clinical Nutrition.
The french fry eaters gained a little more weight, but it was not statistically significant. The people who ate french fries gained 0.49 kilograms (just over a pound), vs. about a tenth of a kilogram (about one-fifth of a pound) in the group of people who ate almonds.
“The take-home is if you like almonds, eat some almonds. If you like potatoes, eat some potatoes, but don’t overeat either,” said study leader David B. Allison, PhD, a professor at Indiana University’s School of Public Health in Bloomington. “It’s probably good to have a little bit of each – each has some unique advantages in terms of nutrition.”
“This study confirms what registered dietitian nutritionists already know – all foods can fit. We can eat almonds, french fries, kale, and cookies,” said Melissa Majumdar, a registered dietitian and certified specialist in obesity and weight management at Emory University Hospital Midtown in Atlanta. “The consumption of one food or the avoidance of another does not make a healthy diet.”
At the same time, people should not interpret the results to mean it’s OK to eat french fries all day, every day. “We know that while potatoes are nutrient dense, the frying process reduces the nutritional value,” Ms. Majumdar said.
“Because french fries are often consumed alongside other nutrient-poor or high-fat foods, they should not be consumed daily but can fit into an overall balanced diet,” she added.
Would you like fries with that?
The researchers compared french fries to almonds because almonds are known for their positive effects on energy balance and body composition and for their low glycemic index. The research was partly funded by the Alliance for Potato Research and Education.
French fries are an incredibly popular food in the United States. According to an August 2021 post on the food website Mashed, Americans eat an average of 30 pounds of french fries each year.
Although consumption of almonds is increasing, Americans eat far less in volume each year than they do fries – an estimated 2.4 pounds of almonds per person, according to August 2021 figures from the Almond Board of California.
Dr. Allison and colleagues recruited 180 healthy adults for the study. Their average age was 30, and about two-thirds were women.
They randomly assigned 60 people to add about a medium serving of plain french fries (Tater Pals Ovenable Crinkle Cut Fries, Simplot Foods) to their diet. Another 60 people were assigned to the same amount of Tater Pals fries with herbs (oregano, basil, garlic, onion, and rosemary), and another 60 people ate Wonderful brand roasted and salted almonds.
Investigators told people to add either the potatoes or nuts to their diet every day for a month and gave no further instructions.
After some people dropped out of the study, results were based on 55 who ate regular french fries, 52 who ate french fries with herbs and spices, and 58 who ate the nuts.
The researchers scanned people to detect any changes in fat mass. They also measured changes in body weight, carbohydrate metabolism, and fasting blood glucose and insulin.
Key findings
Changes in total body fat mass were not significantly different between the french fry groups and the almond group.
In terms of glycemic control, eating french fries for a month “is no better or worse than consuming a caloric equivalent of nuts,” the researchers noted.
Similarly, the change in total fat mass did not differ significantly among the three treatment groups.
Adding the herb and spice mix to the french fries did not make a significant difference in glycemic control, contrary to what the researchers thought might happen.
And fasting glucose, insulin, and HbA1c levels did not differ significantly between the combined french fry groups and the almond group. When the three groups were compared, however, the almond group had a lower insulin response than the plain french fry group.
Many different things could be explored in future research, said study coauthor Rebecca Hanson, a registered dietitian nutritionist and research study coordinator at the University of Alabama at Birmingham. “People were not told to change their exercise or diet, so there are so many different variables,” she said. Repeating the research in people with diabetes is another possibility going forward.
The researchers acknowledged that 30 days may not have been long enough to show a significant difference. But they also noted that many previous studies were observational, whereas theirs was a randomized controlled trial, considered a more robust study design.
Dr. Allison, the senior author, emphasized that this is just one study. “No one study has all the answers.
“I don’t want to tell you our results are the be all and end all or that we’ve now learned everything there is to learn about potatoes and almonds,” he said.
“Our study shows for the variables we looked at ... we did not see important, discernible differences,” he said. “That doesn’t mean if you ate 500 potatoes a day or 500 kilograms of almonds it would be the same. But at these modest levels, it doesn’t seem to make much difference.”
The study was funded by grants from the National Institutes of Health and from the Alliance for Potato Research and Education.
Asked if the industry support should be a concern, Ms. Majumdar said, “Funding from a specific food board does not necessarily dilute the results of a well-designed study. It’s not uncommon for a funding source to come from a food board that may benefit from the findings. Research money has to come from somewhere.
“This study has reputable researchers, some of the best in the field,” she said.
The U.S. produces the most almonds in the world, and California is the only state where almonds are grown commercially. Asked for the almond industry’s take on the findings, “We don’t have a comment,” said Rick Kushman, a spokesman for the Almond Board of California.
A version of this article first appeared on WebMD.com.
FROM AMERICAN JOURNAL OF CLINICAL NUTRITION
Is cancer testing going to the dogs? Nope, ants
The oncologist’s new best friend
We know that dogs have very sensitive noses. They can track criminals and missing persons and sniff out drugs and bombs. They can even detect cancer cells … after months of training.
And then there are ants.
Cancer cells produce volatile organic compounds (VOCs), which can be sniffed out by dogs and other animals with sufficiently sophisticated olfactory senses. A group of French investigators decided to find out if Formica fusca is such an animal.
First, they placed breast cancer cells and healthy cells in a petri dish. The sample of cancer cells, however, included a sugary treat. “Over successive trials, the ants got quicker and quicker at finding the treat, indicating that they had learned to recognize the VOCs produced by the cancerous cells, using these as a beacon to guide their way to the sugary delight,” according to IFL Science.
When the researchers removed the treat, the ants still went straight for the cancer cells. Then they removed the healthy cells and substituted another type of breast cancer cell, with just one type getting the treat. They went for the cancer cells with the treat, “indicating that they were capable of distinguishing between the different cancer types based on the unique pattern of VOCs emitted by each one,” IFL Science explained.
It’s just another chapter in the eternal struggle between dogs and ants. Dogs need months of training to learn to detect cancer cells; ants can do it in 30 minutes. Over the course of a dog’s training, Fido eats more food than 10,000 ants combined. (Okay, we’re guessing here, but it’s got to be a pretty big number, right?)
Then there’s the warm and fuzzy factor. Just look at that picture. Who wouldn’t want a cutie like that curling up in the bed next to you?
Console War II: Battle of the Twitter users
Video games can be a lot of fun, provided you’re not playing something like Rock Simulator. Or Surgeon Simulator. Or Surgeon Simulator 2. Yes, those are all real games. But calling yourself a video gamer invites a certain negative connotation, and nowhere can that be better exemplified than the increasingly ridiculous console war.
For those who don’t know their video game history, back in the early ’90s Nintendo and Sega were the main video game console makers. Nintendo had Mario, Sega had Sonic, and everyone had an opinion on which was best. With Sega now but a shell of its former self and Nintendo viewed as too “casual” for the true gaming connoisseur, today’s battle pits PlayStation against Xbox, and fans of both consoles spend their time trying to one-up each other in increasingly silly online arguments.
That brings us nicely to a Twitter user named “Shreeveera,” who is very vocal about his love of PlayStation and hatred of the Xbox. Importantly, for LOTME purposes, Shreeveera identified himself as a doctor on his profile, and in the middle of an argument, Xbox enthusiasts called his credentials into question.
At this point, most people would recognize that there are very few noteworthy console-exclusive video games in today’s world and that any argument about consoles essentially comes down to which console design you like or which company you find less distasteful, and they would step away from the Twitter argument. Shreeveera is not most people, and he decided the next logical move was to post a video of himself and an anesthetized patient about to undergo a laparoscopic cholecystectomy.
This move did prove that he was indeed a doctor, but the ethics of posting such a video with a patient in the room are dubious at best. Since Shreeveera also listed the hospital he worked at, numerous Twitter users review-bombed the hospital with one-star reviews. Shreeveera’s fate is unknown, but he did take down the video and removed “doctor by profession” from his profile. He also made a second video asking Twitter to stop trying to ruin his life. We’re sure that’ll go well. Twitter is known for being completely fair and reasonable.
Use your words to gain power
We live in the age of the emoji. The use of emojis in texts and emails is basically the new shorthand. It’s a fun and easy way to chat with people close to us, but a new study shows that it doesn’t help in a business setting. In fact, it may do a little damage.
The use of images such as emojis in communication or logos can make a person seem less powerful than someone who opts for written words, according to Elinor Amit, PhD, of Tel Aviv University and associates.
Participants in their study were asked to imagine shopping with a person wearing a T-shirt. Half were then shown the logo of the Red Sox baseball team and half saw the words “Red Sox.” In another scenario, they were asked to imagine attending a retreat of a company called Lotus. Then half were shown an employee wearing a shirt with an image of a lotus flower and half saw the verbal logo “Lotus.” In both scenarios, the individuals wearing shirts with images were seen as less powerful than the people who wore shirts with words on them.
Why is that? In a Eurekalert statement, Dr. Amit said that “visual messages are often interpreted as a signal for desire for social proximity.” In a world with COVID-19, that could give anyone pause.
That desire for more social proximity, in turn, suggests a loss of power, because research shows that people who want to be around others more tend to be less powerful than those who don’t.
With the reduced social proximity we have these days, we may want to keep things cool and lighthearted, especially in work emails with people we’ve never met. It may be, however, that using your words to say thank you in the multitude of emails you respond to on a regular basis is better than that thumbs-up emoji. Nobody will think less of you.
Should daylight saving time still be a thing?
This past week, we experienced the spring-forward portion of daylight saving time, which took an hour of sleep away from us all. Some of us may still be struggling to find our footing with the time change, but at least it’s still sunny out at 7 p.m. For those who don’t really see the point of changing the clocks twice a year, there is now real momentum toward ending the practice.
Sen. Marco Rubio, sponsor of a bill to make the time change permanent, put it simply: “If we can get this passed, we don’t have to do this stupidity anymore.” Message received, apparently, since the measure just passed unanimously in the Senate.
It’s not clear if President Biden will approve it, though, because there’s a lot that comes into play: economic needs, seasonal depression, and safety.
“I know this is not the most important issue confronting America, but it’s one of those issues where there’s a lot of agreement,” Sen. Rubio said.
Not total agreement, though. The National Association of Convenience Stores is opposed to the bill, and Reuters noted that one witness at a recent hearing said the time change “is like living in the wrong time zone for almost eight months out of the year.”
Many people, however, seem to be leaning toward the permanent spring-forward as it gives businesses a longer window to provide entertainment in the evenings and kids are able to play outside longer after school.
Honestly, we’re leaning toward whichever one can reduce seasonal depression.
Why is that? In a Eurekalert statement, Dr. Amit said that “visual messages are often interpreted as a signal for desire for social proximity.” In a world with COVID-19, that could give anyone pause.
That desire for more social proximity, in turn, equals a suggested loss of power because research shows that people who want to be around other people more are less powerful than people who don’t.
With the reduced social proximity we have these days, we may want to keep things cool and lighthearted, especially in work emails with people who we’ve never met. It may be, however, that using your words to say thank you in the multitude of emails you respond to on a regular basis is better than that thumbs-up emoji. Nobody will think less of you.
Should Daylight Savings Time still be a thing?
This past week, we just experienced the spring-forward portion of Daylight Savings Time, which took an hour of sleep away from us all. Some of us may still be struggling to find our footing with the time change, but at least it’s still sunny out at 7 pm. For those who don’t really see the point of changing the clocks twice a year, there are actually some good reasons to do so.
Sen. Marco Rubio, sponsor of a bill to make the time change permanent, put it simply: “If we can get this passed, we don’t have to do this stupidity anymore.” Message received, apparently, since the measure just passed unanimously in the Senate.
It’s not clear if President Biden will approve it, though, because there’s a lot that comes into play: economic needs, seasonal depression, and safety.
“I know this is not the most important issue confronting America, but it’s one of those issues where there’s a lot of agreement,” Sen. Rubio said.
Not total agreement, though. The National Association of Convenience Stores is opposed to the bill, and Reuters noted that one witness at a recent hearing said the time change “is like living in the wrong time zone for almost eight months out of the year.”
Many people, however, seem to be leaning toward the permanent spring-forward as it gives businesses a longer window to provide entertainment in the evenings and kids are able to play outside longer after school.
Honestly, we’re leaning toward whichever one can reduce seasonal depression.
The oncologist’s new best friend
We know that dogs have very sensitive noses. They can track criminals and missing persons and sniff out drugs and bombs. They can even detect cancer cells … after months of training.
And then there are ants.
Cancer cells produce volatile organic compounds (VOCs), which can be sniffed out by dogs and other animals with sufficiently sophisticated olfactory senses. A group of French investigators decided to find out if Formica fusca is such an animal.
First, they placed breast cancer cells and healthy cells in a petri dish. The sample of cancer cells, however, included a sugary treat. “Over successive trials, the ants got quicker and quicker at finding the treat, indicating that they had learned to recognize the VOCs produced by the cancerous cells, using these as a beacon to guide their way to the sugary delight,” according to IFL Science.
When the researchers removed the treat, the ants still went straight for the cancer cells. Then they removed the healthy cells and substituted another type of breast cancer cell, with just one type getting the treat. They went for the cancer cells with the treat, “indicating that they were capable of distinguishing between the different cancer types based on the unique pattern of VOCs emitted by each one,” IFL Science explained.
It’s just another chapter in the eternal struggle between dogs and ants. Dogs need months of training to learn to detect cancer cells; ants can do it in 30 minutes. Over the course of a dog’s training, Fido eats more food than 10,000 ants combined. (Okay, we’re guessing here, but it’s got to be a pretty big number, right?)
Then there’s the warm and fuzzy factor. Just look at that picture. Who wouldn’t want a cutie like that curling up in the bed next to you?
Console War II: Battle of the Twitter users
Video games can be a lot of fun, provided you’re not playing something like Rock Simulator. Or Surgeon Simulator. Or Surgeon Simulator 2. Yes, those are all real games. But calling yourself a video gamer invites a certain negative connotation, and nowhere is that better exemplified than in the increasingly ridiculous console war.
For those who don’t know their video game history, back in the early ’90s, Nintendo and Sega were the main video game console makers. Nintendo had Mario, Sega had Sonic, and everyone had an opinion on which was best. With Sega now but a shell of its former self and Nintendo viewed as too “casual” for the true gaming connoisseur, today’s battle pits PlayStation against Xbox, and fans of both consoles spend their time trying to one-up each other in increasingly silly online arguments.
That brings us nicely to a Twitter user named “Shreeveera,” who is very vocal about his love of PlayStation and hatred of the Xbox. Importantly, for LOTME purposes, Shreeveera identified himself as a doctor on his profile, and in the middle of an argument, Xbox enthusiasts called his credentials into question.
At this point, most people would recognize that there are very few noteworthy console-exclusive video games in today’s world and that any argument about consoles essentially comes down to which console design you like or which company you find less distasteful, and they would step away from the Twitter argument. Shreeveera is not most people, and he decided the next logical move was to post a video of himself and an anesthetized patient about to undergo a laparoscopic cholecystectomy.
This move did prove that he was indeed a doctor, but the ethics of posting such a video with a patient in the room are dubious at best. Since Shreeveera also listed the hospital he worked at, numerous Twitter users review-bombed the hospital with one-star reviews. Shreeveera’s fate is unknown, but he did take down the video and removed “doctor by profession” from his profile. He also made a second video asking Twitter to stop trying to ruin his life. We’re sure that’ll go well. Twitter is known for being completely fair and reasonable.
Use your words to gain power
We live in the age of the emoji. The use of emojis in texts and emails is basically the new shorthand. It’s a fun and easy way to chat with people close to us, but a new study shows that it doesn’t help in a business setting. In fact, it may do a little damage.
The use of images such as emojis in communication or logos can make a person seem less powerful than someone who opts for written words, according to Elinor Amit, PhD, of Tel Aviv University and associates.
Participants in their study were asked to imagine shopping with a person wearing a T-shirt. Half were then shown the logo of the Red Sox baseball team and half saw the words “Red Sox.” In another scenario, they were asked to imagine attending a retreat of a company called Lotus. Then half were shown an employee wearing a shirt with an image of a lotus flower and half saw the verbal logo “Lotus.” In both scenarios, the individuals wearing shirts with images were seen as less powerful than the people who wore shirts with words on them.
Why is that? In a EurekAlert statement, Dr. Amit said that “visual messages are often interpreted as a signal for desire for social proximity.” In a world with COVID-19, that could give anyone pause.
That desire for more social proximity, in turn, equals a suggested loss of power because research shows that people who want to be around other people more are less powerful than people who don’t.
With the reduced social proximity we have these days, we may want to keep things cool and lighthearted, especially in work emails with people who we’ve never met. It may be, however, that using your words to say thank you in the multitude of emails you respond to on a regular basis is better than that thumbs-up emoji. Nobody will think less of you.
Should daylight saving time still be a thing?
This past week, we experienced the spring-forward portion of daylight saving time, which took an hour of sleep away from us all. Some of us may still be struggling to find our footing with the time change, but at least it’s still sunny out at 7 p.m. For those who don’t really see the point of changing the clocks twice a year, there are actually some good reasons to stop.
Sen. Marco Rubio, sponsor of a bill to make the time change permanent, put it simply: “If we can get this passed, we don’t have to do this stupidity anymore.” Message received, apparently, since the measure just passed unanimously in the Senate.
It’s not clear if President Biden will approve it, though, because there’s a lot that comes into play: economic needs, seasonal depression, and safety.
“I know this is not the most important issue confronting America, but it’s one of those issues where there’s a lot of agreement,” Sen. Rubio said.
Not total agreement, though. The National Association of Convenience Stores is opposed to the bill, and Reuters noted that one witness at a recent hearing said the time change “is like living in the wrong time zone for almost eight months out of the year.”
Many people, however, seem to be leaning toward the permanent spring-forward as it gives businesses a longer window to provide entertainment in the evenings and kids are able to play outside longer after school.
Honestly, we’re leaning toward whichever one can reduce seasonal depression.
Characterizing Opioid Response in Older Veterans in the Post-Acute Setting
Older adults admitted to post-acute settings frequently have complex rehabilitation needs and multimorbidity, which predisposes them to pain management challenges.1,2 The prevalence of pain in post-acute and long-term care is as high as 65%, and opioid use is common among this population with 1 in 7 residents receiving long-term opioids.3,4
Opioids that do not adequately control pain represent a missed opportunity for deprescribing. There is limited evidence regarding efficacy of long-term opioid use (> 90 days) for improving pain and physical functioning.5 In addition, long-term opioid use carries significant risks, including overdose-related death, dependence, and increased emergency department visits.5 These risks are likely to be pronounced among veterans receiving post-acute care (PAC) who are older, have comorbid psychiatric disorders, are prescribed several centrally acting medications, and experience substance use disorder (SUD).6
Older adults are at increased risk for opioid toxicity because of reduced drug clearance and a smaller therapeutic window.5 Centers for Disease Control and Prevention (CDC) guidelines recommend frequently assessing patients for benefit in terms of sustained improvement in pain as well as physical function.5 If improvements in pain and physical function are minimal, tapering opioids and emphasizing nonopioid pain management strategies should be considered. Some patients will struggle with this approach. Directly asking patients about the effectiveness of opioids is challenging: opioid users with chronic pain frequently report problems with opioids even as they describe them as indispensable for pain management.7,8
Earlier studies have assessed patient perspectives regarding opioid difficulties as well as their helpfulness, which could introduce recall bias. Patient-level factors that contribute to a global sense of distress, in addition to the presence of painful physical conditions, also could contribute to patients requesting opioids without experiencing adequate pain relief. One study in veterans residing in PAC facilities found that individuals with depression, posttraumatic stress disorder (PTSD), and SUD were more likely to report pain and receive scheduled analgesics; this effect persisted in individuals with PTSD even after adjusting for demographic and functional status variables.9 The study looked only at analgesics as a class and did not examine opioids specifically. It is possible that distressed individuals, such as those with uncontrolled depression, PTSD, and SUD, might be more likely to report high pain levels and receive opioids with inadequate benefit and increased risk. Identifying the primary condition causing distress and targeting treatment to that condition (ie, depression) is preferable to escalating opioids in an attempt to treat pain in the context of nonresponse. Assessing an individual’s aggregate response to opioids rather than relying on a single self-report is a useful addition to current pain management strategies.
The goal of this study was to pilot a method of identifying opioid-nonresponsive pain using administrative data, measure its prevalence in a PAC population of veterans, and explore clinical and demographic correlates with particular attention to covariates that could indicate high levels of psychological and physical distress. Identifying pain that is poorly responsive to opioids would give clinicians the opportunity to avoid or minimize opioid use and prioritize treatments that are likely to improve the resident’s pain, quality of life, and physical function while minimizing recall bias. We hypothesized that pain that responds poorly to opioids would be prevalent among veterans residing in a PAC unit. We considered that veterans with pain poorly responsive to opioids would be more likely to have factors that would place them at increased risk of adverse effects, such as comorbid psychiatric conditions, history of SUD, and multimorbidity, providing further rationale for clinical equipoise in that population.6
Methods
This was a small, retrospective cross-sectional study using administrative data and chart review. The study included veterans who were administered opioids while residing in a single US Department of Veterans Affairs (VA) community living center PAC (CLC-PAC) unit during at least 1 of 4 nonconsecutive, random days in 2016 and 2017. The study was approved by the institutional review board of the Ann Arbor VA Health System (#2017-1034) as part of a larger project involving models of care in vulnerable older veterans.
Inclusion criteria were the presence of at least moderate pain (≥ 4 on a 0 to 10 scale); receiving ≥ 2 doses of opioids ordered as needed over the prespecified 24-hour observation period; and having ≥ 2 pre- and postopioid administration pain scores during the observation period. Veterans who did not meet these criteria were excluded. At the time of initial sample selection, we did not capture information related to coprescribed analgesics, including a standing order of opioids. To obtain the sample, we initially characterized all veterans residing in the CLC-PAC unit on the 4 days as those reporting at least moderate pain (≥ 4) and those who reported no or mild pain (< 4). The cut point of 4 of 10 is consistent with moderate pain based on earlier work showing higher likelihood of pain that interferes with physical function.10 We then restricted the sample to veterans who received ≥ 2 doses of opioids ordered as needed for pain and had ≥ 2 pre- and postopioid administration numeric pain rating scores during the 24-hour observation period. This methodology was chosen to enrich our sample for those who received opioids regularly for ongoing pain. Opioids were defined as full µ-opioid receptor agonists and included hydrocodone, oxycodone, morphine, hydromorphone, fentanyl, tramadol, and methadone.
Medication administration data were obtained from the VA corporate data warehouse, which houses all barcode medication administration data collected at the point of care. The dataset includes pain scores gathered by nursing staff before and after administering an as-needed analgesic. The corporate data warehouse records the date/time of pain scores and the analgesic name, dosage, formulation, and date/time of administration. Using a standardized assessment form developed iteratively, we calculated opioid dosage in oral morphine equivalents (OME) for comparison.11,12 All abstracted data were reexamined for accuracy. Data initially were collected in an anonymized, blinded fashion. Participants were then unblinded for chart review. Initial data were captured in resident-days instead of unique residents because an individual resident might have been admitted on multiple observation days. We were primarily interested in how pain responded to opioids administered in response to resident request; therefore, we did not examine response to opioids that were continuously ordered (ie, scheduled). We did consider scheduled opioids when calculating total daily opioid dosage during the chart review.
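To make the dosage-normalization step concrete, the following is a minimal sketch of an OME conversion. The conversion factors shown are approximate values from commonly published equianalgesic tables, not necessarily the exact factors used in this study (references 11 and 12); fentanyl and methadone are omitted because their conversions are formulation- and dose-dependent.

```python
# Illustrative sketch of converting oral opioid doses to oral morphine
# equivalents (OME). Factors are approximate values from commonly
# published equianalgesic tables, NOT the study's exact factors.
# Fentanyl and methadone are omitted: their conversions are
# formulation- and dose-dependent.
OME_FACTORS = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "hydromorphone": 4.0,
    "tramadol": 0.1,
}

def to_ome(drug: str, dose_mg: float) -> float:
    """Convert a single oral dose (mg) to oral morphine equivalents."""
    return dose_mg * OME_FACTORS[drug]

# Total daily OME for a hypothetical day: oxycodone 10 mg + tramadol 50 mg
daily_ome = to_ome("oxycodone", 10) + to_ome("tramadol", 50)
```

Summing per-dose OME values in this way allows scheduled and as-needed doses to be combined into a single total daily dosage for comparison across residents.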
Outcome of Interest
The primary outcome of interest was an individual’s response to as-needed opioids, which we defined as change in the pain score after opioid administration. The pre-opioid pain score was the score that immediately preceded administration of an as-needed opioid. The postopioid administration pain score was the first score after opioid administration if obtained within 3 hours of administration. Scores collected > 3 hours after opioid administration were excluded because they no longer accurately reflected the impact of the opioid due to the short half-lives of the opioids administered. Observations were excluded if an opioid was administered without a recorded pain score; this occurred once for 6 individuals. Observations also were excluded if an opioid was administered but the data were captured on the following day (outside of the 24-hour window); this occurred once for 3 individuals.
We calculated a ∆ score by subtracting the postopioid pain rating score from the pre-opioid score. Individual ∆ scores were then averaged over the 24-hour period (range, 2-5 opioid doses). For example, if an individual reported a pre-opioid pain score of 10 and a postopioid pain score of 2, the ∆ was recorded as 8. If the individual’s next pre-opioid score was 10 and postopioid score was 6, the ∆ was recorded as 4. ∆ scores over the 24-hour period were averaged together to determine that individual’s response to as-needed opioids. In the previous example, the mean ∆ score is 6. Lower mean ∆ scores reflect decreased responsiveness to opioids’ analgesic effect.
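The ∆-score logic described above, including the 3-hour exclusion window, can be sketched in a few lines. The tuple layout used here is an assumption for illustration only; the study drew these fields from barcode medication administration data.

```python
from datetime import datetime, timedelta

# Sketch of the mean Δ-score calculation. Each dose is represented as
# (pre_score, post_score, admin_time, post_time); this layout is assumed
# for illustration, not the study's actual data schema.
MAX_LAG = timedelta(hours=3)  # post scores collected later are excluded

def mean_delta(doses):
    """Average pre-minus-post pain score over a 24-hour period."""
    deltas = [pre - post
              for pre, post, admin_time, post_time in doses
              if post_time - admin_time <= MAX_LAG]
    return sum(deltas) / len(deltas) if deltas else None

t0 = datetime(2016, 5, 1, 8, 0)
doses = [
    (10, 2, t0, t0 + timedelta(hours=1)),                       # Δ = 8
    (10, 6, t0 + timedelta(hours=6), t0 + timedelta(hours=7)),  # Δ = 4
]
mean_delta(doses)  # reproduces the worked example: (8 + 4) / 2 = 6.0
```

A dose whose post score arrives more than 3 hours after administration simply drops out of the average rather than contributing a misleading ∆.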
Demographic and clinical data were obtained from electronic health record review using a standardized assessment form. These data included information about medical and psychiatric comorbidities, specialist consultations, and CLC-PAC unit admission indications and diagnoses. Medications of interest were categorized as antidepressants, antipsychotics, benzodiazepines, muscle relaxants, hypnotics, stimulants, antiepileptic drugs/mood stabilizers (including gabapentin and pregabalin), and all adjuvant analgesics. Adjuvant analgesics were defined as medications administered for pain as documented by chart notes or those ordered as needed for pain, and analyzed as a composite variable. Antidepressants with analgesic properties (serotonin-norepinephrine reuptake inhibitors and tricyclic antidepressants) were considered adjuvant analgesics. Psychiatric information collected included presence of mood, anxiety, and psychotic disorders, and PTSD. SUD information was collected separately from other psychiatric disorders.
Analyses
The study population was described using tabulations for categorical data and means and standard deviations for continuous data. Responsiveness to opioids was analyzed as a continuous variable. Those with higher mean ∆ scores were considered to have pain relatively more responsive to opioids, while lower mean ∆ scores indicated pain less responsive to opioids. We constructed linear regression models controlling for average pre-opioid pain rating scores to explore associations between opioid responsiveness and variables of interest. All analyses were completed using Stata version 15. This study was not adequately powered to detect differences across the spectrum of opioid responsiveness; the differences reported in this article should therefore be interpreted as exploratory.
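The covariate-adjusted models in this section (fit in Stata in the study itself) can be sketched without a statistics package by using the Frisch-Waugh-Lovell decomposition: residualize both the outcome (mean ∆ score) and the predictor on the covariate (average pre-opioid pain), then take the simple OLS slope between the residuals. The data below are simulated purely for illustration and are not study data.

```python
import random
import statistics

def slope(x, y):
    """Ordinary least-squares slope of y on x (with intercept)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def residuals(x, y):
    """Residuals of y after a simple regression on x."""
    b = slope(x, y)
    a = statistics.fmean(y) - b * statistics.fmean(x)
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Simulated data: 41 residents, a binary psychiatric-diagnosis flag,
# and a true adjusted effect of -1.0 on the mean Δ score.
random.seed(0)
n = 41
psych = [random.randint(0, 1) for _ in range(n)]
pre_pain = [random.uniform(4, 10) for _ in range(n)]
delta = [4.0 - 1.0 * p + 0.2 * q + random.gauss(0, 0.5)
         for p, q in zip(psych, pre_pain)]

# Coefficient of psych, adjusted for average pre-opioid pain
# (Frisch-Waugh-Lovell: slope between the two residualized series).
beta_psych = slope(residuals(pre_pain, psych), residuals(pre_pain, delta))
```

The resulting `beta_psych` is numerically identical to the psych coefficient from a two-predictor multiple regression, which is why the decomposition is a convenient way to reason about "controlling for" a covariate.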
Results
Over the 4-day observational period, there were 146 resident-days. Of these, 88 (60.3%) included at least 1 reported pain score ≥ 4, and 61 (41.8%) included receipt of ≥ 1 as-needed opioid for pain. We identified 46 resident-days meeting study criteria of ≥ 2 pre- and postanalgesic scores, representing 41 unique individuals (Figure 1). Two individuals were admitted to the CLC-PAC unit on 2 of the 4 observation days, and 1 individual was admitted on 3 of the 4 observation days. For individuals admitted on multiple observation days, we included data only from the initial observation day.
Response to opioids varied greatly in this sample. The mean (SD) ∆ pain score was 3.4 (1.6) and ranged from 0.5 to 6.3. Using linear regression, we found no relationship between admission indication, medical comorbidities (including active cancer), and opioid responsiveness (Table).
Psychiatric disorders were highly prevalent, with 25 individuals (61.0%) having ≥ 1 psychiatric diagnosis identified on chart review. The presence of any psychiatric diagnosis was significantly associated with reduced responsiveness to opioids (β = −1.08; 95% CI, −2.04 to −0.13; P = .03). SUDs also were common, with 17 individuals (41.5%) having an active SUD; most were tobacco/nicotine. Twenty-six veterans (63.4%) had documentation of SUD in remission, with 19 (46.3%) for substances other than tobacco/nicotine. There was no indication that any veteran in the sample was prescribed medication for opioid use disorder (OUD) at the time of observation. There was no relationship between opioid responsiveness and SUDs, either active or in remission. Consults to other services that suggested distress or difficult-to-control symptoms also were frequent. Consults to the pain service were significantly associated with reduced responsiveness to opioids (β = −1.75; 95% CI, −3.33 to −0.17; P = .03). The association between psychiatry consultation and reduced opioid responsiveness trended toward significance (β = −0.95; 95% CI, −2.06 to 0.17; P = .09) (Figures 2 and 3). There was no significant association between palliative medicine consultation and opioid responsiveness.
A poorer response to opioids was associated with a significantly higher as-needed opioid dosage (β = −0.02; 95% CI, −0.04 to −0.01; P = .002) as well as a trend toward higher total opioid dosage (β = −0.005; 95% CI, −0.01 to 0.0003; P = .06) (Figure 4). Thirty-eight (92.7%) participants received nonopioid adjuvant analgesics for pain. More than half received antidepressants (56.1%) or gabapentinoids (51.2%), although we did not assess whether they were prescribed for pain or another indication. We did not identify a relationship between any specific psychoactive drug class and opioid responsiveness in this sample.
Discussion
This exploratory study used readily available administrative data in a CLC-PAC unit to assess responsiveness to opioids via a numeric mean ∆ score, with higher values indicating more pain relief in response to opioids. We then constructed linear regression models to characterize the relationship between the mean ∆ score and factors known to be associated with difficult-to-control pain and psychosocial distress. As expected, opioid responsiveness was highly variable among residents; some residents experienced essentially no reduction in pain, on average, despite receiving opioids. Psychiatric comorbidity, higher dosage in OMEs, and the presence of a pain service consult significantly correlated with poorer response to opioids. To our knowledge, this is the first study to quantify opioid responsiveness and describe the relationship with clinical correlates in the understudied PAC population.
Earlier research has demonstrated a relationship between the presence of psychiatric disorders and increased likelihood of receiving any analgesics among veterans residing in PAC.9 Our study adds to the literature by quantifying opioid response using readily available administrative data and examining associations with psychiatric diagnoses. These findings suggest that the practice of escalating the opioid dosage to treat high levels of pain in patients with a comorbid psychiatric diagnosis should be reexamined, particularly if there is no meaningful pain reduction at lower opioid dosages. Our sample had a variety of admission diagnoses and medical comorbidities; however, we did not identify a relationship between any of these, including an active cancer diagnosis, and opioid responsiveness. Although SUDs were highly prevalent in our sample, there was no relationship with opioid responsiveness. This suggests that lack of response to opioids is not merely a matter of drug tolerance or an indication of drug-seeking behavior.
Factors Impacting Response
Many factors could affect whether an individual obtains an adequate analgesic response to opioids or other pain medications, including variations in genes encoding opioid receptors and hepatic enzymes involved in drug metabolism and an individual’s opioid exposure history.13 The phenomenon of requiring more drug to produce the same relief after repeated exposures (ie, tolerance) is well known.14 Opioid-induced hyperalgesia is a phenomenon whereby a patient’s overall pain increases while receiving opioids, but each opioid dose might be perceived as beneficial.15 Psychosocial distress is increasingly recognized as an important factor in opioid response. Adverse selection is the process culminating in those with psychosocial distress and/or SUDs being prescribed more opioids for longer durations.16 Our data suggest that this process could play a role in PAC settings. In addition, exaggerating pain to obtain additional opioids for nonmedical purposes, such as euphoria or relaxation, also is possible.17
When clinically assessing an individual whose pain is not well controlled despite escalating opioid dosages, prescribers must consider which of these factors likely is predominant. However, the first step of determining who has a poor opioid response is not straightforward. Directly asking patients is challenging; many individuals perceive opioids to be helpful while simultaneously reporting inadequately controlled pain.7,8 The primary value of this study is the possibility of providing prescribers a quick, simple method of assessing a patient’s response to opioids. Using this method, individuals who are responding poorly to opioids, including those who might exaggerate pain for secondary gain, could be identified. Health care professionals could consider revisiting pain management strategies, assessing for the presence of OUD, or evaluating other contributors to inadequately controlled pain. Although we only collected data regarding response to opioids in this study, any pain medication administered as needed (eg, nonsteroidal anti-inflammatory drugs, acetaminophen) could be analyzed using this methodology, allowing identification of other helpful pain management strategies. We began the validation process with extensive chart review, but further validation is required before this method can be applied to routine clinical practice.
Patients who report uncontrolled pain despite receiving opioids are a clinically challenging population. The traditional strategy has been to escalate opioids, which is recommended by the World Health Organization stepladder approach for patients with cancer pain and limited life expectancy.18 Applying this approach to a general population of patients with chronic pain is ineffective and dangerous.19 The CDC and the VA/US Department of Defense (VA/DoD) guidelines both recommend carefully reassessing risks and benefits at total daily dosages > 50 OME and avoiding increases to > 90 OME daily in most circumstances.5,20 Our finding that participants taking higher dosages of opioids were not more likely to have better control over their pain supports this recommendation.
Limitations
This study has several limitations, the most significant of which is its small sample size because of the exploratory nature of the project. Results are based on a small pilot sample enriched to include individuals with at least moderate pain who received opioids frequently at 1 VA CLC-PAC unit; therefore, the results might not be representative of all veterans or a more general population. Our small sample size limits power to detect small differences. Data collected should be used to inform formal power calculations before subsequent larger studies to select adequate sample size. Validation studies that reproduce these findings, including in samples drawn from the same population on different dates, are an important next step. Moreover, we only had data on a single dimension of pain (intensity/severity), as measured by the pain scale, which nursing staff used to make a real-time clinical decision of whether to administer an as-needed opioid. Future studies should consider using pain measures that provide multidimensional assessment (eg, severity, functional interference) and/or were developed specifically for veterans, such as the Defense and Veterans Pain Rating Scale.21
Our study was cross-sectional in nature and addressed a single 24-hour period of data per participant. The years of data collection (2016 and 2017) followed a decline in overall opioid prescribing that has continued, likely influenced by CDC and VA/DoD guidelines.22 It is unclear whether our observations are an accurate reflection of individuals’ response over time or whether prescribing practices in PAC have shifted.
We did not consider the type of pain being treated or explore clinicians’ reasons for prescribing opioids, which limits our ability to know whether opioids were indicated. Information regarding OUD and other SUDs was limited to what was documented in the chart during the CLC-PAC unit admission. We did not have information on length of exposure to opioids. It is possible that opioid tolerance could play a role in reducing opioid responsiveness. However, simple tolerance would not be expected to explain robust correlations with psychiatric comorbidities. Also, simple tolerance would be expected to be overcome with higher opioid dosages, whereas our study found less responsiveness at higher dosages. These data suggest that some individuals’ pain might be poorly opioid responsive, and psychiatric factors could increase this risk. We used a novel data source in combination with chart review; to our knowledge, barcode medication administration data have not been used in this manner previously. Future work needs to validate this method using larger sample sizes and multiple clinical sites. Finally, we used regression models that controlled for average pre-opioid pain rating scores, which is only 1 of the covariates important for examining these effects. Larger studies with adequate power should control for multiple covariates known to be associated with pain and opioid response.
Conclusions
Opioid responsiveness is important clinically yet challenging to assess. This pilot study identifies a way of classifying pain as relatively opioid nonresponsive using administrative data but requires further validation before considering scaling for more general use. The possibility that a substantial percentage of residents in a CLC-PAC unit could be receiving increasing dosages of opioids without adequate benefit justifies the need for more research and underscores the need for prescribers to assess individuals frequently for ongoing benefit of opioids regardless of diagnosis or mechanism of pain.
Acknowledgments
The authors thank Andrzej Galecki, Corey Powell, and the University of Michigan Consulting for Statistics, Computing and Analytics Research Center for assistance with statistical analysis.
1. Marshall TL, Reinhardt JP. Pain management in the last 6 months of life: predictors of opioid and non-opioid use. J Am Med Dir Assoc. 2019;20(6):789-790. doi:10.1016/j.jamda.2019.02.026
2. Tait RC, Chibnall JT. Pain in older subacute care patients: associations with clinical status and treatment. Pain Med. 2002;3(3):231-239. doi:10.1046/j.1526-4637.2002.02031.x
3. Pimentel CB, Briesacher BA, Gurwitz JH, Rosen AB, Pimentel MT, Lapane KL. Pain management in nursing home residents with cancer. J Am Geriatr Soc. 2015;63(4):633-641. doi:10.1111/jgs.13345
4. Hunnicutt JN, Tjia J, Lapane KL. Hospice use and pain management in elderly nursing home residents with cancer. J Pain Symptom Manage. 2017;53(3):561-570. doi:10.1016/j.jpainsymman.2016.10.369
5. Dowell D, Haegerich TM, Chou R. CDC guideline for prescribing opioids for chronic pain — United States, 2016. MMWR Recomm Rep. 2016;65(No. RR-1):1-49. doi:10.15585/mmwr.rr6501e1
6. Oliva EM, Bowe T, Tavakoli S, et al. Development and applications of the Veterans Health Administration’s Stratification Tool for Opioid Risk Mitigation (STORM) to improve opioid safety and prevent overdose and suicide. Psychol Serv. 2017;14(1):34-49. doi:10.1037/ser0000099
7. Goesling J, Moser SE, Lin LA, Hassett AL, Wasserman RA, Brummett CM. Discrepancies between perceived benefit of opioids and self-reported patient outcomes. Pain Med. 2018;19(2):297-306. doi:10.1093/pm/pnw263
8. Sullivan M, Von Korff M, Banta-Green C. Problems and concerns of patients receiving chronic opioid therapy for chronic non-cancer pain. Pain. 2010;149(2):345-353. doi:10.1016/j.pain.2010.02.037
9. Brennan PL, Greenbaum MA, Lemke S, Schutte KK. Mental health disorder, pain, and pain treatment among long-term care residents: evidence from the Minimum Data Set 3.0. Aging Ment Health. 2019;23(9):1146-1155. doi:10.1080/13607863.2018.1481922
10. Woo A, Lechner B, Fu T, et al. Cut points for mild, moderate, and severe pain among cancer and non-cancer patients: a literature review. Ann Palliat Med. 2015;4(4):176-183. doi:10.3978/j.issn.2224-5820.2015.09.04
11. Centers for Disease Control and Prevention. Calculating total daily dose of opioids for safer dosage. 2017. Accessed December 15, 2021. https://www.cdc.gov/drugoverdose/pdf/calculating_total_daily_dose-a.pdf
12. Nielsen S, Degenhardt L, Hoban B, Gisev N. Comparing opioids: a guide to estimating oral morphine equivalents (OME) in research. NDARC Technical Report No. 329. National Drug and Alcohol Research Centre; 2014. Accessed December 15, 2021. http://www.drugsandalcohol.ie/22703/1/NDARC Comparing opioids.pdf
13. Smith HS. Variations in opioid responsiveness. Pain Physician. 2008;11(2):237-248.
14. Collin E, Cesselin F. Neurobiological mechanisms of opioid tolerance and dependence. Clin Neuropharmacol. 1991;14(6):465-488. doi:10.1097/00002826-199112000-00001
15. Higgins C, Smith BH, Matthews K. Evidence of opioid-induced hyperalgesia in clinical populations after chronic opioid exposure: a systematic review and meta-analysis. Br J Anaesth. 2019;122(6):e114-e126. doi:10.1016/j.bja.2018.09.019
16. Howe CQ, Sullivan MD. The missing ‘P’ in pain management: how the current opioid epidemic highlights the need for psychiatric services in chronic pain care. Gen Hosp Psychiatry. 2014;36(1):99-104. doi:10.1016/j.genhosppsych.2013.10.003
17. Substance Abuse and Mental Health Services Administration. Key substance use and mental health indicators in the United States: results from the 2018 National Survey on Drug Use and Health. HHS Publ No PEP19-5068, NSDUH Ser H-54. 2019;170:51-58. Accessed December 15, 2021. https://www.samhsa.gov/data/sites/default/files/cbhsq-reports/NSDUHNationalFindingsReport2018/NSDUHNationalFindingsReport2018.pdf
18. World Health Organization. WHO’s cancer pain ladder for adults. Accessed September 21, 2018. www.who.int/ncds/management/palliative-care/Infographic-cancer-pain-lowres.pdf
19. Ballantyne JC, Kalso E, Stannard C. WHO analgesic ladder: a good concept gone astray. BMJ. 2016;352:i20. doi:10.1136/bmj.i20
20. The Opioid Therapy for Chronic Pain Work Group. VA/DoD clinical practice guideline for opioid therapy for chronic pain. US Dept of Veterans Affairs and Dept of Defense; 2017. Accessed December 15, 2021. https://www.healthquality.va.gov/guidelines/Pain/cot/VADoDOTCPG022717.pdf
21. Defense & Veterans Pain Rating Scale (DVPRS). Defense & Veterans Center for Integrative Pain Management. Accessed July 21, 2021. https://www.dvcipm.org/clinical-resources/defense-veterans-pain-rating-scale-dvprs/
22. Guy GP Jr, Zhang K, Bohm MK, et al. Vital signs: changes in opioid prescribing in the United States, 2006–2015. MMWR Morb Mortal Wkly Rep. 2017;66(26):697-704. doi:10.15585/mmwr.mm6626a4
Older adults admitted to post-acute settings frequently have complex rehabilitation needs and multimorbidity, which predispose them to pain management challenges.1,2 The prevalence of pain in post-acute and long-term care is as high as 65%, and opioid use is common in this population, with 1 in 7 residents receiving long-term opioids.3,4
Opioids that do not adequately control pain represent a missed opportunity for deprescribing. There is limited evidence regarding efficacy of long-term opioid use (> 90 days) for improving pain and physical functioning.5 In addition, long-term opioid use carries significant risks, including overdose-related death, dependence, and increased emergency department visits.5 These risks are likely to be pronounced among veterans receiving post-acute care (PAC) who are older, have comorbid psychiatric disorders, are prescribed several centrally acting medications, and experience substance use disorder (SUD).6
Older adults are at increased risk for opioid toxicity because of reduced drug clearance and a smaller therapeutic window.5 Centers for Disease Control and Prevention (CDC) guidelines recommend frequently assessing patients for benefit in terms of sustained improvement in pain as well as physical function.5 If improvements in pain and function are minimal, reducing opioid use and emphasizing nonopioid pain management strategies should be considered. Some patients will struggle with this approach. Directly asking patients about the effectiveness of opioids is challenging: opioid users with chronic pain frequently report problems with opioids even as they describe them as indispensable for pain management.7,8
Earlier studies have assessed patient perspectives regarding opioid difficulties as well as their helpfulness, which could introduce recall bias. Patient-level factors that contribute to a global sense of distress, in addition to the presence of painful physical conditions, also could contribute to patients requesting opioids without experiencing adequate pain relief. One study in veterans residing in PAC facilities found that individuals with depression, posttraumatic stress disorder (PTSD), and SUD were more likely to report pain and receive scheduled analgesics; this effect persisted in individuals with PTSD even after adjusting for demographic and functional status variables.9 The study looked only at analgesics as a class and did not examine opioids specifically. It is possible that distressed individuals, such as those with uncontrolled depression, PTSD, and SUD, might be more likely to report high pain levels and receive opioids with inadequate benefit and increased risk. Identifying the primary condition causing distress and targeting treatment to that condition (ie, depression) is preferable to escalating opioids in an attempt to treat pain in the context of nonresponse. Assessing an individual’s aggregate response to opioids rather than relying on a single self-report is a useful addition to current pain management strategies.
The goal of this study was to pilot a method of identifying opioid-nonresponsive pain using administrative data, measure its prevalence in a PAC population of veterans, and explore clinical and demographic correlates, with particular attention to variables that could indicate high levels of psychological and physical distress. Identifying pain that is poorly responsive to opioids would give clinicians the opportunity to avoid or minimize opioid use and prioritize treatments that are likely to improve the resident’s pain, quality of life, and physical function while minimizing recall bias. We hypothesized that pain that responds poorly to opioids would be prevalent among veterans residing in a PAC unit. We also anticipated that veterans with pain poorly responsive to opioids would be more likely to have factors placing them at increased risk of adverse effects, such as comorbid psychiatric conditions, history of SUD, and multimorbidity, providing further rationale for clinical equipoise in this population.6
Methods
This was a small, retrospective cross-sectional study using administrative data and chart review. The study included veterans who were administered opioids while residing in a single US Department of Veterans Affairs (VA) community living center PAC (CLC-PAC) unit during at least 1 of 4 nonconsecutive, random days in 2016 and 2017. The study was approved by the institutional review board of the Ann Arbor VA Health System (#2017-1034) as part of a larger project involving models of care in vulnerable older veterans.
Inclusion criteria were the presence of at least moderate pain (≥ 4 on a 0 to 10 scale); receiving ≥ 2 opioids ordered as needed over the prespecified 24-hour observation period; and having ≥ 2 pre- and postopioid administration pain scores during the observation period. Veterans who did not meet these criteria were excluded. At the time of initial sample selection, we did not capture information related to coprescribed analgesics, including a standing order of opioids. To obtain the sample, we initially characterized all veterans residing in the CLC-PAC unit on the 4 days as those reporting at least moderate pain (≥ 4) and those reporting no or mild pain (< 4). The cut point of 4 of 10 is consistent with moderate pain based on earlier work showing a higher likelihood of pain that interferes with physical function.10 We then restricted the sample to veterans who received ≥ 2 opioids ordered as needed for pain and had ≥ 2 pre- and postopioid administration numeric pain rating scores during the 24-hour observation period. This methodology was chosen to enrich our sample for those who received opioids regularly for ongoing pain. Opioids were defined as full µ-opioid receptor agonists and included hydrocodone, oxycodone, morphine, hydromorphone, fentanyl, tramadol, and methadone.
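The selection steps described above amount to a simple filter over resident-day records. The sketch below is illustrative only (the field names and data layout are assumptions, not the study's actual data structures):

```python
# Hypothetical sketch of the cohort-selection filter described in the Methods.
# Field names are assumed for illustration; they are not the study's schema.
MODERATE_PAIN = 4  # cut point on the 0-10 numeric rating scale

def meets_inclusion_criteria(day):
    """Return True if a resident-day qualifies for the analytic sample:
    at least moderate pain, >= 2 as-needed opioid doses, and
    >= 2 paired pre/post pain scores in the 24-hour window."""
    has_moderate_pain = any(score >= MODERATE_PAIN for score in day["pain_scores"])
    return (has_moderate_pain
            and len(day["prn_opioid_doses"]) >= 2
            and len(day["paired_pre_post"]) >= 2)

resident_days = [
    {"pain_scores": [6, 8, 5],
     "prn_opioid_doses": ["oxycodone", "oxycodone"],
     "paired_pre_post": [(8, 4), (6, 3)]},   # qualifies
    {"pain_scores": [2, 3],
     "prn_opioid_doses": ["tramadol"],
     "paired_pre_post": [(3, 2)]},           # mild pain, too few doses
]
sample = [d for d in resident_days if meets_inclusion_criteria(d)]
print(len(sample))  # 1
```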
Medication administration data were obtained from the VA corporate data warehouse, which houses all barcode medication administration data collected at the point of care. The dataset includes pain scores gathered by nursing staff before and after administering an as-needed analgesic. The corporate data warehouse records the date/time of pain scores and the analgesic name, dosage, formulation, and date/time of administration. Using a standardized assessment form developed iteratively, we calculated opioid dosage in oral morphine equivalents (OME) for comparison.11,12 All abstracted data were reexamined for accuracy. Data initially were collected in an anonymized, blinded fashion. Participants were then unblinded for chart review. Initial data were captured in resident-days instead of unique residents because an individual resident might have been admitted on several observation days. We were primarily interested in how pain responded to opioids administered in response to resident request; therefore, we did not examine response to opioids that were continuously ordered (ie, scheduled). We did consider scheduled opioids when calculating total daily opioid dosage during the chart review.
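OME standardization of the kind described above is typically done with per-drug conversion factors. The sketch below uses commonly published factors (eg, from CDC conversion tables) purely for illustration; the study's exact factors come from its cited references (11 and 12), and dose-dependent or non-oral agents such as methadone and transdermal fentanyl require special handling that is omitted here:

```python
# Illustrative oral morphine equivalent (OME) conversion.
# Factors are approximate published values, not necessarily those the
# study applied; methadone and fentanyl patches are deliberately omitted
# because their conversions are dose- or route-dependent.
OME_FACTORS = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "hydromorphone": 4.0,
    "tramadol": 0.1,
}

def total_daily_ome(administrations):
    """Sum a day's oral opioid doses (drug, mg) in oral morphine equivalents."""
    return sum(mg * OME_FACTORS[drug] for drug, mg in administrations)

doses = [("oxycodone", 10), ("oxycodone", 10), ("tramadol", 50)]
print(total_daily_ome(doses))  # 10*1.5 + 10*1.5 + 50*0.1 = 35.0
```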
Outcome of Interest
The primary outcome of interest was an individual’s response to as-needed opioids, which we defined as the change in pain score after opioid administration. The pre-opioid pain score was the score that immediately preceded administration of an as-needed opioid. The postopioid pain score was the first score after opioid administration if obtained within 3 hours of administration. Scores collected > 3 hours after opioid administration were excluded because, given the short half-lives of the opioids administered, they no longer accurately reflected the impact of the opioid. Observations were excluded if an opioid was administered without a recorded pain score; this occurred once for 6 individuals. Observations also were excluded if an opioid was administered but the data were captured on the following day (outside of the 24-hour window); this occurred once for 3 individuals.
We calculated a ∆ score by subtracting the postopioid pain rating score from the pre-opioid score. Individual ∆ scores were then averaged over the 24-hour period (range, 2-5 opioid doses). For example, if an individual reported a pre-opioid pain score of 10 and a postopioid pain score of 2, the ∆ was recorded as 8. If the individual’s next pre-opioid score was 10 and postopioid score was 6, the ∆ was recorded as 4. ∆ scores over the 24-hour period were averaged together to determine that individual’s response to as-needed opioids. In the previous example, the mean ∆ score is 6. Lower mean ∆ scores reflect decreased responsiveness to opioids’ analgesic effect.
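As a worked illustration, the ∆-score calculation above, including the 3-hour exclusion window from the preceding paragraph, can be expressed in a few lines. This is a sketch under assumed data shapes, not the authors' actual code:

```python
from datetime import timedelta

# Post scores recorded more than 3 hours after administration are excluded,
# per the outcome definition described in the text.
POST_SCORE_WINDOW = timedelta(hours=3)

def mean_delta(dose_events):
    """Average pre-minus-post pain change over a 24-hour period.

    Each event is (pre_score, post_score, time_from_dose_to_post_score).
    """
    deltas = [pre - post
              for pre, post, elapsed in dose_events
              if elapsed <= POST_SCORE_WINDOW]
    return sum(deltas) / len(deltas)

# The example from the text: pre 10 -> post 2 (delta 8), then pre 10 -> post 6 (delta 4).
events = [(10, 2, timedelta(hours=1)), (10, 6, timedelta(hours=2))]
print(mean_delta(events))  # 6.0
```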
Demographic and clinical data were obtained from electronic health record review using a standardized assessment form. These data included information about medical and psychiatric comorbidities, specialist consultations, and CLC-PAC unit admission indications and diagnoses. Medications of interest were categorized as antidepressants, antipsychotics, benzodiazepines, muscle relaxants, hypnotics, stimulants, antiepileptic drugs/mood stabilizers (including gabapentin and pregabalin), and all adjuvant analgesics. Adjuvant analgesics were defined as medications administered for pain as documented by chart notes or those ordered as needed for pain, and analyzed as a composite variable. Antidepressants with analgesic properties (serotonin-norepinephrine reuptake inhibitors and tricyclic antidepressants) were considered adjuvant analgesics. Psychiatric information collected included presence of mood, anxiety, and psychotic disorders, and PTSD. SUD information was collected separately from other psychiatric disorders.
Analyses
The study population was described using tabulations for categorical data and means and standard deviations for continuous data. Responsiveness to opioids was analyzed as a continuous variable: higher mean ∆ scores were considered to reflect pain relatively more responsive to opioids, while lower mean ∆ scores indicated pain less responsive to opioids. We constructed linear regression models controlling for average pre-opioid pain rating scores to explore associations between opioid responsiveness and variables of interest. All analyses were completed using Stata version 15. This study was not adequately powered to detect differences across the spectrum of opioid responsiveness; the differences reported in this article should therefore be interpreted as exploratory.
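The modeling step can be sketched as an ordinary least squares fit of the mean ∆ score on a predictor of interest while controlling for pre-opioid pain. The study itself used Stata 15; the snippet below, with made-up data and a pure-Python normal-equations solver, only illustrates the model structure:

```python
# Illustrative-only regression sketch: outcome = mean delta score,
# controlling for pre-opioid pain, with an any-psychiatric-diagnosis
# indicator as the predictor of interest. Data are invented.
mean_delta = [5.8, 4.9, 2.1, 1.5, 3.0, 6.1]   # opioid responsiveness
pre_pain   = [8.0, 7.0, 9.0, 8.0, 7.5, 8.5]   # avg pre-opioid pain rating
psych_dx   = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0]   # any psychiatric diagnosis

X = [[1.0, p, d] for p, d in zip(pre_pain, psych_dx)]  # intercept, covariate, predictor

def ols(X, y):
    """Solve the normal equations X'X b = X'y by Gaussian elimination."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    M = [XtX[i] + [Xty[i]] for i in range(k)]
    for c in range(k):                      # forward elimination with pivoting
        p = max(range(c, k), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, k):
            f = M[r][c] / M[c][c]
            for j in range(c, k + 1):
                M[r][j] -= f * M[c][j]
    b = [0.0] * k                           # back substitution
    for r in reversed(range(k)):
        b[r] = (M[r][k] - sum(M[r][j] * b[j] for j in range(r + 1, k))) / M[r][r]
    return b

beta = ols(X, mean_delta)
# beta[2] is the psychiatric-diagnosis coefficient; with these toy data it is
# negative, mirroring the direction of the association in the Results.
```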
Results
Over the 4-day observational period there were 146 resident-days. Of these, 88 (60.3%) reported at least 1 pain score of ≥ 4. Of those, 61 (41.8%) received ≥ 1 as-needed opioid for pain. We identified 46 resident-days meeting study criteria of ≥ 2 pre- and postanalgesic scores. We identified 41 unique individuals (Figure 1). Two individuals were admitted to the CLC-PAC unit on 2 of the 4 observation days, and 1 individual was admitted to the CLC-PAC unit on 3 of the 4 observation days. For individuals admitted several days, we included data only from the initial observation day.
Response to opioids varied greatly in this sample. The mean (SD) ∆ pain score was 3.4 (1.6) and ranged from 0.5 to 6.3. Using linear regression, we found no relationship between admission indication, medical comorbidities (including active cancer), and opioid responsiveness (Table).
Psychiatric disorders were highly prevalent, with 25 individuals (61.0%) having ≥ 1 psychiatric diagnosis identified on chart review. The presence of any psychiatric diagnosis was significantly associated with reduced responsiveness to opioids (β = −1.08; 95% CI, −2.04 to −0.13; P = .03). SUDs also were common, with 17 individuals (41.5%) having an active SUD; most were tobacco/nicotine. Twenty-six veterans (63.4%) had documentation of SUD in remission, with 19 (46.3%) for substances other than tobacco/nicotine. There was no indication that any veteran in the sample was prescribed medication for opioid use disorder (OUD) at the time of observation. There was no relationship between opioid responsiveness and SUDs, either active or in remission. Consults to other services that suggested distress or difficult-to-control symptoms also were frequent. Consults to the pain service were significantly associated with reduced responsiveness to opioids (β = −1.75; 95% CI, −3.33 to −0.17; P = .03). The association between psychiatry consultation and reduced opioid responsiveness trended toward significance (β = −0.95; 95% CI, −2.06 to 0.17; P = .09) (Figures 2 and 3). There was no significant association between palliative medicine consultation and opioid responsiveness.
A poorer response to opioids was associated with a significantly higher as-needed opioid dosage (β = −0.02; 95% CI, −0.04 to −0.01; P = .002) as well as a trend toward higher total opioid dosage (β = −0.005; 95% CI, −0.01 to 0.0003; P = .06) (Figure 4). Thirty-eight participants (92.7%) received nonopioid adjuvant analgesics for pain. More than half received antidepressants (56.1%) or gabapentinoids (51.2%), although we did not assess whether these were prescribed for pain or another indication. We did not identify a relationship between any specific psychoactive drug class and opioid responsiveness in this sample.
Discussion
This exploratory study used readily available administrative data in a CLC-PAC unit to assess responsiveness to opioids via a numeric mean ∆ score, with higher values indicating more pain relief in response to opioids. We then constructed linear regression models to characterize the relationship between the mean ∆ score and factors known to be associated with difficult-to-control pain and psychosocial distress. As expected, opioid responsiveness was highly variable among residents; some residents experienced essentially no reduction in pain, on average, despite receiving opioids. Psychiatric comorbidity, higher dosage in OMEs, and the presence of a pain service consult significantly correlated with poorer response to opioids. To our knowledge, this is the first study to quantify opioid responsiveness and describe the relationship with clinical correlates in the understudied PAC population.
Earlier research has demonstrated a relationship between the presence of psychiatric disorders and increased likelihood of receiving any analgesics among veterans residing in PAC.9 Our study adds to the literature by quantifying opioid response using readily available administrative data and examining associations with psychiatric diagnoses. These findings suggest that escalating the opioid dosage to treat high levels of pain in patients with a comorbid psychiatric diagnosis should be reconsidered, particularly if there is no meaningful pain reduction at lower opioid dosages. Our sample had a variety of admission diagnoses and medical comorbidities; however, we did not identify a relationship between any of these, including an active cancer diagnosis, and opioid responsiveness. Although SUDs were highly prevalent in our sample, there was no relationship with opioid responsiveness. This suggests that lack of response to opioids is not merely a matter of drug tolerance or an indication of drug-seeking behavior.
Factors Impacting Response
Many factors could affect whether an individual obtains an adequate analgesic response to opioids or other pain medications, including variations in genes encoding opioid receptors and hepatic enzymes involved in drug metabolism and an individual’s opioid exposure history.13 The phenomenon of requiring more drug to produce the same relief after repeated exposures (ie, tolerance) is well known.14 Opioid-induced hyperalgesia is a phenomenon whereby a patient’s overall pain increases while receiving opioids, even though each opioid dose might be perceived as beneficial.15 Psychosocial distress is increasingly recognized as an important factor in opioid response. Adverse selection is the process culminating in those with psychosocial distress and/or SUDs being prescribed more opioids for longer durations.16 Our data suggest that this process could play a role in PAC settings. In addition, exaggerating pain to obtain additional opioids for nonmedical purposes, such as euphoria or relaxation, also is possible.17
When clinically assessing an individual whose pain is not well controlled despite escalating opioid dosages, prescribers must consider which of these factors likely is predominant. However, the first step of determining who has a poor opioid response is not straightforward. Directly asking patients is challenging; many individuals perceive opioids to be helpful while simultaneously reporting inadequately controlled pain.7,8 The primary value of this study is the possibility of providing prescribers a quick, simple method of assessing a patient’s response to opioids. Using this method, individuals who are responding poorly to opioids, including those who might exaggerate pain for secondary gain, could be identified. Health care professionals could consider revisiting pain management strategies, assess for the presence of OUD, or evaluate other contributors to inadequately controlled pain. Although we only collected data regarding response to opioids in this study, any pain medication administered as needed (ie, nonsteroidal anti-inflammatory drugs, acetaminophen) could be analyzed using this methodology, allowing identification of other helpful pain management strategies. We began the validation process with extensive chart review, but further validation is required before this method can be applied to routine clinical practice.
Patients who report uncontrolled pain despite receiving opioids are a clinically challenging population. The traditional strategy has been to escalate opioids, which is recommended by the World Health Organization stepladder approach for patients with cancer pain and limited life expectancy.18 Applying this approach to a general population of patients with chronic pain is ineffective and dangerous.19 The CDC and the VA/US Department of Defense (VA/DoD) guidelines both recommend carefully reassessing risks and benefits at total daily dosages > 50 OME and avoiding increases to > 90 OME daily in most circumstances.5,20 Our finding that participants taking higher dosages of opioids were not more likely to have better control over their pain supports this recommendation.
Limitations
This study has several limitations, the most significant being its small sample size, a consequence of the exploratory nature of the project. Results are based on a small pilot sample enriched to include individuals with at least moderate pain who received opioids frequently at 1 VA CLC-PAC unit; therefore, the results might not be representative of all veterans or a more general population. Our small sample size limits power to detect small differences. The data collected should be used to inform formal power calculations so that subsequent larger studies select adequate sample sizes. Validation studies that reproduce these findings, including samples drawn from the same population on different dates, are an important next step. Moreover, we only had data on a single dimension of pain (intensity/severity), as measured by the pain scale nursing staff used to make the real-time clinical decision of whether to administer an as-needed opioid. Future studies should consider pain measures that provide multidimensional assessment (ie, severity, functional interference) and/or were developed specifically for veterans, such as the Defense and Veterans Pain Rating Scale.21
Our study was cross-sectional in nature and addressed a single 24-hour period of data per participant. The years of data collection (2016 and 2017) followed a decline in overall opioid prescribing that has continued, likely influenced by CDC and VA/DoD guidelines.22 It is unclear whether our observations are an accurate reflection of individuals’ response over time or whether prescribing practices in PAC have shifted.
We did not consider the type of pain being treated or explore clinicians’ reasons for prescribing opioids, thereby limiting our ability to know whether opioids were indicated. Information regarding OUD and other SUDs was limited to what was documented in the chart during the CLC-PAC unit admission. We did not have information on length of exposure to opioids. It is possible that opioid tolerance could play a role in reducing opioid responsiveness. However, simple tolerance would not be expected to explain the robust correlations with psychiatric comorbidities. Also, simple tolerance would be expected to be overcome with higher opioid dosages, whereas our study demonstrates less responsiveness at higher dosages. These data suggest that some individuals’ pain might be poorly opioid responsive and that psychiatric factors could increase this risk. We used a novel data source in combination with chart review; to our knowledge, barcode medication administration data have not been used in this manner previously. Future work needs to validate this method using larger samples and several clinical sites. Finally, our regression models controlled only for average pre-opioid pain rating scores, 1 of many covariates important for examining effects. Larger studies with adequate power should control for multiple covariates known to be associated with pain and opioid response.
Conclusions
Opioid responsiveness is important clinically yet challenging to assess. This pilot study identifies a way of classifying pain as relatively opioid nonresponsive using administrative data but requires further validation before considering scaling for more general use. The possibility that a substantial percentage of residents in a CLC-PAC unit could be receiving increasing dosages of opioids without adequate benefit justifies the need for more research and underscores the need for prescribers to assess individuals frequently for ongoing benefit of opioids regardless of diagnosis or mechanism of pain.
Acknowledgments
The authors thank Andrzej Galecki, Corey Powell, and the University of Michigan Consulting for Statistics, Computing and Analytics Research Center for assistance with statistical analysis.
Outcome of Interest
The primary outcome of interest was an individual’s response to as-needed opioids, which we defined as change in the pain score after opioid administration. The pre-opioid pain score was the score that immediately preceded administration of an as-needed opioid. The postopioid administration pain score was the first score after opioid administration if obtained within 3 hours of administration. Scores collected > 3 hours after opioid administration were excluded because they no longer accurately reflected the impact of the opioid due to the short half-lives. Observations were excluded if an opioid was administered without a recorded pain score; this occurred once for 6 individuals. Observations also were excluded if an opioid was administered but the data were captured on the following day (outside of the 24-hour window); this occurred once for 3 individuals.
We calculated a ∆ score by subtracting the postopioid pain rating score from the pre-opioid score. Individual ∆ scores were then averaged over the 24-hour period (range, 2-5 opioid doses). For example, if an individual reported a pre-opioid pain score of 10, and a postopioid pain score of 2, the ∆ was recorded as 8. If the individual’s next pre-opioid score was 10, and post-opioid score was 6, the ∆ was recorded as 4. ∆ scores over the 24-hour period were averaged together to determine that individual’s response to as-needed opioids. In the previous example, the mean ∆ score is 6. Lower mean ∆ scores reflect decreased responsiveness to opioids’ analgesic effect.
Demographic and clinical data were obtained from electronic health record review using a standardized assessment form. These data included information about medical and psychiatric comorbidities, specialist consultations, and CLC-PAC unit admission indications and diagnoses. Medications of interest were categorized as antidepressants, antipsychotics, benzodiazepines, muscle relaxants, hypnotics, stimulants, antiepileptic drugs/mood stabilizers (including gabapentin and pregabalin), and all adjuvant analgesics. Adjuvant analgesics were defined as medications administered for pain as documented by chart notes or those ordered as needed for pain, and analyzed as a composite variable. Antidepressants with analgesic properties (serotonin-norepinephrine reuptake inhibitors and tricyclic antidepressants) were considered adjuvant analgesics. Psychiatric information collected included presence of mood, anxiety, and psychotic disorders, and PTSD. SUD information was collected separately from other psychiatric disorders.
Analyses
The study population was described using tabulations for categorical data and means and standard deviations for continuous data. Responsiveness to opioids was analyzed as a continuous variable. Those with higher mean ∆ scores were considered to have pain relatively more responsive to opioids, while lower mean ∆ scores indicated pain less responsive to opioids. We constructed linear regression models controlling for average pre-opioid pain rating scores to explore associations between opioid responsiveness and variables of interest. All analyses were completed using Stata version 15. This study was not adequately powered to detect differences across the spectrum of opioid responsiveness, although the authors have reported differences in this article.
Results
Over the 4-day observational period there were 146 resident-days. Of these, 88 (60.3%) reported at least 1 pain score of ≥ 4. Of those, 61 (41.8%) received ≥ 1 as-needed opioid for pain. We identified 46 resident-days meeting study criteria of ≥ 2 pre- and postanalgesic scores. We identified 41 unique individuals (Figure 1). Two individuals were admitted to the CLC-PAC unit on 2 of the 4 observation days, and 1 individual was admitted to the CLC-PAC unit on 3 of the 4 observation days. For individuals admitted several days, we included data only from the initial observation day.
Response to opioids varied greatly in this sample. The mean (SD) ∆ pain score was 3.4 (1.6) and ranged from 0.5 to 6.3. Using linear regression, we found no relationship between admission indication, medical comorbidities (including active cancer), and opioid responsiveness (Table).
Psychiatric disorders were highly prevalent, with 25 individuals (61.0%) having ≥ 1 any psychiatric diagnosis identified on chart review. The presence of any psychiatric diagnosis was significantly associated with reduced responsiveness to opioids (β = −1.08; 95% CI, −2.04 to −0.13; P = .03). SUDs also were common, with 17 individuals (41.5%) having an active SUD; most were tobacco/nicotine. Twenty-six veterans (63.4%) had documentation of SUD in remission with 19 (46.3%) for substances other than tobacco/nicotine. There was no indication that any veteran in the sample was prescribed medication for opioid use disorder (OUD) at the time of observation. There was no relationship between opioid responsiveness and SUDs, neither active or in remission. Consults to other services that suggested distress or difficult-to-control symptoms also were frequent. Consults to the pain service were significantly associated with reduced responsiveness to opioids (β = −1.75; 95% CI, −3.33 to −0.17; P = .03). Association between psychiatry consultation and reduced opioid responsiveness trended toward significance (β = −0.95; 95% CI, −2.06 to 0.17; P = .09) (Figures 2 and 3). There was no significant association with palliative medicine consultation and opioid responsiveness.
A poorer response to opioids was associated with a significantly higher as-needed opioid dosage (β = −0.02; 95% CI, −0.04 to −0.01; P = .002) as well as a trend toward higher total opioid dosage (β = −0.005; 95% CI, −0.01 to 0.0003; P = .06) (Figure 4). Thirty-eight (92.7%) participants received nonopioid adjuvant analgesics for pain. More than half (56.1%) received antidepressants or gabapentinoids (51.2%), although we did not assess whether they were prescribed for pain or another indication. We did not identify a relationship between any specific psychoactive drug class and opioid responsiveness in this sample.
Discussion
This exploratory study used readily available administrative data in a CLC-PAC unit to assess responsiveness to opioids via a numeric mean ∆ score, with higher values indicating more pain relief in response to opioids. We then constructed linear regression models to characterize the relationship between the mean ∆ score and factors known to be associated with difficult-to-control pain and psychosocial distress. As expected, opioid responsiveness was highly variable among residents; some residents experienced essentially no reduction in pain, on average, despite receiving opioids. Psychiatric comorbidity, higher dosage in OMEs, and the presence of a pain service consult significantly correlated with poorer response to opioids. To our knowledge, this is the first study to quantify opioid responsiveness and describe the relationship with clinical correlates in the understudied PAC population.
Earlier research has demonstrated a relationship between the presence of psychiatric disorders and increased likelihood of receiving any analgesics among veterans residing in PAC.9 Our study adds to the literature by quantifying opioid response using readily available administrative data and examining associations with psychiatric diagnoses. These findings highlight the possibility that attempting to treat high levels of pain by escalating the opioid dosage in patients with a comorbid psychiatric diagnosis should be re-addressed, particularly if there is no meaningful pain reduction at lower opioid dosages. Our sample had a variety of admission diagnoses and medical comorbidities, however, we did not identify a relationship with opioid responsiveness, including an active cancer diagnosis. Although SUDs were highly prevalent in our sample, there was no relationship with opioid responsiveness. This suggests that lack of response to opioids is not merely a matter of drug tolerance or an indication of drug-seeking behavior.
Factors Impacting Response
Many factors could affect whether an individual obtains an adequate analgesic response to opioids or other pain medications, including variations in genes encoding opioid receptors and hepatic enzymes involved in drug metabolism and an individual’s opioid exposure history.13 The phenomenon of requiring more drug to produce the same relief after repeated exposures (ie, tolerance) is well known.14 Opioid-induced hyperalgesia is a phenomenon whereby a patient’s overall pain increases while receiving opioids, but each opioid dose might be perceived as beneficial.15 Increasingly, psychosocial distress is an important factor in opioid response. Adverse selection is the process culminating in those with psychosocial distress and/or SUDs being prescribed more opioids for longer durations.16 Our data suggests that this process could play a role in PAC settings. In addition, exaggerating pain to obtain additional opioids for nonmedical purposes, such as euphoria or relaxation, also is possible.17
When clinically assessing an individual whose pain is not well controlled despite escalating opioid dosages, prescribers must consider which of these factors likely is predominant. However, the first step of determining who has a poor opioid response is not straightforward. Directly asking patients is challenging; many individuals perceive opioids to be helpful while simultaneously reporting inadequately controlled pain.7,8 The primary value of this study is the possibility of providing prescribers a quick, simple method of assessing a patient’s response to opioids. Using this method, individuals who are responding poorly to opioids, including those who might exaggerate pain for secondary gain, could be identified. Health care professionals could consider revisiting pain management strategies, assess for the presence of OUD, or evaluate other contributors to inadequately controlled pain. Although we only collected data regarding response to opioids in this study, any pain medication administered as needed (ie, nonsteroidal anti-inflammatory drugs, acetaminophen) could be analyzed using this methodology, allowing identification of other helpful pain management strategies. We began the validation process with extensive chart review, but further validation is required before this method can be applied to routine clinical practice.
Patients who report uncontrolled pain despite receiving opioids are a clinically challenging population. The traditional strategy has been to escalate opioids, which is recommended by the World Health Organization stepladder approach for patients with cancer pain and limited life expectancy.18 Applying this approach to a general population of patients with chronic pain is ineffective and dangerous.19 The CDC and the VA/US Department of Defense (VA/DoD) guidelines both recommend carefully reassessing risks and benefits at total daily dosages > 50 OME and avoid increasing dosages to > 90 OME daily in most circumstances.5,20 Our finding that participants taking higher dosages of opioids were not more likely to have better control over their pain supports this recommendation.
Limitations
This study has several limitations, the most significant is its small sample size because of the exploratory nature of the project. Results are based on a small pilot sample enriched to include individuals with at least moderate pain who receive opioids frequently at 1 VA CLC-PAC unit; therefore, the results might not be representative of all veterans or a more general population. Our small sample size limits power to detect small differences. Data collected should be used to inform formal power calculations before subsequent larger studies to select adequate sample size. Validation studies, including samples from the same population using different dates, which reproduce findings are an important step. Moreover, we only had data on a single dimension of pain (intensity/severity), as measured by the pain scale, which nursing staff used to make a real-time clinical decision of whether to administer an as-needed opioid. Future studies should consider using pain measures that provide multidimensional assessment (ie, severity, functional interference) and/or were developed specifically for veterans, such as the Defense and Veterans Pain Rating Scale.21
Our study was cross-sectional in nature and addressed a single 24-hour period of data per participant. The years of data collection (2016 and 2017) followed a decline in overall opioid prescribing that has continued, likely influenced by CDC and VA/DoD guidelines.22 It is unclear whether our observations are an accurate reflection of individuals’ response over time or whether prescribing practices in PAC have shifted.
We did not consider the type of pain being treated or explore clinicians’ reasons for prescribing opioids, therefore limiting our ability to know whether opioids were indicated. Information regarding OUD and other SUDs was limited to what was documented in the chart during the CLC-PAC unit admission. We did not have information on length of exposure to opioids. It is possible that opioid tolerance could play a role in reducing opioid responsiveness. However, simple tolerance would not be expected to explain robust correlations with psychiatric comorbidities. Also, simple tolerance would be expected to be overcome with higher opioid dosages, whereas our study demonstrates less responsiveness. These data suggests that some individuals’ pain might be poorly opioid responsive, and psychiatric factors could increase this risk. We used a novel data source in combination with chart review; to our knowledge, barcode medication administration data have not been used in this manner previously. Future work needs to validate this method, using larger sample sizes and several clinical sites. Finally, we used regression models that controlled for average pre-opioid pain rating scores, which is only 1 covariate important for examining effects. Larger studies with adequate power should control for multiple covariates known to be associated with pain and opioid response.
Conclusions
Opioid responsiveness is important clinically yet challenging to assess. This pilot study identifies a way of classifying pain as relatively opioid nonresponsive using administrative data but requires further validation before considering scaling for more general use. The possibility that a substantial percentage of residents in a CLC-PAC unit could be receiving increasing dosages of opioids without adequate benefit justifies the need for more research and underscores the need for prescribers to assess individuals frequently for ongoing benefit of opioids regardless of diagnosis or mechanism of pain.
Acknowledgments
The authors thank Andrzej Galecki, Corey Powell, and the University of Michigan Consulting for Statistics, Computing and Analytics Research Center for assistance with statistical analysis.
1. Marshall TL, Reinhardt JP. Pain management in the last 6 months of life: predictors of opioid and non-opioid use. J Am Med Dir Assoc. 2019;20(6):789-790. doi:10.1016/j.jamda.2019.02.026
2. Tait RC, Chibnall JT. Pain in older subacute care patients: associations with clinical status and treatment. Pain Med. 2002;3(3):231-239. doi:10.1046/j.1526-4637.2002.02031.x
3. Pimentel CB, Briesacher BA, Gurwitz JH, Rosen AB, Pimentel MT, Lapane KL. Pain management in nursing home residents with cancer. J Am Geriatr Soc. 2015;63(4):633-641. doi:10.1111/jgs.13345
4. Hunnicutt JN, Tjia J, Lapane KL. Hospice use and pain management in elderly nursing home residents with cancer. J Pain Symptom Manage. 2017;53(3):561-570. doi:10.1016/j.jpainsymman.2016.10.369
5. Dowell D, Haegerich TM, Chou R. CDC guideline for prescribing opioids for chronic pain — United States, 2016. MMWR Recomm Rep. 2016;65(No. RR-1):1-49. doi:10.15585/mmwr.rr6501e1
6. Oliva EM, Bowe T, Tavakoli S, et al. Development and applications of the Veterans Health Administration’s Stratification Tool for Opioid Risk Mitigation (STORM) to improve opioid safety and prevent overdose and suicide. Psychol Serv. 2017;14(1):34-49. doi:10.1037/ser0000099
7. Goesling J, Moser SE, Lin LA, Hassett AL, Wasserman RA, Brummett CM. Discrepancies between perceived benefit of opioids and self-reported patient outcomes. Pain Med. 2018;19(2):297-306. doi:10.1093/pm/pnw263
8. Sullivan M, Von Korff M, Banta-Green C. Problems and concerns of patients receiving chronic opioid therapy for chronic non-cancer pain. Pain. 2010;149(2):345-353. doi:10.1016/j.pain.2010.02.037
9. Brennan PL, Greenbaum MA, Lemke S, Schutte KK. Mental health disorder, pain, and pain treatment among long-term care residents: evidence from the Minimum Data Set 3.0. Aging Ment Health. 2019;23(9):1146-1155. doi:10.1080/13607863.2018.1481922
10. Woo A, Lechner B, Fu T, et al. Cut points for mild, moderate, and severe pain among cancer and non-cancer patients: a literature review. Ann Palliat Med. 2015;4(4):176-183. doi:10.3978/j.issn.2224-5820.2015.09.04
11. Centers for Disease Control and Prevention. Calculating total daily dose of opioids for safer dosage. 2017. Accessed December 15, 2021. https://www.cdc.gov/drugoverdose/pdf/calculating_total_daily_dose-a.pdf
12. Nielsen S, Degenhardt L, Hoban B, Gisev N. Comparing opioids: a guide to estimating oral morphine equivalents (OME) in research. NDARC Technical Report No. 329. National Drug and Alcohol Research Centre; 2014. Accessed December 15, 2021. http://www.drugsandalcohol.ie/22703/1/NDARC Comparing opioids.pdf
13. Smith HS. Variations in opioid responsiveness. Pain Physician. 2008;11(2):237-248.
14. Collin E, Cesselin F. Neurobiological mechanisms of opioid tolerance and dependence. Clin Neuropharmacol. 1991;14(6):465-488. doi:10.1097/00002826-199112000-00001
15. Higgins C, Smith BH, Matthews K. Evidence of opioid-induced hyperalgesia in clinical populations after chronic opioid exposure: a systematic review and meta-analysis. Br J Anaesth. 2019;122(6):e114-e126. doi:10.1016/j.bja.2018.09.019
16. Howe CQ, Sullivan MD. The missing ‘P’ in pain management: how the current opioid epidemic highlights the need for psychiatric services in chronic pain care. Gen Hosp Psychiatry. 2014;36(1):99-104. doi:10.1016/j.genhosppsych.2013.10.003
17. Substance Abuse and Mental Health Services Administration. Key substance use and mental health indicators in the United States: results from the 2018 National Survey on Drug Use and Health. HHS Publ No PEP19-5068, NSDUH Ser H-54. 2019;170:51-58. Accessed December 15, 2021. https://www.samhsa.gov/data/sites/default/files/cbhsq-reports/NSDUHNationalFindingsReport2018/NSDUHNationalFindingsReport2018.pdf
18. World Health Organization. WHO’s cancer pain ladder for adults. Accessed September 21, 2018. www.who.int/ncds/management/palliative-care/Infographic-cancer-pain-lowres.pdf
19. Ballantyne JC, Kalso E, Stannard C. WHO analgesic ladder: a good concept gone astray. BMJ. 2016;352:i20. doi:10.1136/bmj.i20
20. The Opioid Therapy for Chronic Pain Work Group. VA/DoD clinical practice guideline for opioid therapy for chronic pain. US Dept of Veterans Affairs and Dept of Defense; 2017. Accessed December 15, 2021. https://www.healthquality.va.gov/guidelines/Pain/cot/VADoDOTCPG022717.pdf
21. Defense & Veterans Pain Rating Scale (DVPRS). Defense & Veterans Center for Integrative Pain Management. Accessed July 21, 2021. https://www.dvcipm.org/clinical-resources/defense-veterans-pain-rating-scale-dvprs/
22. Guy GP Jr, Zhang K, Bohm MK, et al. Vital signs: changes in opioid prescribing in the United States, 2006–2015. MMWR Morb Mortal Wkly Rep. 2017;66(26):697-704. doi:10.15585/mmwr.mm6626a4
1. Marshall TL, Reinhardt JP. Pain management in the last 6 months of life: predictors of opioid and non-opioid use. J Am Med Dir Assoc. 2019;20(6):789-790. doi:10.1016/j.jamda.2019.02.026
2. Tait RC, Chibnall JT. Pain in older subacute care patients: associations with clinical status and treatment. Pain Med. 2002;3(3):231-239. doi:10.1046/j.1526-4637.2002.02031.x
3. Pimentel CB, Briesacher BA, Gurwitz JH, Rosen AB, Pimentel MT, Lapane KL. Pain management in nursing home residents with cancer. J Am Geriatr Soc. 2015;63(4):633-641. doi:10.1111/jgs.13345
4. Hunnicutt JN, Tjia J, Lapane KL. Hospice use and pain management in elderly nursing home residents with cancer. J Pain Symptom Manage. 2017;53(3):561-570. doi:10.1016/j.jpainsymman.2016.10.369
5. Dowell D, Haegerich TM, Chou R. CDC guideline for prescribing opioids for chronic pain — United States, 2016. MMWR Recomm Rep. 2016;65(No. RR-1):1-49. doi:10.15585/mmwr.rr6501e1
6. Oliva EM, Bowe T, Tavakoli S, et al. Development and applications of the Veterans Health Administration’s Stratification Tool for Opioid Risk Mitigation (STORM) to improve opioid safety and prevent overdose and suicide. Psychol Serv. 2017;14(1):34-49. doi:10.1037/ser0000099
7. Goesling J, Moser SE, Lin LA, Hassett AL, Wasserman RA, Brummett CM. Discrepancies between perceived benefit of opioids and self-reported patient outcomes. Pain Med. 2018;19(2):297-306. doi:10.1093/pm/pnw263
8. Sullivan M, Von Korff M, Banta-Green C. Problems and concerns of patients receiving chronic opioid therapy for chronic non-cancer pain. Pain. 2010;149(2):345-353. doi:10.1016/j.pain.2010.02.037
9. Brennan PL, Greenbaum MA, Lemke S, Schutte KK. Mental health disorder, pain, and pain treatment among long-term care residents: evidence from the Minimum Data Set 3.0. Aging Ment Health. 2019;23(9):1146-1155. doi:10.1080/13607863.2018.1481922
10. Woo A, Lechner B, Fu T, et al. Cut points for mild, moderate, and severe pain among cancer and non-cancer patients: a literature review. Ann Palliat Med. 2015;4(4):176-183. doi:10.3978/j.issn.2224-5820.2015.09.04
11. Centers for Disease Control and Prevention. Calculating total daily dose of opioids for safer dosage. 2017. Accessed December 15, 2021. https://www.cdc.gov/drugoverdose/pdf/calculating_total_daily_dose-a.pdf
12. Nielsen S, Degenhardt L, Hoban B, Gisev N. Comparing opioids: a guide to estimating oral morphine equivalents (OME) in research. NDARC Technical Report No. 329. National Drug and Alcohol Research Centre; 2014. Accessed December 15, 2021. http://www.drugsandalcohol.ie/22703/1/NDARC Comparing opioids.pdf
13. Smith HS. Variations in opioid responsiveness. Pain Physician. 2008;11(2):237-248.
14. Collin E, Cesselin F. Neurobiological mechanisms of opioid tolerance and dependence. Clin Neuropharmacol. 1991;14(6):465-488. doi:10.1097/00002826-199112000-00001
15. Higgins C, Smith BH, Matthews K. Evidence of opioid-induced hyperalgesia in clinical populations after chronic opioid exposure: a systematic review and meta-analysis. Br J Anaesth. 2019;122(6):e114-e126. doi:10.1016/j.bja.2018.09.019
16. Howe CQ, Sullivan MD. The missing ‘P’ in pain management: how the current opioid epidemic highlights the need for psychiatric services in chronic pain care. Gen Hosp Psychiatry. 2014;36(1):99-104. doi:10.1016/j.genhosppsych.2013.10.003
17. Substance Abuse and Mental Health Services Administration. Key substance use and mental health indicators in the United States: results from the 2018 National Survey on Drug Use and Health. HHS Publ No PEP19-5068, NSDUH Ser H-54. 2019;170:51-58. Accessed December 15, 2021. https://www.samhsa.gov/data/sites/default/files/cbhsq-reports/NSDUHNationalFindingsReport2018/NSDUHNationalFindingsReport2018.pdf
18. World Health Organization. WHO’s cancer pain ladder for adults. Accessed September 21, 2018. www.who.int/ncds/management/palliative-care/Infographic-cancer-pain-lowres.pdf
19. Ballantyne JC, Kalso E, Stannard C. WHO analgesic ladder: a good concept gone astray. BMJ. 2016;352:i20. doi:10.1136/bmj.i20
20. The Opioid Therapy for Chronic Pain Work Group. VA/DoD clinical practice guideline for opioid therapy for chronic pain. US Dept of Veterans Affairs and Dept of Defense; 2017. Accessed December 15, 2021. https://www.healthquality.va.gov/guidelines/Pain/cot/VADoDOTCPG022717.pdf
21. Defense & Veterans Pain Rating Scale (DVPRS). Defense & Veterans Center for Integrative Pain Management. Accessed July 21, 2021. https://www.dvcipm.org/clinical-resources/defense-veterans-pain-rating-scale-dvprs/
22. Guy GP Jr, Zhang K, Bohm MK, et al. Vital signs: changes in opioid prescribing in the United States, 2006–2015. MMWR Morb Mortal Wkly Rep. 2017;66(26):697-704. doi:10.15585/mmwr.mm6626a4
Pollution levels linked to physical and mental health problems
Other analyses of data have found environmental air pollution from sources such as car exhaust and factory output can trigger an inflammatory response in the body. What’s new about a study published in RMD Open is that it explored an association between long-term exposure to pollution and risk of autoimmune diseases, wrote Giovanni Adami, MD, of the University of Verona (Italy) and colleagues.
“Environmental air pollution, according to the World Health Organization, is a major risk to health and 99% of the population worldwide is living in places where recommendations for air quality are not met,” said Dr. Adami in an interview. The limited data on the precise role of air pollution on rheumatic diseases in particular prompted the study, he said.
To explore the potential link between air pollution exposure and autoimmune disease, the researchers reviewed medical information from 81,363 adults via a national medical database in Italy; the data were submitted between June 2016 and November 2020.
The average age of the study population was 65 years, and 92% were women; 22% had at least one coexisting health condition. Each study participant was linked to local environmental monitoring via their residential postcode.
The researchers obtained details about concentrations of particulate matter in the environment from the Italian Institute of Environmental Protection that included 617 monitoring stations in 110 Italian provinces. They focused on concentrations of 10 and 2.5 (PM10 and PM2.5).
Exposure thresholds of 30 mcg/m3 for PM10 and 20 mcg/m3 for PM2.5 are generally considered harmful to health, they noted. On average, the long-term exposure was 16 mcg/m3 for PM2.5 and 25 mcg/m3 for PM10 between 2013 and 2019.
Overall, 9,723 individuals (12%) were diagnosed with an autoimmune disease between 2016 and 2020.
Exposure to PM10 was associated with a 7% higher risk of diagnosis with any autoimmune disease for every 10 mcg/m3 increase in concentration, but no association appeared between PM2.5 exposure and increased risk of autoimmune diseases.
However, in an adjusted model, chronic exposure to PM10 above 30 mcg/m3 and to PM2.5 above 20 mcg/m3 were associated with a 12% and 13% higher risk, respectively, of any autoimmune disease.
Chronic exposure to high levels of PM10 was specifically associated with a higher risk of rheumatoid arthritis, but no other autoimmune diseases. Chronic exposure to high levels of PM2.5 was associated with a higher risk of rheumatoid arthritis, connective tissue diseases, and inflammatory bowel diseases.
In their discussion, the researchers noted that the smaller diameter of PM2.5 molecules fluctuate less in response to rain and other weather, compared with PM10 molecules, which might make them a more accurate predictor of exposure to chronic air pollution.
Other analyses have found that environmental air pollution from sources such as car exhaust and factory output can trigger an inflammatory response in the body. What’s new about a study published in RMD Open is that it explored an association between long-term exposure to pollution and risk of autoimmune diseases, wrote Giovanni Adami, MD, of the University of Verona (Italy) and colleagues.
“Environmental air pollution, according to the World Health Organization, is a major risk to health and 99% of the population worldwide is living in places where recommendations for air quality are not met,” said Dr. Adami in an interview. The limited data on the precise role of air pollution on rheumatic diseases in particular prompted the study, he said.
To explore the potential link between air pollution exposure and autoimmune disease, the researchers reviewed medical information from 81,363 adults via a national medical database in Italy; the data were submitted between June 2016 and November 2020.
The average age of the study population was 65 years, and 92% were women; 22% had at least one coexisting health condition. Each study participant was linked to local environmental monitoring via their residential postcode.
The researchers obtained details about concentrations of particulate matter in the environment from the Italian Institute of Environmental Protection, which included 617 monitoring stations in 110 Italian provinces. They focused on particulate matter with diameters of 10 mcm or less (PM10) and 2.5 mcm or less (PM2.5).
Exposure above thresholds of 30 mcg/m3 for PM10 and 20 mcg/m3 for PM2.5 is generally considered harmful to health, they noted. On average, long-term exposure was 16 mcg/m3 for PM2.5 and 25 mcg/m3 for PM10 between 2013 and 2019.
Overall, 9,723 individuals (12%) were diagnosed with an autoimmune disease between 2016 and 2020.
Exposure to PM10 was associated with a 7% higher risk of diagnosis with any autoimmune disease for every 10 mcg/m3 increase in concentration, but no association appeared between PM2.5 exposure and increased risk of autoimmune diseases.
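The per-increment result above can be translated into an implied risk at other exposure increases. As a minimal sketch (not from the study itself), assume a proportional-hazards model in which the reported 7% higher risk per 10 mcg/m3 of PM10 compounds log-linearly; the function name and the compounding assumption are illustrative, not the authors' method:

```python
def implied_hazard_ratio(pm10_increase_mcg_m3: float, hr_per_10: float = 1.07) -> float:
    """Hazard ratio implied by a given PM10 increase, assuming the
    per-10-mcg/m3 effect (HR 1.07) compounds log-linearly."""
    return hr_per_10 ** (pm10_increase_mcg_m3 / 10)

# Under this assumption, a 20 mcg/m3 rise implies roughly a 14% higher risk:
print(round(implied_hazard_ratio(20), 3))  # 1.145
```

This is only an interpretive aid; the study reports associations per 10 mcg/m3 increment and does not establish that the effect scales this way across the full exposure range.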
However, in an adjusted model, chronic exposure to PM10 above 30 mcg/m3 and to PM2.5 above 20 mcg/m3 were associated with a 12% and 13% higher risk, respectively, of any autoimmune disease.
Chronic exposure to high levels of PM10 was specifically associated with a higher risk of rheumatoid arthritis, but no other autoimmune diseases. Chronic exposure to high levels of PM2.5 was associated with a higher risk of rheumatoid arthritis, connective tissue diseases, and inflammatory bowel diseases.
In their discussion, the researchers noted that concentrations of the smaller-diameter PM2.5 particles fluctuate less in response to rain and other weather than PM10 concentrations do, which might make PM2.5 a more accurate indicator of chronic air pollution exposure.
The study findings were limited by several factors including the observational design, which prohibits the establishment of cause, and a lack of data on the start of symptoms and dates of diagnoses for autoimmune diseases, the researchers noted. Other limitations include the high percentage of older women in the study, which may limit generalizability, and the inability to account for additional personal exposure to pollutants outside of the environmental exposure, they said.
However, the results were strengthened by the large sample size and wide geographic distribution with variable pollution exposure, they said.
“Unfortunately, we were not surprised at all” by the findings, Dr. Adami said in an interview.
“The biological rationale underpinning our findings is strong. Nevertheless, the magnitude of the effect was overwhelming. In addition, we saw an effect even at threshold of exposure that is widely considered as safe,” Dr. Adami noted.
Clinicians have been taught to consider cigarette smoking or other lifestyle behaviors as major risk factors for the development of several autoimmune diseases, said Dr. Adami. “In the future, we probably should include air pollution exposure as a risk factor as well. Interestingly, there is also accumulating evidence linking acute exposure to environmental air pollution with flares of chronic arthritis,” he said.
“Our study could have direct societal and political consequences,” and might help direct policy makers’ decisions on addressing strategies aimed to reduce fossil emissions, he said. As for additional research, “we certainly need multination studies to confirm our results on a larger scale,” Dr. Adami emphasized. “In addition, it is time to take action and start designing interventions aimed to reduce acute and chronic exposure to air pollution in patients suffering from RMDs.”
Consider the big picture of air quality
The Italian study is especially timely “given our evolving and emerging understanding of environmental risk factors for acute and chronic diseases, which we must first understand before we can address,” said Eileen Barrett, MD, of the University of New Mexico, Albuquerque, in an interview.
“I am largely surprised about the findings, as most physicians aren’t studying ambient air quality and risk for autoimmune disease,” said Dr. Barrett. “More often we think of air quality when we think of risk for respiratory diseases than autoimmune diseases, per se,” she said.
“There are several take-home messages from this study,” said Dr. Barrett. “The first is that we need more research to understand the consequences of air pollutants on health. Second, this study reminds us to think broadly about how air quality and our environment can affect health. And third, all clinicians should be committed to promoting science that can improve public health and reduce death and disability,” she emphasized.
The findings do not specifically reflect associations between pollution and other conditions such as chronic obstructive pulmonary disease and asthma, although previous studies have shown an association between asthma and COPD exacerbations and air pollution, Dr. Barrett said.
“Further research will be needed to confirm the associations reported in this study,” Dr. Barrett said.
More research in other countries, including research related to other autoimmune diseases, and with other datasets on population and community level risks from poor air quality, would be helpful, and that information could be used to advise smart public policy, Dr. Barrett added.
Air pollution’s mental health impact
Air pollution’s effects extend beyond the physical to the psychological, according to a new study of depression in teenagers published in Developmental Psychology.
Previous research on the environmental factors associated with depressive symptoms in teens has focused mainly on individual and family level contributors; the impact of the physical environment has not been well studied, the investigators, Erika M. Manczak, PhD, of the University of Denver and colleagues, wrote.
In their paper, the authors found a significant impact of neighborhood ozone exposure on the trajectory of depressive symptoms in teens over a 4-year period.
“Given that inhaling pollution activates biological pathways implicated in the development of depression, including immune, cardiovascular, and neurodevelopmental processes, exposure to ambient air pollution may influence the development and/or trajectory of depressive symptoms in youth,” they said.
The researchers recruited 213 adolescents in the San Francisco Bay area through local advertisements. The participants were aged 9-13 years at baseline, with an average age of 11 years. A total of 121 were female, 47% were white, 8.5% were African American, 12.3% were Asian, 10.4% were nonwhite Latin, and 21.7% were biracial or another ethnicity. The participants self-reported depressive symptoms and other psychopathology symptoms up to three times during the study period. Ozone exposure was calculated based on home addresses.
After controlling for other personal, family, and neighborhood variables, the researchers found that higher levels of ozone exposure were significantly associated with increased depressive symptoms over time, and the slope of the trajectory of depressive symptoms became steeper as ozone levels increased (P less than .001). Ozone did not significantly predict the trajectory of any other psychopathology symptoms.
“The results of this study provide preliminary support for the possibility that ozone is an overlooked contributor to the development or course of youth depressive symptoms,” the researchers wrote in their discussion.
“Interestingly, the association between ozone and symptom trajectories as measured by Anxious/Depressed subscale of the [Youth Self-Report] was not as strong as it was for the [Children’s Depression Inventory-Short Version] or Withdrawn/Depressed scales, suggesting that associations are more robust for behavioral withdrawal symptoms of depression than for other types of symptoms,” they noted.
The study findings were limited by the use of self-reports and by the inability of the study design to show causality, the researchers said. Other limitations include the use of less precise average ozone assessments, the lack of assessment of biological pathways for risk, the lack of formal psychiatric diagnoses, and the small geographic region included in the study, they said.
However, the results provide preliminary evidence that ozone exposure is a potential contributing factor to depressive symptoms in youth, and serve as a jumping-off point for future research, they noted. Future studies should address changes in systemic inflammation, neurodevelopment, or stress reactivity, as well as concurrent psychosocial or biological factors, and temporal associations between air pollution and mental health symptoms, they concluded.
Environmental factors drive inflammatory responses
Peter L. Loper Jr., MD, considers the findings of the Developmental Psychology study to be unsurprising but important – because air pollution is simply getting worse.
“As the study authors cite, there is sufficient data correlating ozone to negative physical health outcomes in youth, but a paucity of data exploring the impact of poor air quality on mental health outcomes in this demographic,” noted Dr. Loper, of the University of South Carolina, Columbia, in an interview.
“As discussed by the study researchers, any environmental exposure that increases immune-mediated inflammation can result in negative health outcomes. In fact, there is already data to suggest that similar cytokines, or immune cell signalers, that get released by our immune system due to environmental exposures and that contribute to asthma, may also be implicated in depression and other mental health problems,” he noted.
“Just like downstream symptom indicators of physical illnesses such as asthma are secondary to immune-mediated pulmonary inflammation, downstream symptom indicators of mental illness, such as depression, are secondary to immune-mediated neuroinflammation,” Dr. Loper emphasized. “The most well-characterized upstream phenomenon perpetuating the downstream symptom indicators of depression involve neuroinflammatory states due to psychosocial and relational factors such as chronic stress, poor relationships, or substance use. However, any environmental factor that triggers an immune response and inflammation can promote neuroinflammation that manifests as symptoms of mental illness.”
The message for teens with depression and their families is that “we are a product of our environment,” Dr. Loper said. “When our environments are proinflammatory, or cause our immune system to become overactive, then we will develop illness; however, the most potent mediator of inflammation in the brain, and the downstream symptoms of depression, is our relationships with those we love most,” he said.
Dr. Loper suggested that research aimed at identifying other sources of immune-mediated inflammation caused by physical environments, and at better understanding how environmental phenomena like ozone may compound previously established risk factors for mental illness, could be useful.
The RMD Open study received no outside funding, and its authors had no financial conflicts.
The Developmental Psychology study was supported by the National Institute of Mental Health and the Stanford University Precision Health and Integrated Diagnostics Center. The researchers for that report, and Dr. Loper and Dr. Barrett had no conflicts to disclose.
Other analyses of data have found environmental air pollution from sources such as car exhaust and factory output can trigger an inflammatory response in the body. What’s new about a study published in RMD Open is that it explored an association between long-term exposure to pollution and risk of autoimmune diseases, wrote Giovanni Adami, MD, of the University of Verona (Italy) and colleagues.
“Environmental air pollution, according to the World Health Organization, is a major risk to health and 99% of the population worldwide is living in places where recommendations for air quality are not met,” said Dr. Adami in an interview. The limited data on the precise role of air pollution on rheumatic diseases in particular prompted the study, he said.
To explore the potential link between air pollution exposure and autoimmune disease, the researchers reviewed medical information from 81,363 adults via a national medical database in Italy; the data were submitted between June 2016 and November 2020.
The average age of the study population was 65 years, and 92% were women; 22% had at least one coexisting health condition. Each study participant was linked to local environmental monitoring via their residential postcode.
The researchers obtained details about concentrations of particulate matter in the environment from the Italian Institute of Environmental Protection that included 617 monitoring stations in 110 Italian provinces. They focused on concentrations of 10 and 2.5 (PM10 and PM2.5).
Exposure thresholds of 30 mcg/m3 for PM10 and 20 mcg/m3 for PM2.5 are generally considered harmful to health, they noted. On average, the long-term exposure was 16 mcg/m3 for PM2.5 and 25 mcg/m3 for PM10 between 2013 and 2019.
Overall, 9,723 individuals (12%) were diagnosed with an autoimmune disease between 2016 and 2020.
Exposure to PM10 was associated with a 7% higher risk of diagnosis with any autoimmune disease for every 10 mcg/m3 increase in concentration, but no association appeared between PM2.5 exposure and increased risk of autoimmune diseases.
However, in an adjusted model, chronic exposure to PM10 above 30 mcg/m3 and to PM2.5 above 20 mcg/m3 were associated with a 12% and 13% higher risk, respectively, of any autoimmune disease.
Chronic exposure to high levels of PM10 was specifically associated with a higher risk of rheumatoid arthritis, but no other autoimmune diseases. Chronic exposure to high levels of PM2.5 was associated with a higher risk of rheumatoid arthritis, connective tissue diseases, and inflammatory bowel diseases.
In their discussion, the researchers noted that the smaller diameter of PM2.5 molecules fluctuate less in response to rain and other weather, compared with PM10 molecules, which might make them a more accurate predictor of exposure to chronic air pollution.
The study findings were limited by several factors including the observational design, which prohibits the establishment of cause, and a lack of data on the start of symptoms and dates of diagnoses for autoimmune diseases, the researchers noted. Other limitations include the high percentage of older women in the study, which may limit generalizability, and the inability to account for additional personal exposure to pollutants outside of the environmental exposure, they said.
However, the results were strengthened by the large sample size and wide geographic distribution with variable pollution exposure, they said.
“Unfortunately, we were not surprised at all,” by the findings, Dr. Adami said in an interview.
“The biological rationale underpinning our findings is strong. Nevertheless, the magnitude of the effect was overwhelming. In addition, we saw an effect even at threshold of exposure that is widely considered as safe,” Dr. Adami noted.
Clinicians have been taught to consider cigarette smoking or other lifestyle behaviors as major risk factors for the development of several autoimmune diseases, said Dr. Adami. “In the future, we probably should include air pollution exposure as a risk factor as well. Interestingly, there is also accumulating evidence linking acute exposure to environmental air pollution with flares of chronic arthritis,” he said.
“Our study could have direct societal and political consequences,” and might help direct policy makers’ decisions on addressing strategies aimed to reduce fossil emissions, he said. As for additional research, “we certainly need multination studies to confirm our results on a larger scale,” Dr. Adami emphasized. “In addition, it is time to take action and start designing interventions aimed to reduce acute and chronic exposure to air pollution in patients suffering from RMDs.”
Consider the big picture of air quality
The Italian study is especially timely “given our evolving and emerging understanding of environmental risk factors for acute and chronic diseases, which we must first understand before we can address,” said Eileen Barrett, MD, of the University of New Mexico, Albuquerque, in an interview.
“I am largely surprised about the findings, as most physicians aren’t studying ambient air quality and risk for autoimmune disease,” said Dr. Barrett. “More often we think of air quality when we think of risk for respiratory diseases than autoimmune diseases, per se,” she said.
“There are several take-home messages from this study,” said Dr. Barrett. “The first is that we need more research to understand the consequences of air pollutants on health. Second, this study reminds us to think broadly about how air quality and our environment can affect health. And third, all clinicians should be committed to promoting science that can improve public health and reduce death and disability,” she emphasized.
The findings do not specifically reflect associations between pollution and other conditions such as chronic obstructive pulmonary disease and asthma although previous studies have shown an association between asthma and COPD exacerbations and air pollution, Dr. Barrett said.
“Further research will be needed to confirm the associations reported in this study,” Dr. Barrett said.
More research in other countries, including research related to other autoimmune diseases, and with other datasets on population and community level risks from poor air quality, would be helpful, and that information could be used to advise smart public policy, Dr. Barrett added.
Air pollution’s mental health impact
Air pollution’s effects extend beyond physical to the psychological, a new study of depression in teenagers showed. This study was published in Developmental Psychology.
Previous research on the environmental factors associated with depressive symptoms in teens has focused mainly on individual and family level contributors; the impact of the physical environment has not been well studied, the investigators, Erika M. Manczak, PhD, of the University of Denver and colleagues, wrote.
In their paper, the authors found a significant impact of neighborhood ozone exposure on the trajectory of depressive symptoms in teens over a 4-year period.
“Given that inhaling pollution activates biological pathways implicated in the development of depression, including immune, cardiovascular, and neurodevelopmental processes, exposure to ambient air pollution may influence the development and/or trajectory of depressive symptoms in youth,” they said.
The researchers recruited 213 adolescents in the San Francisco Bay area through local advertisements. The participants were aged 9-13 years at baseline, with an average age of 11 years. A total of 121 were female, 47% were white, 8.5% were African American, 12.3% were Asian, 10.4% were nonwhite Latin, and 21.7% were biracial or another ethnicity. The participants self-reported depressive symptoms and other psychopathology symptoms up to three times during the study period. Ozone exposure was calculated based on home addresses.
After controlling for other personal, family, and neighborhood variables, the researchers found that higher levels of ozone exposure were significantly associated with increased depressive symptoms over time, and the slope of trajectory of depressive symptoms became steeper as the ozone levels increased (P less than .001). Ozone did not significantly predict the trajectory of any other psychopathology symptoms.
“The results of this study provide preliminary support for the possibility that ozone is an overlooked contributor to the development or course of youth depressive symptoms,” the researchers wrote in their discussion.
“Interestingly, the association between ozone and symptom trajectories as measured by Anxious/Depressed subscale of the [Youth Self-Report] was not as strong as it was for the [Children’s Depression Inventory-Short Version] or Withdrawn/Depressed scales, suggesting that associations are more robust for behavioral withdrawal symptoms of depression than for other types of symptoms,” they noted.
The study findings were limited by the use of self-reports and by the inability of the study design to show causality, the researchers said. Other limitations include the use of average assessments of ozone that are less precise, lack of assessment of biological pathways for risk, lack of formal psychiatric diagnoses, and the small geographic region included in the study, they said.
However, the results provide preliminary evidence that ozone exposure is a potential contributing factor to depressive symptoms in youth, and serve as a jumping-off point for future research, they noted. Future studies should address changes in systemic inflammation, neurodevelopment, or stress reactivity, as well as concurrent psychosocial or biological factors, and temporal associations between air pollution and mental health symptoms, they concluded.
Environmental factors drive inflammatory responses
Peter L. Loper Jr., MD, considers the findings of the Developmental Psychology study to be unsurprising but important – because air pollution is simply getting worse.
“As the study authors cite, there is sufficient data correlating ozone to negative physical health outcomes in youth, but a paucity of data exploring the impact of poor air quality on mental health outcomes in this demographic,” noted Dr. Loper, of the University of South Carolina, Columbia, in an interview.
“As discussed by the study researchers, any environmental exposure that increases immune-mediated inflammation can result in negative health outcomes. In fact, there is already data to suggest that similar cytokines, or immune cell signalers, that get released by our immune system due to environmental exposures and that contribute to asthma, may also be implicated in depression and other mental health problems,” he noted.
“Just like downstream symptom indicators of physical illnesses such as asthma are secondary to immune-mediated pulmonary inflammation, downstream symptom indicators of mental illness, such as depression, are secondary to immune-mediated neuroinflammation,” Dr. Loper emphasized. “The most well-characterized upstream phenomenon perpetuating the downstream symptom indicators of depression involve neuroinflammatory states due to psychosocial and relational factors such as chronic stress, poor relationships, or substance use. However, any environmental factor that triggers an immune response and inflammation can promote neuroinflammation that manifests as symptoms of mental illness.”
The message for teens with depression and their families is that “we are a product of our environment,” Dr. Loper said. “When our environments are proinflammatory, or cause our immune system to become overactive, then we will develop illness; however, the most potent mediator of inflammation in the brain, and the downstream symptoms of depression, is our relationships with those we love most,” he said.
Dr. Loper suggested research aimed at identifying other sources of immune-mediated inflammation caused by physical environments and better understanding how environmental phenomenon like ozone may compound previously established risk factors for mental illness could be useful.
The RMD Open study received no outside funding, and its authors had no financial conflicts.
The Developmental Psychology study was supported by the National Institute of Mental Health and the Stanford University Precision Health and Integrated Diagnostics Center. The researchers for that report, Dr. Loper, and Dr. Barrett had no conflicts to disclose.
FROM RMD OPEN
Cancer increases patients’ risk for cardiovascular deaths
A new cancer diagnosis increases patients’ risk for cardiovascular death and nonfatal cardiovascular events, irrespective of cancer type, according to a population-based study.
The retrospective analysis, which included data from more than 200,000 patients with cancer, found that a new cancer diagnosis significantly increased the risk of cardiovascular (CV) death (hazard ratio [HR], 1.33) as well as other CV events, including stroke (HR, 1.44), heart failure (HR, 1.62) and pulmonary embolism (HR, 3.43).
From the results, the researchers concluded that a “new cancer diagnosis is independently associated with a significantly increased risk for cardiovascular death and nonfatal morbidity regardless of cancer site.”
The findings were published in the Journal of the American College of Cardiology: CardioOncology (2022 Mar;4[1]:85-94).
Patients with cancer and cancer survivors are known to have an increased risk for heart failure, but evidence on the risk for other CV outcomes remains less clear. In addition, the authors noted, many cancer therapies – including chest irradiation and chemotherapy – can increase a person’s risk of incident CV disease during treatment and after, but data on the long-term CV risk among cancer survivors conflict.
D. Ian Paterson, MD, of the University of Alberta, Edmonton, and coauthors wanted to clarify how a new cancer diagnosis at various sites and stages might affect a person’s risk for fatal and nonfatal CV events over the long term.
The current analysis included data from 224,016 patients with a new cancer diagnosis identified from an administrative database of more than 4.5 million adults residing in Alberta. The researchers identified 73,360 CV deaths and 470,481 nonfatal CV events between April 2007 and December 2018.
Comparing CV events in those with and in those without cancer, the authors found that patients with cancer had a 33% increased risk for CV mortality over the 12-year study follow-up, after adjusting for sociodemographic data and comorbidities (HR, 1.33; 95% confidence interval [CI], 1.29-1.37). Patients with cancer also had an increased risk for stroke (HR, 1.44), heart failure (HR, 1.62), and pulmonary embolism (HR, 3.43), though not myocardial infarction (HR, 1.01; 95% CI, 0.97-1.05), compared with those without cancer.
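As a rough illustration of how to read these figures (this is not the study's actual Cox regression, just the standard relationship between a hazard ratio and its confidence interval): HRs are estimated on the log scale, so a symmetric 95% CI of the form exp(log HR ± 1.96 × SE) lets one back out the implied standard error and an approximate z statistic.

```python
import math

# Reported for CV death: HR 1.33 (95% CI, 1.29-1.37)
hr, lo, hi = 1.33, 1.29, 1.37

# Width of the CI on the log scale spans 2 * 1.96 standard errors
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
z = math.log(hr) / se  # approximate z statistic for HR != 1

print(f"implied SE of log-HR: {se:.4f}, z = {z:.1f}")
```

The large z value is consistent with the narrow interval excluding 1.0, i.e., a highly significant excess risk; by contrast, the MI interval (0.97-1.05) straddles 1.0, so no significant difference was found there.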
The extent of the risk varied somewhat by cancer stage, time from diagnosis, and cancer type.
A new cancer diagnosis put patients at a significantly higher risk of CV mortality, heart failure, stroke, or pulmonary embolism, regardless of the cancer site, but the risk of CV events was highest for patients with genitourinary, gastrointestinal, thoracic, nervous system, and hematologic malignancies. These patients accounted for more than half of the cancer cohort and more than 70% of the incident CV burden.
Patients with more advanced cancer were at the highest risk for poor CV outcomes, but even those with very early-stage disease faced an elevated risk.
The risk for CV events was greatest in the first year following a cancer diagnosis for all outcomes (HRs, 1.24-8.36) but remained significantly elevated for CV death, heart failure, and pulmonary embolism a decade later.
Overall, the authors concluded that “patients with cancer constitute a high-risk population for CV disease” over the long term and suggested that those with cancer “may benefit from comanagement that includes cardiologists as well as stroke and thrombosis specialists.”
In an accompanying editorial, Hiroshi Ohtsu of Juntendo University in Tokyo and colleagues concluded that the work “has remarkable strengths” and important clinical implications. However, they said that additional steps may be warranted before translating these findings to clinical practice.
For example, the study is limited by its retrospective population-based design and the lack of data on cancer therapy as well as on several patient factors, including ethnicity, smoking, and physical activity.
The study authors agreed, noting that future work should evaluate how cancer therapies and other potential contributors to poor CV outcomes influence patients’ risk.
“Such work would potentially lead to better prediction of CV risk for patients with cancer and survivors and improved prevention and treatment strategies,” they wrote.
The study was supported by a foundation grant from the Canadian Institutes of Health Research. The authors have disclosed no relevant financial relationships. The editorial was supported in part by funding to individual authors from the Japan Society for the Promotion of Science/Ministry of Education, Culture, Sports, Science and Technology, the Ministry of Health, Labour and Welfare, and the Agency for Medical Research and Development.
A version of this article first appeared on Medscape.com.
FROM JOURNAL OF THE AMERICAN COLLEGE OF CARDIOLOGY
Study: Majority of research on homeopathic remedies unpublished or unregistered
Homeopathy is a form of alternative medicine based on the concept that increasing dilution of a substance leads to a stronger treatment effect.
The authors of the new paper, published in BMJ Evidence-Based Medicine, also found that a quarter of the 90 published randomized trials on homeopathic remedies they analyzed had changed their primary outcomes relative to their registrations before publication.
The benefits of homeopathy touted in studies may be greatly exaggerated, suggest the authors, Gerald Gartlehner, MD, of Danube University, Krems, Austria, and colleagues.
The results raise awareness that published homeopathy trials represent a limited proportion of research, skewed toward favorable results, they wrote.
“This likely affects the validity of the body of evidence of homeopathic literature and may substantially overestimate the true treatment effect of homeopathic remedies,” they concluded.
Homeopathy as practiced today was developed approximately 200 years ago in Germany, and despite ongoing debate about its effectiveness, it remains a popular alternative to conventional medicine in many developed countries, the authors noted.
According to the National Institutes of Health, homeopathy is based on the idea of “like cures like,” meaning that a disease can be cured with a substance that produces similar symptoms in healthy people, and the “law of minimum dose,” meaning that a lower dose of medication will be more effective. “Many homeopathic products are so diluted that no molecules of the original substance remain,” according to the NIH.
Homeopathy is not subject to most regulatory requirements, so assessment of effectiveness of homeopathic remedies is limited to published data, the researchers said. “When no information is publicly available about the majority of homeopathic trials, sound conclusions about the efficacy and the risks of using homeopathic medicinal products for treating health conditions are impossible,” they wrote.
Study methods and findings
The researchers examined 17 trial registries for studies involving homeopathic remedies conducted since 2002.
The registries included clinicaltrials.gov, the EU Clinical Trials Register, and the International Clinical Trials Registry Platform up to April 2019 to identify registered homeopathy trials.
To determine whether registered trials were published and to identify trials that were published but unregistered, the researchers examined PubMed, the Allied and Complementary Medicine Database, Embase, and Google Scholar up to April 2021.
They found that approximately 38% of registered trials of homeopathy were never published, and 53% of the published randomized, controlled trials (RCTs) were not registered. Notably, 25% of the trials that were registered and published showed primary outcomes that were changed compared with the registry.
The number of registered homeopathy trials increased significantly over the past 5 years, but approximately one-third (30%) of trials published during the last 5 years were not registered, they said. In a meta-analysis, unregistered RCTs showed significantly greater treatment effects than registered RCTs, with standardized mean differences of –0.53 and –0.14, respectively.
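A standardized mean difference of this kind is commonly computed as Cohen's d: the difference in group means divided by a pooled standard deviation. The sketch below uses made-up symptom scores purely for illustration, not data from any trial in the analysis.

```python
import statistics

def cohens_d(group1, group2):
    """Standardized mean difference between two groups, using a pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.fmean(group1), statistics.fmean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical symptom scores (lower = better); illustrative only.
treated = [4.1, 3.8, 5.0, 4.4, 3.9, 4.6]
placebo = [5.2, 4.9, 5.8, 5.1, 4.7, 5.5]
print(round(cohens_d(treated, placebo), 2))
```

On this convention a negative d favors treatment, so the unregistered trials' pooled -0.53 versus the registered trials' -0.14 means the unregistered literature reported substantially larger apparent benefits.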
The study findings were limited by several factors including the potential for missed records of studies not covered by the registries searched. Other limitations include the analysis of pooled data from homeopathic treatments that may not generalize to personalized homeopathy, and the exclusion of trials labeled as terminated or suspended.
Proceed with caution before recommending use of homeopathic remedies, says expert
Linda Girgis, MD, said the report validated what she already understood: that most homeopathic remedies lack evidence of effectiveness.
The study is especially important at this time in the wake of the COVID-19 pandemic, Dr. Girgis, a family physician in private practice in South River, N.J., said in an interview.
“Many people are promoting treatments that don’t have any evidence that they are effective, and more people are turning to homeopathic treatments not knowing the risks and assuming they are safe,” she continued. “Many people are taking advantage of this and trying to cash in on this with ill-proven remedies.”
Homeopathic remedies become especially harmful when patients think they can use them instead of traditional medicine, she added.
Noting that some homeopathic remedies have been studied and show some evidence that they work, Dr. Girgis said there may be a role for certain ones in primary care.
“An example would be black cohosh or primrose oil for perimenopausal hot flashes. This could be a good alternative when you want to avoid hormonal supplements,” she said.
At the same time, Dr. Girgis advised clinicians to be cautious about suggesting homeopathic remedies to patients.
“Homeopathy seems to be a good money maker if you sell these products. However, you are not protected from liability and can be found more liable for prescribing off-label treatments or those not [Food and Drug Administration] approved,” Dr. Girgis said. Her general message to clinicians: Stick with evidence-based medicine.
Her message to patients who might want to pursue homeopathic remedies is that just because something is “homeopathic” or natural doesn’t mean that it is safe.
“There are some [homeopathic] products that have caused liver damage or other problems,” she explained. “Also, these remedies can interact with other medications.”
The study received no outside funding. The researchers and Dr. Girgis had no financial conflicts to disclose.
FROM BMJ EVIDENCE-BASED MEDICINE
Just one extra drink a day may change the brain
It’s no secret that heavy drinking is linked to potential health problems, from liver damage to a higher risk of cancer. But most people probably wouldn’t think a nightcap every evening is much of a health threat.
Now, new evidence published in Nature Communications suggests that even a single daily drink may be associated with changes in the brain.
Previous research has found that people with alcohol use disorder have structural changes in their brains, compared with healthy people’s brains, such as reduced gray-matter and white-matter volume.
But those findings were in people with a history of heavy drinking, defined by the National Institute on Alcohol Abuse and Alcoholism as more than four drinks a day for men and more than three drinks a day for women.
The national dietary guidelines from the U.S. Department of Health & Human Services advise drinking no more than two standard drinks for men and one drink for women each day. A standard drink in the United States is 12 ounces of beer, 5 ounces of wine, or 1½ ounces of liquor.
But could even this modest amount of alcohol make a difference to our brains?
Researchers examined functional MRI brain scans from 36,678 healthy adults, aged 40-69 years, in the United Kingdom and compared those findings with their weekly alcohol consumption, adjusting for differences in age, sex, height, social and economic status, and country of residence, among other things.
In line with past studies, the researchers found that gray-matter and white-matter volume decreased as alcohol consumption rose, with the loss growing larger the more drinks a person had in a week.
But the researchers also noted that they could tell the difference between brain images of people who never drank alcohol and those who had just one or two drinks a day.
Going from 1 unit of alcohol to 2 – which in the United Kingdom means a full pint of beer or standard glass of wine – was linked to changes similar to 2 years of aging in the brain.
Beyond the comparison with brain aging, it’s not yet clear what the findings mean; the scientists plan further research, including examining the genes of the people who took part in the study.
The study also has several drawbacks. The people who were studied are all middle-aged Europeans, so findings might be different in younger people or those with different ancestries. People also self-reported how much alcohol they drank for the past year, which they might not remember correctly or which might be different from previous years, including past years of heavy drinking.
And since the researchers compared drinking habits with brain imaging at one point in time, it’s not possible to say whether alcohol is actually causing the brain differences they saw.
Still, the findings raise the question of whether national guidelines should be revisited, and whether it’s better to cut that evening drink to a half-glass of wine instead.
A version of this article first appeared on WebMD.com.
FROM NATURE COMMUNICATIONS
Norovirus vaccine candidates employ different approaches
Scientists are trying different approaches to developing vaccines against norovirus, seeking to replicate the success seen in developing shots against rotavirus.
Speaking at the 12th World Congress of the World Society for Pediatric Infectious Diseases (WSPID), Miguel O’Ryan, MD, of the University of Chile, Santiago, presented an overview of candidate vaccines. Dr. O’Ryan has been involved for many years with research on rotavirus vaccines and has branched into work with the somewhat similar norovirus.
With advances in preventing rotavirus, norovirus has emerged in recent years as a leading cause of acute gastroenteritis (AGE) in most countries worldwide. It’s associated with almost 20% of all acute diarrheal cases globally and with an estimated 685 million episodes and 212,000 deaths annually, Dr. O’Ryan and coauthors reported in a review in the journal Viruses.
If successful, norovirus vaccines may be used someday to prevent outbreaks among military personnel, as this contagious virus has the potential to disrupt missions, Dr. O’Ryan and coauthors wrote. They also said people might consider getting norovirus vaccines ahead of trips to prevent traveler’s diarrhea. But most importantly, these kinds of vaccines could reduce diarrhea-associated hospitalizations and deaths of children.
Takeda Pharmaceutical Company, for whom Dr. O’Ryan has done consulting, last year announced a collaboration with Frazier Healthcare Partners to launch HilleVax. Based in Boston, the company is intended to commercialize Takeda’s norovirus vaccine candidate.
The Takeda-HilleVax candidate vaccine injection has advanced as far as phase 2 studies, including a test done over two winter seasons in U.S. Navy recruits. Takeda and U.S. Navy scientists reported in 2020 in the journal Vaccine that the primary efficacy outcome for this test could not be evaluated due to an unexpectedly low number of cases of norovirus. Still, data taken from this study indicate that the vaccine induces a broad immune response, the scientists reported.
In his WSPID presentation, Dr. O’Ryan also mentioned an oral norovirus vaccine candidate that the company Vaxart is developing, referring to this as a “very interesting approach.”
Betting on the gut
Based in South San Francisco, California, Vaxart is pursuing a theory that a vaccine designed to generate mucosal antibodies locally in the intestine, in addition to systemic antibodies in the blood, may better protect against norovirus infection than an injectable vaccine.
“A key ability to protect against norovirus needs to come from an intestinal immune response, and injected vaccines don’t give those very well,” Sean Tucker, PhD, the founder and chief scientific officer of Vaxart, told this news organization in an interview. “We think that’s one of the reasons why our oral approaches can have significant advantages.”
Challenges to developing a norovirus vaccine have included a lack of good animal models to use in research and a lack of an ability to grow the virus well in cell culture, Dr. Tucker said.
Vaxart experienced disruptions in its research during the early stages of the pandemic but has since picked up the pace of its efforts to develop its oral vaccine, Dr. Tucker said during the interview.
In a recent filing with the Securities and Exchange Commission, Vaxart said it resumed its norovirus vaccine program in early 2021 by initiating three clinical studies, including a phase 1b placebo-controlled dose-ranging study in healthy older adults aged 55-80. Data from these trials may be unveiled in the coming months.
Vaxart said that this year it has already initiated a phase 2 norovirus challenge study, which will evaluate safety, immunogenicity, and clinical efficacy of a vaccine candidate against placebo.
A version of this article first appeared on Medscape.com.
Death of pig heart transplant patient is more a beginning than an end
The genetically altered pig’s heart “worked like a rock star, beautifully functioning,” the surgeon who performed the pioneering Jan. 7 xenotransplant procedure said in a press statement on the death of the patient, David Bennett Sr.
“He wasn’t able to overcome what turned out to be devastating – the debilitation from his previous period of heart failure, which was extreme,” said Bartley P. Griffith, MD, clinical director of the cardiac xenotransplantation program at the University of Maryland, Baltimore.
Representatives of the institution aren’t offering many details on the cause of Mr. Bennett’s death on March 8, 60 days after his operation, but said they will elaborate when their findings are formally published. But their comments seem to downplay the unique nature of the implanted heart itself as a culprit and instead implicate the patient’s diminished overall clinical condition and what grew into an ongoing battle with infections.
Mr. Bennett, 57, was bedridden with end-stage heart failure and on extracorporeal membrane oxygenation (ECMO), and had been judged a poor candidate for a ventricular assist device. He reportedly was offered the extraordinary surgery after being turned down for a conventional transplant at several major centers.
“Until day 45 or 50, he was doing very well,” Muhammad M. Mohiuddin, MD, the xenotransplantation program’s scientific director, observed in the statement. But infections soon took advantage of his hobbled immune system.
Given his “preexisting condition and how frail his body was,” Dr. Mohiuddin said, “we were having difficulty maintaining a balance between his immunosuppression and controlling his infection.” Mr. Bennett went into multiple organ failure and “I think that resulted in his passing away.”
Beyond wildest dreams
The surgeons confidently framed Mr. Bennett’s experience as a milestone for heart xenotransplantation. “The demonstration that it was possible, beyond the wildest dreams of most people in the field, even, at this point – that we were able to take a genetically engineered organ and watch it function flawlessly for 9 weeks – is pretty positive in terms of the potential of this therapy,” Dr. Griffith said.
But enough questions linger that others were more circumspect, even as they praised the accomplishment. “There’s no question that this is a historic event,” Mandeep R. Mehra, MD, of Harvard Medical School, and director of the Center for Advanced Heart Disease at Brigham and Women’s Hospital, both in Boston, said in an interview.
Still, “I don’t think we should just conclude that it was the patient’s frailty or death from infection,” Dr. Mehra said. With so few details available, “I would be very careful in prematurely concluding that the problem did not reside with the heart but with the patient. We cannot be sure.”
For example, he noted, “6 to 8 weeks is right around the time when some cardiac complications, like accelerated forms of vasculopathy, could become evident.” Immune-mediated cardiac allograft vasculopathy is a common cause of heart transplant failure.
Or, “it could as easily have been the fact that immunosuppression was modified at 6 to 7 weeks in response to potential infection, which could have led to a cardiac compromise,” Dr. Mehra said. “We just don’t know.”
“It’s really important that this be reported in a scientifically accurate way, because we will all learn from this,” Lori J. West, MD, DPhil, said in an interview.
Little seems to be known for sure about the actual cause of death, “but the fact there was not hyperacute rejection is itself a big step forward. And we know, at least from the limited information we have, that it did not occur,” observed Dr. West, who directs the Alberta Transplant Institute, Edmonton, and the Canadian Donation and Transplantation Research Program. She is a professor of pediatrics with adjunct positions in the departments of surgery and microbiology/immunology.
Dr. West also sees Mr. Bennett’s struggle with infections and adjustments to his unique immunosuppressive regimen, at least as characterized by his care team, as in line with the experience of many heart transplant recipients facing the same threat.
“We already walk this tightrope with every transplant patient,” she said. Typically, they’re put on a somewhat standardized immunosuppressant regimen, “and then we modify it a bit, either increasing or decreasing it, depending on the posttransplant course.” The regimen can become especially intense in response to new signs of rejection, “and you know that that’s going to have an impact on susceptibility to all kinds of infections.”
Full circle
The porcine heart was protected along two fronts against assault from Mr. Bennett’s immune system and other inhospitable aspects of his physiology, either of which could also have been obstacles to success: genetic modification (Revivicor) of the pig that provided the heart, and a singularly aggressive antirejection drug regimen for the patient.
The knockout of three genes targeting specific porcine cell-surface carbohydrates that provoke a strong human antibody response reportedly averted a hyperacute rejection response that would have caused the graft to fail almost immediately.
Other genetic manipulations, some using CRISPR technology, silenced genes encoding porcine endogenous retroviruses. Others were aimed at controlling myocardial growth and stemming graft microangiopathy.
Mr. Bennett himself was treated with powerful immunosuppressants, including an investigational anti-CD40 monoclonal antibody (KPL-404, Kiniksa Pharmaceuticals) that, according to UMSOM, inhibits a well-recognized pathway critical to B-cell proliferation, T-cell activation, and antibody production.
“I suspect the patient may not have had rejection, but unfortunately, that intense immunosuppression really set him up – even if he had been half that age – for a very difficult time,” David A. Baran, MD, a cardiologist from Sentara Advanced Heart Failure Center, Norfolk, Va., who studies transplant immunology, said in an interview.
“This is in some ways like the original heart transplant in 1967, when the ability to do the surgery evolved before understanding of the immunosuppression needed. Four or 5 years later, heart transplantation almost died out, before the development of better immunosuppressants like cyclosporine and later tacrolimus,” Dr. Baran said.
“The current age, when we use less immunosuppression than ever, is based on 30 years of progressive success,” he noted. This landmark xenotransplantation “basically turns back the clock to a time when the intensity of immunosuppression by definition had to be extremely high, because we really didn’t know what to expect.”
Emerging role of xeno-organs
Xenotransplantation has been touted as a potential strategy for expanding the pool of organs available for transplantation. Mr. Bennett’s “breakthrough surgery” takes the world “one step closer to solving the organ shortage crisis,” his surgeon, Dr. Griffith, announced soon after the procedure. “There are simply not enough donor human hearts available to meet the long list of potential recipients.”
But it’s not the only proposed approach. Measures could be taken, for example, to make more efficient use of the human organs that become available, partly by opening the field to additional less-than-ideal hearts and loosening regulatory mandates for projected graft survival.
“Every year, more than two-thirds of donor organs in the United States are discarded. So it’s not actually that we don’t have enough organs, it’s that we don’t have enough organs that people are willing to take,” Dr. Baran said. Still, it’s important to pursue all promising avenues, and “the genetic manipulation pathway is remarkable.”
But “honestly, organs such as kidneys probably make the most sense” for early study of xenotransplantation from pigs, he said. “The waiting list for kidneys is also very long, but if the kidney graft were to fail, the patient wouldn’t die. It would allow us to work out the immunosuppression without putting patients’ lives at risk.”
Often overlooked in assessments of organ demand, Dr. West said, is that “a lot of patients who could benefit from a transplant will never even be listed for a transplant.” It’s not clear why; perhaps they have multiple comorbidities, live too far from a transplant center, “or they’re too big or too small. Even if there were unlimited organs, you could never meet the needs of people who could benefit from transplantation.”
So even if more available donor organs were used, she said, there would still be a gap that xenotransplantation could help fill. “I’m very much in favor of research that allows us to continue to try to find a pathway to xenotransplantation. I think it’s critically important.”
Unquestionably, “we now need to have a dialogue to entertain how a technology like this, using modern medicine with gene editing, is really going to be utilized,” Dr. Mehra said. The Bennett case “does open up the field, but it also raises caution.” There should be broad participation to move the field forward, “coordinated through either societies or nationally allocated advisory committees that oversee the movement of this technology, to the next step.”
Ideally, that next step “would be to do a safety clinical trial in the right patient,” he said. “And the right patient, by definition, would be one who does not have a life-prolonging option, either mechanical circulatory support or allograft transplantation. That would be the goal.”
Dr. Mehra has reported receiving payments to his institution from Abbott for consulting; consulting fees from Janssen, Mesoblast, Broadview Ventures, Natera, Paragonix, Moderna, and the Baim Institute for Clinical Research; and serving on scientific advisory boards for NuPulseCV, Leviticus, and FineHeart. Dr. Baran disclosed consulting for Getinge and LivaNova; speaking for Pfizer; and serving on trial steering committees for CareDx and Procyrion, all unrelated to xenotransplantation. Dr. West has declared no relevant conflicts.
A version of this article first appeared on Medscape.com.
Full circle
The porcine heart was protected on two fronts against assault from Mr. Bennett’s immune system and other inhospitable aspects of his physiology, either of which could have been an obstacle to success: genetic modification (Revivicor) of the pig that provided the heart, and a singularly aggressive antirejection drug regimen for the patient.
The knockout of three genes targeting specific porcine cell-surface carbohydrates that provoke a strong human antibody response reportedly averted a hyperacute rejection response that would have caused the graft to fail almost immediately.
Other genetic manipulations, some using CRISPR technology, silenced genes encoding porcine endogenous retroviruses. Others were aimed at controlling myocardial growth and stemming graft microangiopathy.
Mr. Bennett himself was treated with powerful immunosuppressants, including an investigational anti-CD40 monoclonal antibody (KPL-404, Kiniksa Pharmaceuticals) that, according to UMSOM, inhibits a well-recognized pathway critical to B-cell proliferation, T-cell activation, and antibody production.
“I suspect the patient may not have had rejection, but unfortunately, that intense immunosuppression really set him up – even if he had been half that age – for a very difficult time,” David A. Baran, MD, a cardiologist from Sentara Advanced Heart Failure Center, Norfolk, Va., who studies transplant immunology, said in an interview.
“This is in some ways like the original heart transplant in 1967, when the ability to do the surgery evolved before understanding of the immunosuppression needed. Four or 5 years later, heart transplantation almost died out, before the development of better immunosuppressants like cyclosporine and later tacrolimus,” Dr. Baran said.
“The current age, when we use less immunosuppression than ever, is based on 30 years of progressive success,” he noted. This landmark xenotransplantation “basically turns back the clock to a time when the intensity of immunosuppression by definition had to be extremely high, because we really didn’t know what to expect.”
Emerging role of xeno-organs
Xenotransplantation has been touted as a potential strategy for expanding the pool of organs available for transplantation. Mr. Bennett’s “breakthrough surgery” takes the world “one step closer to solving the organ shortage crisis,” his surgeon, Dr. Griffith, announced soon after the procedure. “There are simply not enough donor human hearts available to meet the long list of potential recipients.”
But it’s not the only proposed approach. Measures could be taken, for example, to make more efficient use of the human organs that become available, partly by opening the field to additional less-than-ideal hearts and loosening regulatory mandates for projected graft survival.
“Every year, more than two-thirds of donor organs in the United States are discarded. So it’s not actually that we don’t have enough organs, it’s that we don’t have enough organs that people are willing to take,” Dr. Baran said. Still, it’s important to pursue all promising avenues, and “the genetic manipulation pathway is remarkable.”
But “honestly, organs such as kidneys probably make the most sense” for early study of xenotransplantation from pigs, he said. “The waiting list for kidneys is also very long, but if the kidney graft were to fail, the patient wouldn’t die. It would allow us to work out the immunosuppression without putting patients’ lives at risk.”
Often overlooked in assessments of organ demand, Dr. West said, is that “a lot of patients who could benefit from a transplant will never even be listed for a transplant.” It’s not clear why; perhaps they have multiple comorbidities, live too far from a transplant center, “or they’re too big or too small. Even if there were unlimited organs, you could never meet the needs of people who could benefit from transplantation.”
So even if more available donor organs were used, she said, there would still be a gap that xenotransplantation could help fill. “I’m very much in favor of research that allows us to continue to try to find a pathway to xenotransplantation. I think it’s critically important.”
Unquestionably, “we now need to have a dialogue to entertain how a technology like this, using modern medicine with gene editing, is really going to be utilized,” Dr. Mehra said. The Bennett case “does open up the field, but it also raises caution.” There should be broad participation to move the field forward, “coordinated through either societies or nationally allocated advisory committees that oversee the movement of this technology, to the next step.”
Ideally, that next step “would be to do a safety clinical trial in the right patient,” he said. “And the right patient, by definition, would be one who does not have a life-prolonging option, either mechanical circulatory support or allograft transplantation. That would be the goal.”
Dr. Mehra has reported receiving payments to his institution from Abbott for consulting; consulting fees from Janssen, Mesoblast, Broadview Ventures, Natera, Paragonix, Moderna, and the Baim Institute for Clinical Research; and serving on scientific advisory boards for NuPulseCV, Leviticus, and FineHeart. Dr. Baran disclosed consulting for Getinge and LivaNova; speaking for Pfizer; and serving on trial steering committees for CareDx and Procyrion, all unrelated to xenotransplantation. Dr. West has declared no relevant conflicts.
A version of this article first appeared on Medscape.com.