Promising topline phase 2 results for novel oral Alzheimer’s drug
Topline results provide “clinical evidence of a modification of multiple AD pathologies associated with amyloid plaque burden,” said John Didsbury, PhD, chief executive officer of T3D Therapeutics Inc., the company developing the drug.
While the primary cognitive endpoints were not met in the overall study population, the data suggest that a “high plasma pTau-217/non–pTau-217 ratio, a marker of AD pathology, likely defines an AD population responsive to T3D-959 therapy,” Dr. Didsbury said.
He noted that no PET imaging (amyloid or tau) or biomarkers were used as entry criteria and that, in hindsight, some participants likely did not have AD, which probably contributed to the negative primary outcome.
The findings from the PIONEER study were presented at the annual Clinical Trials on Alzheimer’s Disease conference.
‘Surprised and shocked’
The PPAR (peroxisome proliferator-activated receptor) family of proteins helps regulate blood sugar and triglyceride levels. The rationale for evaluating PPAR agonists in AD is based on the hypothesis that sporadic AD is fundamentally an age-related metabolic disease.
T3D-959 is the first PPAR delta-activating compound to be developed for the treatment of AD. Uniquely, this drug also activates PPAR gamma, which may provide potential additive or synergistic effects in regulating dysfunctional brain glucose energy and lipid metabolism in AD.
The PIONEER trial tested three doses of T3D-959 (15 mg, 30 mg, and 45 mg) vs. placebo in 250 adults with mild to moderate AD (Mini-Mental State Examination [MMSE] score 14-26, Clinical Dementia Rating [CDR]-Global 0.5-2.0, and CDR-Sum of Boxes [CDR-SB] ≥ 3.0). T3D-959 or placebo was taken once daily for 24 weeks.
In the overall population, the primary endpoints – Alzheimer Disease Assessment Scale-Cognitive subscale (ADAS-Cog11) and Clinical Global Impression of Change (CGIC) – were not met.
“Plain and simple, when we saw this data, we were surprised and shocked,” said Dr. Didsbury, who wondered, “How can placebo be doing so well on a 6-month AD trial?”
“We suspect the presence of non-AD subjects in the trial based on the lower-than-typical number of ApoE4-positive subjects, increased cognitive performance and learning effects observed, and only 45% of trial subjects having a low pTau-217 ratio, a plasma biomarker indicating that they would have no AD pathology,” he explained.
Plasma baseline pTau-217 ratio correlates with AD risk and severity and is a marker of AD pathology. In the subgroup with a high pTau-217 ratio, the ADAS-Cog11 endpoint was met in the 30-mg T3D-959 group vs. the placebo group (–0.74 vs. 1.27; P = .112), “consistent with clinical benefit,” Dr. Didsbury noted.
The secondary endpoint of change in plasma amyloid-beta (Ab)42/40 ratio was also met in the 30-mg T3D-959 group – increasing at week 24 with T3D-959 vs. decreasing with placebo (P = .0206) – with even greater improvement in the high pTau-217 ratio group, in which the improvement in the Ab42/40 ratio was nearly twofold that of the overall group.
T3D-959 had a similar magnitude of effect on Ab42/40 as lecanemab (Leqembi) at 6 months, the researchers point out in their late-breaking abstract.
Biomarkers of all three AD diagnostic criteria (amyloid/tau/neurodegeneration) improved, as did markers of inflammation, insulin resistance, and dysfunctional lipid metabolism – results that demonstrate “peripheral target engagement,” Dr. Didsbury said.
“Along with the strong safety profile of T3D-959, the evidence supports a larger study evaluating T3D-959 30 mg/day in patients with mild to moderate AD and a baseline plasma pTau-217/non–pTau-217 ratio of ≥ 0.015,” the researchers conclude in their abstract.
Lessons learned
Commenting on the research for this article, Rebecca Edelmayer, PhD, senior director of scientific engagement for the Alzheimer’s Association, noted that “the idea behind this treatment is that impaired glucose metabolism in the brain leads to toxic misfolded proteins, including amyloid and tau in people with Alzheimer’s disease.”
“The treatment focuses on improving regulation of glucose and lipid metabolism in the brain. This is one of more than 140 approaches that are being tested today to target the biological drivers and contributors to Alzheimer’s disease,” Dr. Edelmayer said.
Because biomarkers were not used to enroll participants, “there was a high population of people in the trial who did not have Alzheimer’s. This very likely contributed to the negative result,” she noted.
However, the results also suggest that those taking the drug who had a high pTau-217 ratio – and thus are likely to have brain amyloid plaques – had less cognitive decline, she added.
Lessons learned from this negative trial include “the proper dose to balance efficacy and safety, and how to screen participants for their next study,” Dr. Edelmayer said.
Funding for the study was provided by the National Institute on Aging/National Institutes of Health and the Alzheimer’s Association. Dr. Didsbury and Dr. Edelmayer report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM CTAD 2023
Adolescents with atopic dermatitis more likely to have experienced bullying, study finds
TOPLINE:
METHODOLOGY:
- Adolescents with AD have reported appearance-based bullying.
- To evaluate the association between AD and the prevalence and frequency of bullying, researchers analyzed cross-sectional data from adult caregivers of U.S. adolescents aged 12-17 years who participated in the 2021 National Health Interview Survey.
- Logistic regression and ordinal logistic regression were used to compare the prevalence of experiencing one or more bullying encounters during the previous 12 months and the frequency of bullying between adolescents with and those without AD.
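For a rough sense of what these models estimate, the unadjusted odds ratio can be computed directly from the two bullying prevalences reported in the takeaway below. This is a minimal sketch only; the study's actual logistic regressions adjusted for demographics and atopic comorbidities and used survey data, none of which this arithmetic captures.

```python
def odds_ratio(p_exposed, p_unexposed):
    """Unadjusted odds ratio from two prevalences expressed as proportions."""
    return (p_exposed / (1 - p_exposed)) / (p_unexposed / (1 - p_unexposed))

# Reported bullying prevalence: 33.2% with AD vs. 19% without AD
print(round(odds_ratio(0.332, 0.19), 2))  # 2.12, broadly in line with the adjusted OR of 1.99
```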
TAKEAWAY:
- A total of 3,207 adolescents were included in the analysis. The mean age of the participants was 14.5 years, and 11.9% currently had AD. The prevalence of experiencing bullying was significantly higher among adolescents with AD, compared with those without AD (33.2% vs. 19%; P < .001), as was the prevalence of cyberbullying (9.1% vs. 5.8%; P = .04).
- Following adjustment for demographics and atopic comorbidities, adolescents with AD were at increased odds of bullying, compared with their peers without AD (adjusted odds ratio, 1.99; 95% confidence interval, 1.45-2.73).
- Following adjustment for demographics, adolescents with AD were also at increased odds of cyberbullying, compared with their peers without AD (AOR, 1.65; 95% CI, 1.04-2.62), but no association was observed following adjustment for atopic comorbidities (AOR, 1.27; 95% CI, 0.82-1.96).
- Following ordinal logistic regression that was adjusted for demographics and atopic comorbidities, adolescents with AD were at greater odds of being bullied at a higher frequency, compared with their peers without AD (AOR, 1.97; 95% CI, 1.44-2.68).
IN PRACTICE:
“Larger, future studies using clinical AD diagnoses and adolescent self-report can advance understanding of bullying and AD,” the researchers wrote. “Clinicians, families, and schools should address and monitor bullying among adolescents.”
SOURCE:
Howa Yeung, MD, of the department of dermatology at Emory University School of Medicine, Atlanta, led the research. The study was published online in JAMA Dermatology.
LIMITATIONS:
Limitations include the study’s cross-sectional design. In addition, the investigators could not directly attribute bullying to skin-specific findings, and bullying was assessed by caregiver report rather than adolescent self-report.
DISCLOSURES:
The study was supported by grants from the National Institutes of Health and the National Institute of Arthritis and Musculoskeletal and Skin Diseases. One of the authors, Joy Wan, MD, received grants from Pfizer and personal fees from Janssen and Sun Pharmaceuticals outside of the submitted work.
A version of this article first appeared on Medscape.com.
Heart rate variability: Are we ignoring a harbinger of health?
A very long time ago, when I ran clinical labs, one of the most ordered tests was the “sed rate” (aka ESR, the erythrocyte sedimentation rate). Easy, quick, and low cost, with high sensitivity but very low specificity. If the sed rate was normal, the patient probably did not have an infectious or inflammatory disease. If it was elevated, they probably did, but no telling what. Later, the C-reactive protein (CRP) test came into common use. Same general inferences: If the CRP was low, the patient was unlikely to have an inflammatory process; if high, they were sick, but we didn’t know what with.
Could the heart rate variability (HRV) score come to be thought of similarly? Much as the sed rate and CRP are sensitive indicators of infectious or inflammatory diseases, might the HRV score be a sensitive indicator of nervous system (central and autonomic) and cardiovascular (especially heart rhythm) malfunctions?
A substantial and relatively old body of heart rhythm literature ties HRV alterations to posttraumatic stress disorder, physician occupational stress, sleep disorders, depression, autonomic nervous system derangements, various cardiac arrhythmias, fatigue, overexertion, medications, and age itself.
More than 100 million Americans are now believed to use smartwatches or personal fitness monitors. Some 30%-40% of these devices measure HRV. So what? Credible research about this huge mass of accumulating data from “wearables” is lacking.
What is HRV?
HRV is the variation in time between successive heartbeats, measured in milliseconds. HRV is influenced by the autonomic nervous system, perhaps reflecting sympathetic-parasympathetic balance. Some devices measure HRV 24/7. My Fitbit Inspire 2 reports only nighttime measures during 3 hours of sustained sleep. Most trackers report averages; some calculate root mean squares; others calculate standard deviations. All fitness trackers warn not to use the data for medical purposes.
Normal values (reference ranges) for HRV begin at an average of 100 msec in the first decade of life and decline by approximately 10 msec per decade lived. At age 30-40, the average is 70 msec; age 60-70, it’s 40 msec; and at age 90-100, it’s 10 msec.
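For concreteness, here is a minimal Python sketch of how these numbers are typically computed, assuming the "root mean squares" and "standard deviations" above refer to the standard RMSSD and SDNN metrics over beat-to-beat (RR) intervals. The RR values are made up for illustration, and the age function simply encodes the decade rule of thumb quoted above.

```python
import math

def sdnn(rr_ms):
    """Standard deviation of RR intervals (SDNN), in milliseconds."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (RMSSD), in milliseconds."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def expected_hrv_by_age(age_years):
    """Rule of thumb from the text: ~100 msec in the first decade,
    declining roughly 10 msec per decade lived (floor of 10 msec)."""
    return max(100 - 10 * (age_years // 10), 10)

# Ten hypothetical beats with RR intervals around 855 msec
rr = [842, 858, 871, 849, 836, 860, 875, 853, 841, 866]
print(round(sdnn(rr), 1), round(rmssd(rr), 1))  # 12.6 18.7
print(expected_hrv_by_age(88))                  # 20 (msec), matching a late-80s reference range
```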
As a long-time lab guy, I used to teach proper use of lab tests. Fitness trackers are “lab tests” of a sort. We taught never to do a lab test unless you know what you are going to do with the result, no matter what it is. We also taught “never do anything just because you can.” Curiosity, we know, is a frequent driver of lab test ordering.
That underlying philosophy gives me a hard time when it comes to wearables. I have been enamored of watching my step count, active zone minutes, resting heart rate, active heart rate, various sleep scores, and breathing rate (and, of course, a manually entered early morning daily body weight) for several years. I even check my “readiness score” (a calculation using resting heart rate, recent sleep, recent active zone minutes, and perhaps HRV) each morning and adjust my behaviors accordingly.
Why monitor HRV?
But what should we do with HRV scores? Ignore them? Try to understand them, perhaps as a screening tool? Or monitor HRV for consistency or change? “Monitoring” is a proper and common use of lab tests.
Some say we should improve the HRV score by managing stress, getting regular exercise, eating a healthy diet, getting enough sleep, and not smoking or consuming excess alcohol. Duh! I do all of that anyway.
The claims that HRV is a “simple but powerful tool that can be used to track overall health and well-being” might turn out to be true. Proper study and sharing of data will enable that determination.
To advance understanding, I offer an n-of-1, a real-world personal anecdote about HRV.
I did not request the HRV function on my Fitbit Inspire 2. It simply appeared, and I ignored it for some time.
A year or two ago, I started noticing my HRV score every morning. Initially, I did not like to see my “low” score, until I learned that the reference range was dramatically affected by age and I was in my late 80s at the time. The vast majority of my HRV readings were in the range of 17 msec to 27 msec.
Last week, I was administered the new Moderna COVID-19 Spikevax vaccine and the old folks’ influenza vaccine simultaneously. In my case, side effects from each vaccine have been modest in the past, but I never previously had both administered at the same time. My immune response was, shall we say, robust. Chills, muscle aches, headache, fatigue, deltoid swelling, fitful sleep, and increased resting heart rate.
My nightly average HRV had been running between 17 msec and 35 msec for many months. WHOA! After the shots, my overnight HRV score plummeted from 24 msec to 10 msec, my lowest ever. Instant worry. The next day, it rebounded to 28 msec, and it has been in the high teens or low 20s since then.
Off to PubMed. A recent study of HRV on the second and 10th days after administering the Pfizer mRNA vaccine to 75 healthy volunteers found that the HRV on day 2 was dramatically lower than prevaccination levels and by day 10, it had returned to prevaccination levels. Some comfort there.
Another review article has reported a rapid fall and rapid rebound of HRV after COVID-19 vaccination. A 2010 report demonstrated a significant but not dramatic short-term lowering of HRV after influenza A vaccination and correlated it with CRP changes.
Some believe that the decline in HRV after vaccination reflects an increased immune response and sympathetic nervous activity.
I don’t plan to receive my flu and COVID vaccines on the same day again.
So, I went back to review what happened to my HRV when I had COVID in 2023. My HRV was 14 msec and 12 msec on the first 2 days of symptoms, and then returned to the 20 msec range.
I received the RSV vaccine this year without adverse effects, and my HRV scores were 29 msec, 33 msec, and 32 msec on the first 3 days after vaccination. Finally, after receiving a pneumococcal vaccine in 2023, I had no adverse effects, and my HRV scores on the 5 days after vaccination were indeterminate: 19 msec, 14 msec, 18 msec, 13 msec, and 17 msec.
Of course, correlation is not causation. Cause and effect remain undetermined. But I find these observations interesting for a potentially useful screening test.
George D. Lundberg, MD, is the Editor in Chief of Cancer Commons.
A version of this article first appeared on Medscape.com.
Lag in antidepressant treatment response explained?
BARCELONA – The lag in antidepressant treatment response may be explained by a gradual buildup of synaptic density in the brain, new imaging data suggest.
In a double-blind study, more than 30 volunteers were randomly assigned to the SSRI escitalopram or placebo for 3-5 weeks. Using PET imaging, the investigators found that synaptic density increased significantly over time in the neocortex and hippocampus, but only in participants taking the active drug.
The results point to two conclusions, said study investigator Gitta Moos Knudsen, MD, PhD, clinical professor and chief physician at the department of clinical medicine, neurology, psychiatry and sensory sciences at Copenhagen (Denmark) University Hospital.
First, they indicate that SSRIs increase synaptic density in brain areas critically involved in depression, a finding that goes some way toward indicating that synaptic density may be involved in how antidepressants function, “which would give us a target for developing novel drugs against depression,” said Dr. Knudsen.
“Secondly, our data suggest synapses build up over a period of weeks, which would explain why the effects of these drugs take time to kick in,” she added.
The findings were presented at the 36th European College of Neuropsychopharmacology (ECNP) Congress and simultaneously published online in Molecular Psychiatry.
Marked increase in synaptic density
SSRIs are widely used for depression as well as anxiety and obsessive-compulsive disorder. It is thought that they act via neuroplasticity and synaptic remodeling to improve cognition and emotion processing. However, the investigators note that clinical evidence is lacking.
For the study, the researchers randomly assigned healthy individuals to either 20-mg escitalopram or placebo for 3-5 weeks.
They performed PET with the 11C-UCB-J tracer, which binds synaptic vesicle glycoprotein 2A (SV2A) and thereby allows imaging of synaptic density, as well as changes in density over time, in the hippocampus and neocortex.
Between May 2020 and October 2021, 17 individuals were assigned to escitalopram and 15 to placebo. There were no significant differences between the two groups in terms of age, sex, and PET-related variables. Serum escitalopram measurements confirmed that all participants in the active drug group were compliant.
When synaptic density was assessed at a single time point, an average of 29 days after the intervention, there were no significant differences between the escitalopram and placebo groups in either the neocortex (P = .41) or in the hippocampus (P = .26).
However, when they performed a secondary analysis of the time-dependent effect on SV2A levels, they found a marked difference between the two study groups.
Compared with the placebo group, participants taking escitalopram had a marked increase in synaptic density in both the neocortex (rp value, 0.58; P = .003) and the hippocampus (rp value, 0.41; P = .048).
In contrast, there were no significant time-dependent changes in synaptic density in the placebo group in either the neocortex (rp value, –0.01; P = .95) or the hippocampus (rp value, –0.06; P = .62).
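For readers unfamiliar with the notation, the rp values above appear to be Pearson correlation coefficients relating time on treatment to regional SV2A binding. Here is a minimal sketch of that kind of time-dependent analysis, using entirely made-up numbers; the study's actual data and any covariate handling are not reproduced.

```python
from scipy.stats import pearsonr

# Hypothetical data: days on escitalopram vs. regional SV2A binding (arbitrary units)
days_on_drug = [21, 24, 26, 28, 29, 31, 33, 35]
sv2a_binding = [2.1, 2.0, 2.3, 2.2, 2.4, 2.3, 2.5, 2.6]

r, p = pearsonr(days_on_drug, sv2a_binding)
print(f"rp = {r:.2f}, P = {p:.3f}")  # a positive rp means density rising with time on drug
```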
“That is consistent with our clinical observation that it takes time to evolve synaptic density, along with clinical improvement. Does that mean that the increase in synaptic density is a precondition for improvement in symptoms? We don’t know,” said Dr. Knudsen.
Exciting but not conclusive
Session co-chair Oliver Howes, MD, PhD, professor of molecular psychiatry, King’s College London, agreed that the results do not prove that the gradual increase in synaptic density explains the treatment response lag with SSRIs.
“We definitely don’t yet have all the data to know one way or the other,” he said in an interview.
Another potential hypothesis, he said, is that SSRIs cause shifts in underlying brain circuits that lead to cognitive changes before there is a discernible improvement in mood.
Indeed, Dr. Howes suggested that increases in synaptic density and cognitive changes related to SSRI use are not necessarily dependent on each other and could even be unrelated.
Also commenting on the research, David Nutt, MD, PhD, Edmond J. Safra professor of neuropsychopharmacology at Imperial College London, said that the “delay in therapeutic action of antidepressants has been a puzzle to psychiatrists ever since they were first discerned over 50 years ago. So, these new data in humans, that use cutting edge brain imaging to demonstrate an increase in brain connections developing over the period that the depression lifts, are very exciting.”
Dr. Nutt added that the results provide further evidence that “enhancing serotonin function in the brain can have enduring health benefits.”
Funding support was provided by the Danish Council for Independent Research, the Lundbeck Foundation, Rigshospitalet, and the Swedish Research Council. Open access funding provided by Royal Library, Copenhagen University Library.
Dr. Knudsen declares relationships with Sage, Biogen, H. Lundbeck, Onsero, Pangea, Gilgamesh, AbbVie, and PureTechHealth. Another author declares relationships with Cambridge Cognition and PopReach via Cambridge Enterprise.
A version of this article first appeared on Medscape.com.
BARCELONA – , new imaging data suggest.
In a double-blind study, more than 30 volunteers were randomly assigned to the SSRI escitalopram or placebo for 3-5 weeks. Using PET imaging, the investigators found that over time, synaptic density significantly increased significantly in the neocortex and hippocampus but only in patients taking the active drug.
The results point to two conclusions, said study investigator Gitta Moos Knudsen, MD, PhD, clinical professor and chief physician at the department of clinical medicine, neurology, psychiatry and sensory sciences at Copenhagen (Denmark) University Hospital.
First, they indicate that SSRIs increase synaptic density in brain areas critically involved in depression, a finding that would go some way to indicating that the synaptic density in the brain may be involved in how antidepressants function, “which would give us a target for developing novel drugs against depression,” said Dr. Knudsen.
“Secondly, our data suggest synapses build up over a period of weeks, which would explain why the effects of these drugs take time to kick in,” she added.
The findings were presented at the 36th European College of Neuropsychopharmacology (ECNP) Congress and simultaneously published online in Molecular Psychiatry.
Marked increase in synaptic density
SSRIs are widely used for depression as well as anxiety and obsessive-compulsive disorder. It is thought that they act via neuroplasticity and synaptic remodeling to improve cognition and emotion processing. However, the investigators note clinical evidence is lacking.
For the study, the researchers randomly assigned healthy individuals to either 20-mg escitalopram or placebo for 3-5 weeks.
They performed PET with the 11C-UCB-J tracer, which allows imaging of the synaptic vesicle glycoprotein 2A (SV2A) in the brain, synaptic density, as well as changes in density over time, in the hippocampus and neocortex.
Between May 2020 and October 2021, 17 individuals were assigned to escitalopram and 15 to placebo. There were no significant differences between two groups in terms of age, sex, and PET-related variables. Serum escitalopram measurements confirmed that all participants in the active drug group were compliant.
When synaptic density was assessed at a single time point, an average of 29 days after the intervention, there were no significant differences between the escitalopram and placebo groups in either the neocortex (P = .41) or in the hippocampus (P = .26).
However, when they performed a secondary analysis of the time-dependent effect on SV2A levels, they found a marked difference between the two study groups.
Compared with the placebo group, participants taking escitalopram had a marked increase in synaptic density in both the neocortex (rp value, 0.58; P = .003) and the hippocampus (rp value, 0.41; P = .048).
In contrast, there were no significant changes in synaptic density in either the neocortex (rp value, –0.01; P = .95) or the hippocampus (rp value, –0.06; P = .62) in the hippocampus.
“That is consistent with our clinical observation that it takes time to evolve synaptic density, along with clinical improvement. Does that mean that the increase in synaptic density is a precondition for improvement in symptoms? We don’t know,” said Dr. Knudsen.
Exciting but not conclusive
Session co-chair Oliver Howes, MD, PhD, professor of molecular psychiatry, King’s College London, agreed that the results do not prove the gradual increase in synaptic density the treatment response lag with SSRIs.
“We definitely don’t yet have all the data to know one way or the other,” he said in an interview.
Another potential hypothesis, he said, is that SSRIs are causing shifts in underlying brain circuits that lead to cognitive changes before there is a discernable improvement in mood.
Indeed, Dr. Howes suggested that increases in synaptic density and cognitive changes related to SSRI use are not necessarily dependent on each other and could even be unrelated.
Also commenting on the research, David Nutt, MD, PhD, Edmond J. Safra professor of neuropsychopharmacology at Imperial College London, said that the “delay in therapeutic action of antidepressants has been a puzzle to psychiatrists ever since they were first discerned over 50 years ago. So, these new data in humans, that use cutting edge brain imaging to demonstrate an increase in brain connections developing over the period that the depression lifts, are very exciting.”
Dr. Nutt added that the results provide further evidence that “enhancing serotonin function in the brain can have enduring health benefits.”
Funding support was provided by the Danish Council for Independent Research, the Lundbeck Foundation, Rigshospitalet, and the Swedish Research Council. Open access funding provided by Royal Library, Copenhagen University Library.
Dr. Knudsen declares relationships with Sage Biogen, H. Lundbeck, Onsero, Pangea, Gilgamesh, Abbvie, and PureTechHealth. Another author declares relationships with Cambridge Cognition and PopReach via Cambridge Enterprise.
A version of this article first appeared on Medscape.com.
BARCELONA – , new imaging data suggest.
In a double-blind study, more than 30 volunteers were randomly assigned to the SSRI escitalopram or placebo for 3-5 weeks. Using PET imaging, the investigators found that over time, synaptic density significantly increased significantly in the neocortex and hippocampus but only in patients taking the active drug.
The results point to two conclusions, said study investigator Gitta Moos Knudsen, MD, PhD, clinical professor and chief physician at the department of clinical medicine, neurology, psychiatry and sensory sciences at Copenhagen (Denmark) University Hospital.
First, they indicate that SSRIs increase synaptic density in brain areas critically involved in depression, a finding that would go some way to indicating that the synaptic density in the brain may be involved in how antidepressants function, “which would give us a target for developing novel drugs against depression,” said Dr. Knudsen.
“Secondly, our data suggest synapses build up over a period of weeks, which would explain why the effects of these drugs take time to kick in,” she added.
The findings were presented at the 36th European College of Neuropsychopharmacology (ECNP) Congress and simultaneously published online in Molecular Psychiatry.
Marked increase in synaptic density
SSRIs are widely used for depression as well as anxiety and obsessive-compulsive disorder. It is thought that they act via neuroplasticity and synaptic remodeling to improve cognition and emotion processing. However, the investigators note clinical evidence is lacking.
For the study, the researchers randomly assigned healthy individuals to either 20-mg escitalopram or placebo for 3-5 weeks.
They performed PET with the 11C-UCB-J tracer, which allows imaging of the synaptic vesicle glycoprotein 2A (SV2A) in the brain, synaptic density, as well as changes in density over time, in the hippocampus and neocortex.
Between May 2020 and October 2021, 17 individuals were assigned to escitalopram and 15 to placebo. There were no significant differences between two groups in terms of age, sex, and PET-related variables. Serum escitalopram measurements confirmed that all participants in the active drug group were compliant.
When synaptic density was assessed at a single time point, an average of 29 days after the intervention, there were no significant differences between the escitalopram and placebo groups in either the neocortex (P = .41) or in the hippocampus (P = .26).
However, when they performed a secondary analysis of the time-dependent effect on SV2A levels, they found a marked difference between the two study groups.
Compared with the placebo group, participants taking escitalopram had a marked increase in synaptic density in both the neocortex (rp value, 0.58; P = .003) and the hippocampus (rp value, 0.41; P = .048).
In contrast, there were no significant changes in synaptic density in either the neocortex (rp value, –0.01; P = .95) or the hippocampus (rp value, –0.06; P = .62) in the hippocampus.
“That is consistent with our clinical observation that it takes time to evolve synaptic density, along with clinical improvement. Does that mean that the increase in synaptic density is a precondition for improvement in symptoms? We don’t know,” said Dr. Knudsen.
Exciting but not conclusive
Session co-chair Oliver Howes, MD, PhD, professor of molecular psychiatry, King’s College London, agreed that the results do not prove the gradual increase in synaptic density the treatment response lag with SSRIs.
“We definitely don’t yet have all the data to know one way or the other,” he said in an interview.
Another potential hypothesis, he said, is that SSRIs are causing shifts in underlying brain circuits that lead to cognitive changes before there is a discernable improvement in mood.
Indeed, Dr. Howes suggested that increases in synaptic density and cognitive changes related to SSRI use are not necessarily dependent on each other and could even be unrelated.
Also commenting on the research, David Nutt, MD, PhD, Edmond J. Safra professor of neuropsychopharmacology at Imperial College London, said that the “delay in therapeutic action of antidepressants has been a puzzle to psychiatrists ever since they were first discerned over 50 years ago. So, these new data in humans, that use cutting edge brain imaging to demonstrate an increase in brain connections developing over the period that the depression lifts, are very exciting.”
Dr. Nutt added that the results provide further evidence that “enhancing serotonin function in the brain can have enduring health benefits.”
Funding support was provided by the Danish Council for Independent Research, the Lundbeck Foundation, Rigshospitalet, and the Swedish Research Council. Open access funding provided by Royal Library, Copenhagen University Library.
Dr. Knudsen declares relationships with Sage, Biogen, H. Lundbeck, Onsero, Pangea, Gilgamesh, AbbVie, and PureTech Health. Another author declares relationships with Cambridge Cognition and PopReach via Cambridge Enterprise.
A version of this article first appeared on Medscape.com.
AT ECNP 2023
Commentary: Diagnostic Delay and Optimal Treatments for PsA, November 2023
The treatment of PsA continues to advance steadily. Bimekizumab is a novel monoclonal antibody that inhibits both interleukin (IL)-17A and IL-17F by binding to similar sites on the two cytokines. Ritchlin and colleagues recently reported the 52-week results from the phase 3 BE OPTIMAL study, which included 852 biologic disease-modifying antirheumatic drug (bDMARD)–naive patients with active PsA who were randomly assigned to receive bimekizumab, adalimumab, or placebo. At week 16, 43.9% of patients receiving bimekizumab achieved ≥ 50% improvement in American College of Rheumatology response criteria (ACR50), and the response was maintained up to week 52 (54.5%). Among patients who switched from placebo to bimekizumab at week 16, a similar proportion (53.0%) achieved ACR50 at week 52. No new safety signals were observed. Thus, bimekizumab led to sustained improvements in clinical response up to week 52 and probably will soon be available to patients with PsA.
The optimal management of axial PsA continues to be investigated. One major question is whether IL-23 inhibitors, which are not efficacious in axial spondyloarthritis, have efficacy in axial PsA. A post hoc analysis of the DISCOVER-2 study included 246 biologic-naive patients with active PsA and sacroiliitis who were randomly assigned to guselkumab every 4 weeks (Q4W; n = 82), guselkumab every 8 weeks (Q8W; n = 68), or placebo with crossover to guselkumab Q4W at week 24 (n = 96). Mease and colleagues report that at week 24, guselkumab Q4W and Q8W led to significantly greater improvements vs placebo in the total Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) as well as the Ankylosing Spondylitis Disease Activity Score (ASDAS), with further improvements noted at week 100. Thus, in patients with active PsA and imaging-confirmed sacroiliitis, 100 mg guselkumab Q4W and Q8W yielded clinically meaningful and sustained improvements in axial symptoms through 2 years.
Finally, attention is increasingly being paid to patients with refractory, or difficult-to-treat (D2T), PsA. These patients are generally characterized as having active disease despite treatment with two or more targeted DMARDs (tDMARDs). Philippoteaux and colleagues reported results from a retrospective cohort study that included 150 patients with PsA who initiated treatment with a tDMARD and were followed for at least 2 years, of whom 49 had D2T PsA. They found that peripheral structural damage, axial involvement, and discontinuation of bDMARDs due to poor skin psoriasis control were more prevalent in patients with D2T PsA than in those with non-D2T PsA. Thus, patients with D2T PsA are more likely to have structural damage. Early diagnosis and treatment to reduce structural damage might reduce the prevalence of D2T PsA.
Ocular MALT lymphoma: Radiation reduces relapse
“Our study represents the largest institutional cohort analysis on the course of patients with stage I POAML,” said first author Linrui Gao, MD, of the department of radiation oncology at the National Clinical Research Center for Cancer, Chinese Academy of Medical Sciences and Peking Union Medical College, in Beijing.
Dr. Gao presented these findings at ESMO 2023, held in Madrid.
“We confirm the indolent nature of this stage I disease, with mortality that is similar to the general population and a low rate of lymphoma-attributed mortality,” she said, adding that “radiation therapy was associated with the lowest relapse or disease progression, compared with [other treatments].”
POAML, which can involve lesions in areas including the eyelid, conjunctiva, orbit, and lacrimal gland, makes up about 7% of mucosa-associated lymphoid tissue (MALT) lymphomas. However, the incidence is reported to be steadily increasing. With the majority of patients, 70%-85%, diagnosed as stage I, consensus on treatment approaches is lacking.
Guidelines typically recommend radiation therapy as the standard of care, and approximately 70% of POAML patients do receive the therapy, compared with only about 36% of those with early-stage MALT lymphoma, with the indolent nature of the disease likely weighing on decisions to forgo the treatment, Dr. Gao reported.
“Adoption of initial radiotherapy in early-stage POAML is relatively low worldwide, with possible reasons being [concerns] of a low survival benefit and long-term toxicities,” she said.
To evaluate the long-term outcomes based on baseline clinical features and treatments, Dr. Gao and colleagues conducted a retrospective study of 262 patients with stage I POAML (ipsilateral or bilateral disease), enrolled between January 2000 and December 2020 at two hospitals in China.
Of the patients, who had a median age of 55 and a male-female ratio of 1:3, 82 were initially treated with radiation therapy, 81 with observation, 70 with surgery, and 29 with systemic treatment.
Those receiving radiation therapy had higher rates of an Eastern Cooperative Oncology Group performance status of 1 or higher (P = .02), more frequent elevation of lactate dehydrogenase (P = .03), and higher rates of chronic disease (P < .001), while other baseline characteristics, including age, T stage, symptom duration, and other factors, were similar between the groups.
With a median follow-up of 66 months, the 5-year and 10-year overall survival rates were 96.8% and 90%, respectively, which is similar to the survival rate in the general population in China.
Likewise, the 5- and 10-year rates of lymphoma-specific mortality were both extremely low, at 0.4%, and the corresponding rates of competing nonlymphoma mortality at 5 and 10 years were 2.3% and 4.2%, also consistent with the general population.
The 5- and 10-year mortality rates remained similar to those of the general population when patients were stratified according to initial treatment type (P = .767 between treatments).
In terms of recurrence, the overall failure rates were relatively high, with 19.5% of patients experiencing relapse at 5 years and 24.05% at 10 years.
“The failure rates show that the risk of relapse in POAML does not decrease over time,” Dr. Gao said.
Notably, those treated with radiation therapy had a significantly decreased 5-year cumulative risk of failure (8.5%), compared with those who only received observation (29.6%), surgery (22.9%), or systemic treatment (17.2%; overall, P = .002).
The most common failure site was the ipsilateral orbit, and again, rates of those relapses were significantly lower with radiation therapy (2.4%), compared with observation (23.5%), surgery (21.4%), and systemic treatment (17.3%).
However, rates of relapses in other sites, including the contralateral orbit, extraocular site, and multiple sites, were similar among all treatment groups. One patient receiving systemic treatment had large cell transformation, associated with poorer outcomes.
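As a rough illustration of how a 5-year cumulative risk of failure is estimated from follow-up data, the sketch below applies a Kaplan-Meier estimator from the lifelines library to invented data; it is not the study's analysis, and every number is a placeholder.

```python
# Hypothetical sketch: cumulative risk of treatment failure via
# Kaplan-Meier estimation (lifelines). Data below are invented.
from lifelines import KaplanMeierFitter

months_followed = [12, 30, 45, 60, 66, 72, 80, 96, 110, 120]  # time to failure or censoring
failed = [1, 0, 1, 0, 1, 0, 0, 1, 0, 0]                       # 1 = relapse/progression observed

kmf = KaplanMeierFitter()
kmf.fit(months_followed, event_observed=failed)

# "Survival" here means failure-free; cumulative failure = 1 - survival.
p_failure_5y = 1 - kmf.predict(60)
print(f"Estimated 5-year cumulative risk of failure: {p_failure_5y:.1%}")
```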
After recurrence, 53 patients received salvage therapy (27 of them radiation therapy), and 10 patients were observed.
Dr. Gao noted that treatment failure was not associated with higher mortality rates. “However, given the limited number of cases, we think more cases and longer follow-up are needed,” she told MDedge.
The most common acute toxicities were mild ocular dermatitis or mucositis, reported in 23 patients receiving radiation therapy. Nine patients experienced postoperative complications of mild eye irritation and periorbital edema, and five patients receiving systemic treatment experienced grade 2-3 leukopenia. There were no severe adverse events.
In terms of late ocular adverse effects, overall, 3 patients in the radiation therapy group developed cataracts and 143 patients developed dry-eye disease.
“Radiation therapy was associated with the lowest rate of relapse progression, compared with observation, surgery, and systemic treatment, with similar overall and recurrent survival,” Dr. Gao said.
“Based on our study results, radiotherapy should be considered as the optimal treatment for all patients with stage I disease because of its lowest failure risk and minor toxicity,” Dr. Gao told MDedge.
“However, the radiotherapy dose and techniques should be further optimized in good clinical trials,” she noted. “There are some clinical studies undergoing to explore the modern radiotherapy strategy, including by our group.”
Commenting on the study, discussant Olivier Casasnovas, MD, PhD, of the department of hematology, University Hospital Francois Mitterrand, in Dijon, France, noted that “interestingly, radiotherapy reduced the risk of local relapse but not systemic relapse.”
Benefits linked to radiation therapy dose?
Furthermore, the study adds to evidence suggesting the role of dose in radiation therapy’s benefits in POAML, Dr. Casasnovas noted. He pointed to previous research showing that, with a median radiotherapy dose of 26 Gy, stage I POAML patients had a local relapse rate of 9.5%, whereas in the current study, which reported a median radiotherapy dose of 30.6 Gy, the local relapse rate was just 2%.
“Regarding the risk of local relapse, it’s important to see that, as previous published, the risk of a local relapse depends probably on the dose of radiotherapy,” he said.
The results indicate that “radiation therapy could impact patients’ outcome. In comparison to previous research, this suggests benefits from a higher dose.”
He added that “it would be interesting to test in this series if patients receiving more or less 30 Gy had different outcomes or the risks of failure at different sites.”
Overall, the study confirms that POAML “can be safely treated with radiation therapy, which allows for a better chance of local control, compared with other options, but does not preclude relapse over time,” Dr. Casasnovas concluded, adding, “I think that a standardization of radiotherapy dose is warranted to provide guidelines to clinicians treating this infrequent population of patients.”
The authors had no disclosures to report.
FROM ESMO 2023
Upper respiratory infections: Viral testing in primary care
It’s upper respiratory infection (URI) season. The following is a clinical scenario drawn from my own practice. I’ll tell you what I plan to do, but I’m most interested in crowdsourcing a response from all of you to collectively determine best practice. So please answer the polling questions and contribute your thoughts in the comments, whether you agree or disagree with me.
The patient
The patient is a 69-year-old woman with a 3-day history of cough, nasal congestion, malaise, tactile fever, and poor appetite. She has no sick contacts. She denies dyspnea, presyncope, and chest pain. She has tried guaifenesin and ibuprofen for her symptoms, which helped a little.
She is up to date on immunizations, including four doses of COVID-19 vaccine and the influenza vaccine, which she received 2 months ago.
The patient has a history of heart failure with reduced ejection fraction, coronary artery disease, hypertension, chronic kidney disease stage 3aA2, obesity, and osteoarthritis. Current medications include atorvastatin, losartan, metoprolol, and aspirin.
Her weight is stable at 212 lb, and her vital signs today are:
- Temperature: 37.5° C
- Pulse: 60 beats/min
- Blood pressure: 150/88 mm Hg
- Respiration rate: 14 breaths/min
- SpO2: 93% on room air
What information is most critical before deciding on management?
Your peers chose:
- The patient’s history of viral URIs
14%
- Whether her cough is productive and the color of the sputum
38%
- How well this season’s flu vaccine matches circulating influenza viruses
8%
- Local epidemiology of major viral pathogens (e.g., SARS-CoV-2, influenza, RSV)
40%
Dr. Vega’s take
To provide the best care for our patients when they are threatened with multiple viral upper respiratory pathogens, it is imperative that clinicians have some idea regarding the epidemiology of viral infections, with as much local data as possible. This knowledge will help direct appropriate testing and treatment.
Modern viral molecular testing platforms are highly accurate, but they are not infallible. Small flaws in specificity and sensitivity of testing are magnified when community viral circulation is low. In a U.K. study conducted during a period of low COVID-19 prevalence, the positive predictive value of reverse-transcriptase polymerase chain reaction (RT-PCR) testing was just 16%. Although the negative predictive value was much higher, the false-positive rate of testing was still 0.5%. The authors of the study describe important potential consequences of false-positive results, such as being temporarily removed from an organ transplant list and unnecessary contact tracing.
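The collapse of predictive value at low prevalence follows directly from Bayes' theorem, as the short sketch below shows. Only the 0.5% false-positive rate comes from the paragraph above; the sensitivity and prevalence values are assumptions chosen for illustration, not figures from the U.K. study.

```python
# How positive predictive value (PPV) falls as prevalence falls.
# The 0.5% false-positive rate is from the article; the 90%
# sensitivity and the prevalence values are illustrative assumptions.
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

specificity = 1 - 0.005  # 0.5% false-positive rate
for prevalence in (0.001, 0.01, 0.10):
    print(f"prevalence {prevalence:.1%}: PPV = {ppv(0.90, specificity, prevalence):.0%}")
# At roughly 0.1% prevalence, the PPV lands in the mid-teens,
# consistent with the 16% figure cited above.
```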
Testing and treatment
Your county public health department maintains a website describing local activity of SARS-CoV-2 and influenza. Both viruses are in heavy circulation now.
What is the next best step in this patient’s management?
Your peers chose:
- Treat empirically with ritonavir-boosted nirmatrelvir
7%
- Treat empirically with oseltamivir or baloxavir
14%
- Perform lab-based multiplex RT-PCR testing and wait to treat on the basis of results
34%
- Perform rapid nucleic acid amplification testing (NAAT) and treat on the basis of results
45%
Every practice has different resources and should use the best means available to treat patients. Ideally, this patient would undergo rapid NAAT with results available within 30 minutes. Test results will help guide not only treatment decisions but also infection-control measures.
The Infectious Diseases Society of America (IDSA) has provided updated guidance on testing for URIs since the onset of the COVID-19 pandemic. Both laboratory-based and point-of-care rapid NAATs are recommended for testing. Rapid NAATs have been demonstrated to have a sensitivity of 96% and a specificity of 100% for the detection of SARS-CoV-2. They also offer a highly efficient means to make treatment and isolation decisions.
There are multiple platforms for molecular testing available. Laboratory-based platforms can test for dozens of potential pathogens, including bacteria. Rapid NAATs often have the ability to test for SARS-CoV-2, influenza, and respiratory syncytial virus (RSV). This functionality is important, because these infections generally are difficult to discriminate on the basis of clinical information alone.
The IDSA clearly recognizes the challenges of managing URIs. For example, it states that testing of the anterior nares (AN) or oropharynx (OP) is acceptable, even though testing from the nasopharynx offers greater sensitivity. Testing at the AN/OP, however, allows patients to self-collect samples, which the IDSA also endorses as an option. In an analysis of six cohort studies, the pooled sensitivity of patient-collected AN/OP samples was 88%, whereas the respective value for samples collected by health care providers was 95%.
The U.S. Centers for Disease Control and Prevention also provides recommendations for the management of patients with acute upper respiratory illness. Patients who are sick enough to be hospitalized should be tested at least for SARS-CoV-2 and influenza using molecular assays. Outpatients should be tested for SARS-CoV-2 with either molecular or antigen testing, and influenza testing should be offered if the findings will change decisions regarding treatment or isolation. Practically speaking, the recommendations for influenza testing mean that most individuals should be tested, including patients at high risk for complications of influenza and those who might have exposure to individuals at high risk.
Treatment for COVID-19 should be provided only when a test is positive and within 5 days of symptom onset. However, clinicians may treat patients with anti-influenza medications presumptively if test results are not immediately available and the patient has worsening symptoms or is in a group at high risk for complications.
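To see how these rules fit together, here is one hypothetical way to encode the testing and treatment logic described above as a small decision function. It is a paraphrase for illustration, not an official algorithm, and it cannot substitute for clinical judgment.

```python
# Hypothetical encoding of the testing/treatment logic summarized above.
# The rules are paraphrased from the article; this is not an official tool.
def uri_plan(hospitalized: bool, covid_positive: bool, days_since_onset: int,
             high_risk: bool, flu_result_available: bool, worsening: bool) -> list[str]:
    plan = []
    if hospitalized:
        plan.append("molecular testing for at least SARS-CoV-2 and influenza")
    else:
        plan.append("SARS-CoV-2 test (molecular or antigen); offer influenza testing "
                    "if results would change treatment or isolation decisions")
    # COVID-19 antivirals: positive test AND within 5 days of symptom onset
    if covid_positive and days_since_onset <= 5:
        plan.append("consider COVID-19 antiviral treatment")
    # Influenza: presumptive treatment allowed when results are delayed
    if not flu_result_available and (worsening or high_risk):
        plan.append("consider presumptive anti-influenza treatment")
    return plan

print(uri_plan(hospitalized=False, covid_positive=False, days_since_onset=3,
               high_risk=True, flu_result_available=False, worsening=False))
```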
What are some of the challenges that you have faced during the COVID-19 pandemic regarding the management of patients with acute URIs? What have you found in terms of solutions, and where do gaps in quality of care persist? Please add your comments. I will review and circle back with a response. Thank you!
A version of this article first appeared on Medscape.com.
Telehealth linked to better opioid treatment retention
TOPLINE:
Telehealth initiation of buprenorphine treatment for opioid use disorder (OUD) was associated with better treatment retention and no increase in the risk of nonfatal overdose.
METHODOLOGY:
- Researchers analyzed Medicaid claims data from November 2019 through the end of 2020 in Kentucky and Ohio to investigate the impact of a policy change implemented during the COVID-19 pandemic that allowed the use of telehealth to prescribe buprenorphine for OUD.
- The two main outcomes of interest were retention in treatment after initiation (telehealth vs. traditional) and opioid-related nonfatal overdose after initiation.
TAKEAWAY:
- For both states combined, nearly 92,000 adults had a buprenorphine prescription in at least one quarter in 2020, with nearly 43,000 of those individuals starting treatment in 2020.
- Sharp increases in telehealth delivery of buprenorphine were noted at the beginning of 2020, at the pandemic's outset, and telehealth initiation was associated with greater retention in treatment (Kentucky: adjusted odds ratio [aOR], 1.13; 95% confidence interval [CI], 1.01-1.27; Ohio: aOR, 1.19; 95% CI, 1.06-1.32); a minimal sketch of how such adjusted odds ratios are derived appears after this list.
- Furthermore, 90-day retention rates were higher among those who started treatment via telehealth versus those who started treatment in nontelehealth settings in Kentucky (48% vs. 44%, respectively) and in Ohio (32% vs. 28%, respectively).
- There was no increased risk of nonfatal overdose with telehealth treatment, providing added evidence to suggest that patients were not harmed by having increased access to buprenorphine treatment via telehealth.
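For readers unfamiliar with where figures such as an aOR of 1.13 (95% CI, 1.01-1.27) come from, the sketch below fits a logistic regression with statsmodels on simulated claims-like data. The variables, covariate, and effect size are all invented; the study's actual model and adjustment set are not reproduced here.

```python
# Hypothetical sketch: deriving an adjusted odds ratio (aOR) with a
# 95% CI for 90-day retention by telehealth initiation. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "telehealth": rng.integers(0, 2, n),  # 1 = buprenorphine initiated via telehealth
    "age": rng.integers(18, 65, n),       # stand-in covariate for "adjustment"
})
# Invented outcome: modestly higher odds of retention with telehealth
logit_p = -0.3 + 0.15 * df["telehealth"] + 0.002 * df["age"]
df["retained_90d"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("retained_90d ~ telehealth + age", data=df).fit(disp=False)
aor = np.exp(model.params["telehealth"])
ci_low, ci_high = np.exp(model.conf_int().loc["telehealth"])
print(f"aOR = {aor:.2f} (95% CI, {ci_low:.2f}-{ci_high:.2f})")
```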
IN PRACTICE:
“These results offer important insights for states with a high burden of OUD looking to policies and methods to reduce barriers to treatment,” the authors write.
SOURCE:
The study, with first author Lindsey Hammerslag, PhD, with University of Kentucky College of Medicine, Lexington, was published online in JAMA Network Open, with an invited commentary by Lindsey Allen, PhD, Northwestern University, Chicago, on navigating the path to effective, equitable, and evidence-based telehealth for OUD treatment.
LIMITATIONS:
The analysis was limited to Medicaid patients in two states over 1 year and there may have been unmeasured confounders, such as perceived patient stability, that influenced the findings. Because Medicaid data were not linked to emergency services or death records, this study considered only medically treated overdose.
DISCLOSURES:
The study was supported by the National Institute on Drug Abuse and carried out in partnership with the Substance Abuse and Mental Health Services Administration. The authors report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JAMA NETWORK OPEN
Orthostatic hypotension no deterrent to hypertension treatment
TOPLINE:
Intensive antihypertensive treatment confers the same cardiovascular disease (CVD) and all-cause mortality benefit whether or not patients have orthostatic or standing hypotension, new research shows.
METHODOLOGY:
- In response to ongoing concern about the benefits of intensive versus standard blood pressure (BP) treatment for adults with orthostatic hypotension (OH), researchers conducted a meta-analysis of individual patient data from nine randomized clinical trials to see whether the benefit of antihypertensive treatment was diminished for patients who had OH at baseline. Benefit was defined as a reduction in nonfatal CVD events and all-cause mortality.
- The included trials assessed BP pharmacologic treatment (more intensive BP goal or active agent) and had data on OH.
TAKEAWAY:
- The nine trials included 29,235 participants (mean age, 69 years; 48% women) who were followed for a median of 4 years; 9% had OH and 5% had standing hypotension at baseline.
- Having OH at baseline was significantly associated with the composite of CVD or all-cause mortality (hazard ratio, 1.14; 95% confidence interval, 1.04-1.26) and with all-cause mortality (HR, 1.24; 95% CI, 1.09-1.41). The same was true for baseline standing hypotension (composite outcome: HR, 1.39; 95% CI, 1.24-1.57; all-cause mortality: HR, 1.38; 95% CI, 1.14-1.66).
- More intensive BP treatment or active therapy significantly and similarly lowered risk of CVD or all-cause mortality among adults who did not have OH at baseline (HR, 0.81; 95% CI, 0.76-0.86) as well as those with OH at baseline (HR, 0.83; 95% CI, 0.70-1.00).
- More intensive BP treatment or active therapy also significantly lowered risk of CVD or all-cause mortality among those without baseline standing hypotension (HR, 0.80; 95% CI, 0.75-0.85) and nonsignificantly lowered the risk among those with baseline standing hypotension (HR, 0.94; 95% CI, 0.75-1.18).
IN PRACTICE:
“These findings suggest that orthostatic hypotension alone (that is, without symptoms) and standing hypotension measured prior to intensification of BP treatment should not deter adoption of more intensive BP treatment in adults with hypertension,” the authors conclude.
The findings should “reassure clinicians that patients with OH (and perhaps standing hypotension) will derive the full expected benefits from antihypertensive therapy,” add the authors of an accompanying editorial. “This also applies to patients treated to lower BP goals, albeit with less certainty.”
SOURCE:
The study, with first author Stephen Juraschek, MD, PhD, of Beth Israel Deaconess Medical Center/Harvard Medical School, Boston, and the accompanying editorial were published online in JAMA.
LIMITATIONS:
In the hypertension trials included in the analysis, the study populations differed, as did BP measurement procedures, interventions, duration, and CVD outcome ascertainment processes and definitions. Some trials excluded adults with low standing systolic BP, limiting the number of participants with standing hypotension. OH was determined on the basis of a seated-to-standing protocol; supine-to-standing protocols are more sensitive and may not be interchangeable. Medications used in the trials may not reflect current medical practice, or the trials may not have included agents thought to be more likely to affect OH and falls.
DISCLOSURES:
The study had no specific funding. Dr. Juraschek has disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Pandemic-era telehealth led to fewer therapy disruptions
TOPLINE:
The widespread shift to virtual psychotherapy at the onset of the COVID-19 pandemic was associated with fewer treatment disruptions and shorter gaps between visits, new research shows.
METHODOLOGY:
- Retrospective study using electronic health records and insurance claims data from three large U.S. health systems.
- Sample included 110,089 patients with mental health conditions who attended at least two psychotherapy visits during the 9 months before and 9 months after the onset of COVID-19, defined in this study as March 14, 2020.
- Outcome was disruption in psychotherapy, defined as a gap of more than 45 days between visits.
TAKEAWAY:
- Before the pandemic, 96.9% of psychotherapy visits were in person and 35.4% were followed by a gap of more than 45 days.
- After the onset of the pandemic, more than half of visits (51.8%) were virtual, and only 17.9% were followed by a gap of more than 45 days.
- Before the pandemic, the median time between visits was 27 days; after the pandemic's onset, it dropped to 14 days, suggesting that individuals were more likely to return for additional psychotherapy after the widespread shift to virtual care.
- Over the entire study period, individuals with depressive, anxiety, or bipolar disorders were more likely to maintain consistent psychotherapy visits, whereas those with schizophrenia, ADHD, autism, conduct or disruptive disorders, dementia, or personality disorders were more likely to have a disruption in their visits.
IN PRACTICE:
“These findings support continued use of virtual psychotherapy as an option for care when appropriate infrastructure is in place. In addition, these findings support the continuation of policies that provide access to and coverage for virtual psychotherapy,” the authors write.
SOURCE:
The study, led by Brian K. Ahmedani, PhD, with the Center for Health Policy and Health Services Research, Henry Ford Health, Detroit, was published online in Psychiatric Services.
LIMITATIONS:
The study was conducted in three large health systems with virtual care infrastructure already in place. Researchers did not examine use of virtual care for medication management or for types of care other than psychotherapy, which may present different challenges.
DISCLOSURES:
The study was supported by the National Institute of Mental Health. The authors have no relevant disclosures.
A version of this article first appeared on Medscape.com.
FROM PSYCHIATRIC SERVICES