T-DXd Moves Toward First Line for HER2-Low Metastatic BC
HER2-low cancers express levels of human epidermal growth factor receptor 2 that are below standard thresholds for HER2-positive immunohistochemistry. In 2022, results from the DESTINY-Breast04 trial showed T-DXd (Enhertu, AstraZeneca) to be an effective second-line chemotherapy in patients with HER2-low metastatic breast cancer.
The highly awaited new findings, from the manufacturer-sponsored, open-label phase 3 DESTINY-Breast06 trial, were presented at the annual meeting of the American Society of Clinical Oncology (ASCO) in Chicago, Illinois.
The findings not only definitively establish a role for T-DXd earlier in the treatment sequence for HER2-low cancers, they also suggest benefit in a group of patients designated for the purposes of this trial to be HER2-ultralow. These patients have cancers with only faintly detectable HER2 expression on currently used assays (J Clin Oncol 42, 2024 [suppl 17; abstr LBA 1000]).
In a separate set of findings also presented at ASCO, from the randomized, open-label phase 1b study DESTINY-Breast07, T-DXd showed efficacy in patients with previously untreated HER2-positive metastatic breast cancer, both alone and in combination with the monoclonal antibody pertuzumab (Perjeta, Genentech).
DESTINY-Breast06 Methods and Results
The DESTINY-Breast06 findings were presented by lead investigator Giuseppe Curigliano, MD, PhD, of the University of Milan and European Institute of Oncology. Dr. Curigliano and his colleagues randomized 866 patients with metastatic breast cancer: 436 to intravenous T-DXd and 430 to the investigator’s choice of capecitabine, nab-paclitaxel, or paclitaxel chemotherapy. The investigators chose capecitabine 60% of the time.
Most patients had cancers classed as HER2-low (immunohistochemistry 1+ or 2+), while 153 had cancers classed by investigators as HER2-ultralow (IHC 0 with membrane staining, that is, detectable expression below the 1+ threshold). Enrolled patients had disease that had progressed after endocrine therapy, with or without targeted therapy. Patients’ median age was between 57 and 58 years, and all were chemotherapy-naive in the metastatic breast cancer setting.
The study’s primary outcome was progression-free survival in the HER2-low group. T-DXd improved median progression-free survival: 13.2 months vs. 8.1 months (hazard ratio, 0.62; 95% confidence interval, 0.51-0.74; P < .0001). In the intention-to-treat population, which included the HER2-ultralow patients, the benefit was similar (HR, 0.63; 95% CI, 0.53-0.75; P < .0001). This suggests that T-DXd is also effective in these patients, and it will be extremely important going forward to identify the lowest level of HER2 expression in metastatic breast cancers that can still benefit from T-DXd, Dr. Curigliano said.
Overall survival could not be assessed in the study cohort because complete data were not yet available, Dr. Curigliano said. However, trends pointed to an advantage for T-DXd, and tumor response rates were markedly higher with T-DXd: 57% compared with 31% for standard chemotherapy in the full cohort.
Serious treatment-emergent adverse events were more common in the T-DXd–treated patients: 11% of that arm developed drug-related interstitial lung disease, and three patients died of it. Five patients in the T-DXd arm died of adverse events deemed treatment-related, whereas none in the standard chemotherapy arm died of treatment-related adverse events. Altogether, 11 patients died in the T-DXd arm and 6 in the chemotherapy arm.
Clinical Implications of DESTINY-Breast06
The DESTINY-Breast06 data show that “we have to again change how we think about HER2 expression. Even very low levels of HER2 expression matter, and they can be leveraged to improve the treatment for our patients,” said Ian Krop, MD, PhD, of the Yale Cancer Center in New Haven, Connecticut, during the session where the results were presented.
But T-DXd may not be an appropriate first choice for all patients, especially given its safety concerns, he continued. With overall survival and quality-of-life data still lacking, clinicians will have to determine on a case-by-case basis who should get T-DXd in the first line.
“For patients who have symptomatic metastatic disease, who need a response to address those symptoms, those in whom you think chemotherapy may not work as well because they had, for example, a short recurrence interval after their adjuvant chemotherapy — using T-DXd in that first-line setting makes perfect sense to take advantage of the substantially higher response rate compared to chemo,” Dr. Krop said. “But for patients who have asymptomatic low burdens of disease, it seems very reasonable to consider using a well-tolerated chemotherapy like capecitabine in the first line, and then using T-DXd in the second line.”
In an interview, Erica Mayer, MD, of the Dana-Farber Cancer Institute in Boston, Massachusetts, said patient choice will also matter in determining whether T-DXd is a first-line option. The known toxicity of T-DXd was underscored by the latest findings, she noted, while capecitabine, one of the chemotherapy choices in the control arm of the study, “really reflects what the majority of breast cancer doctors tend to offer, both because of the efficacy of the drug, but also because it’s oral, it’s well tolerated, and you don’t lose your hair.”
DESTINY-Breast07 Results
The DESTINY-Breast07 findings, from a phase 1b open-label trial measuring safety and tolerability, were presented by Fabrice Andre, MD, PhD, of Université Paris Saclay in Paris, France. Dr. Andre and his colleagues presented the first data comparing T-DXd monotherapy and T-DXd with pertuzumab — a monoclonal antibody targeting HER2 — as a first-line treatment in patients with HER2-overexpressing (immunohistochemistry 3+) metastatic breast cancer (J Clin Oncol 42, 2024 [suppl 16; abstr 1009]).
Current first-line standard of care for these patients is pertuzumab, trastuzumab, and docetaxel, based on results from the 2015 CLEOPATRA trial. T-DXd is currently approved as a second-line treatment.
Dr. Andre and his colleagues randomized 75 patients to monotherapy with T-DXd and 50 to combined therapy, with a median follow-up of 2 years.
At 1 year, the combination of T-DXd and pertuzumab was associated with a progression-free survival rate of 89% (80% CI, 81.9-93.9), compared with 80% in patients treated with T-DXd alone (80% CI, 73.7-86.1). The objective tumor response rate at 12 weeks was 84% for the combined therapy, with 20% of patients having a complete response, compared with 76% and 8%, respectively, for monotherapy.
As in the DESTINY-Breast06 trial, rates of adverse events were high, with interstitial lung disease seen in 9% of patients in the monotherapy group and in 14% of the combined-therapy patients, although no treatment-related deaths occurred.
A randomized phase 3 trial, DESTINY-Breast09, will now compare both the monotherapy and the combined therapy with standard of care.
T-DXd has seen a rapidly expanding role in treating breast and other solid tumors. The DESTINY-Breast06 findings will move up its place in the treatment algorithm for metastatic breast cancer, “allowing us to now offer T-DXd as the first chemotherapy choice for patients who are making that transition to chemotherapy over many of the traditional provider choices that we previously have offered,” Dr. Mayer said.
The results “support the use of not only this specific agent, but also the concept of antibody drug conjugates as a very effective way to treat malignancy,” she added.
Dr. Curigliano reported receiving speaker’s fees, research funding, and other support from AstraZeneca and Daiichi Sankyo, among other companies, as did most of his co-authors, of whom three were AstraZeneca employees. Dr. Andre disclosed receiving research funding, travel compensation, and/or advisory fees from AstraZeneca and other entities, as did several of his co-authors. Two of his co-authors were employed by AstraZeneca and Roche, manufacturers of the study drugs. Dr. Krop and Dr. Mayer disclosed relationships with AstraZeneca and others.
FROM ASCO
PTSD Rates Soar Among College Students
TOPLINE:
Posttraumatic stress disorder (PTSD) rates among college students more than doubled between 2017 and 2022, new data showed. Rates of acute stress disorder (ASD) also increased during that time.
METHODOLOGY:
- Researchers conducted five waves of a cross-sectional study from 2017 to 2022, involving 392,377 participants across 332 colleges and universities.
- The study utilized the Healthy Minds Study data, ensuring representativeness by applying sample weights based on institutional demographics.
- Outcome variables were diagnoses of PTSD and ASD, confirmed by healthcare practitioners, with statistical analysis assessing change in odds of estimated prevalence during 2017-2022.
TAKEAWAY:
- The prevalence of PTSD among US college students increased from 3.4% in 2017-2018 to 7.5% in 2021-2022.
- ASD diagnoses also rose from 0.2% in 2017-2018 to 0.7% in 2021-2022, with both increases remaining statistically significant after adjusting for demographic differences.
- Investigators noted that these findings underscore the need for targeted, trauma-informed intervention strategies in college settings.
IN PRACTICE:
“These trends highlight the escalating mental health challenges among college students, which is consistent with recent research reporting a surge in psychiatric diagnoses,” the authors wrote. “Factors contributing to this rise may include pandemic-related stressors (eg, loss of loved ones) and the effect of traumatic events (eg, campus shootings and racial trauma),” they added.
SOURCE:
The study was led by Yusen Zhai, PhD, University of Alabama at Birmingham. It was published online on May 30, 2024, in JAMA Network Open.
LIMITATIONS:
The study’s reliance on self-reported data and single questions for diagnosed PTSD and ASD may have limited the accuracy of the findings. The retrospective design and the absence of longitudinal follow-up may have restricted the ability to infer causality from the observed trends.
DISCLOSURES:
No disclosures were reported. No funding information was available.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article appeared on Medscape.com.
Early Memory Problems Linked to Increased Tau
Reports from older adults and their partners of early memory issues are associated with higher levels of tau neurofibrillary tangles in the brain, new research suggests.
The findings show that in addition to beta-amyloid, tau is implicated in cognitive decline even in the absence of overt clinical symptoms.
“Understanding the earliest signs of Alzheimer’s disease is even more important now that new disease-modifying drugs are becoming available,” study author Rebecca E. Amariglio, PhD, clinical neuropsychologist at Brigham and Women’s Hospital and the Massachusetts General Hospital and assistant professor in neurology at Harvard Medical School, Boston, said in a news release. “Our study found early suspicions of memory problems by both participants and the people who knew them well were linked to higher levels of tau tangles in the brain.”
The study was published online in Neurology.
Subjective Cognitive Decline
Beta-amyloid plaque accumulations and tau neurofibrillary tangles both underlie the clinical continuum of Alzheimer’s disease (AD). Previous studies have investigated beta-amyloid burden and self- and partner-reported cognitive decline, but fewer have examined regional tau.
Subjective cognitive decline may be an early sign of AD, but self-awareness declines as individuals become increasingly symptomatic. So, a report from a partner about the participant’s level of cognitive functioning is often required in studies of mild cognitive impairment and dementia. The relevance of this model during the preclinical stage is less clear.
For the multicohort, cross-sectional study, investigators studied 675 cognitively unimpaired older adults (mean age, 72 years; 59% female), including persons with nonelevated beta-amyloid levels and those with elevated beta-amyloid levels, as determined by PET.
Participants brought a spouse, adult child, or other study partner with them to answer questions about the participant’s cognitive abilities and their ability to complete daily tasks. About 65% of participants lived with their partners and both completed the Cognitive Function Index (CFI) to assess cognitive decline, with higher scores indicating greater cognitive decline.
Covariates included age, sex, education, and cohort as well as objective cognitive performance.
The Value of Partner Reporting
Investigators found that higher tau levels were associated with greater self- and partner-reported cognitive decline (P < .001 for both).
Significant associations between self- and partner-reported CFI measures were driven by elevated beta-amyloid levels, with continuous beta-amyloid levels showing an independent effect on CFI in addition to tau.
“Our findings suggest that asking older people who have elevated Alzheimer’s disease biomarkers about subjective cognitive decline may be valuable for early detection,” Dr. Amariglio said.
Limitations include the fact that most participants were White and highly educated. Future studies should include participants from more diverse racial and ethnic groups and people with diverse levels of education, researchers noted.
“Although this study was cross-sectional, findings suggest that among older CU individuals who [are] at risk for AD dementia, capturing self-report and study partner report of cognitive function may be valuable for understanding the relationship between early pathophysiologic progression and the emergence of functional impairment,” the authors concluded.
The study was funded in part by the National Institute on Aging, Eli Lilly, and the Alzheimer’s Association, among others. Dr. Amariglio receives research funding from the National Institute on Aging. Complete study funding and other authors’ disclosures are listed in the original paper.
A version of this article first appeared on Medscape.com.
New Era? ‘Double Selective’ Antibiotic Spares the Microbiome
A new antibiotic uses a never-before-seen mechanism to deliver a direct hit on tough-to-treat infections while leaving beneficial microbes alone. The strategy could lead to a new class of antibiotics that attack dangerous bacteria in a powerful new way, overcoming current drug resistance while sparing the gut microbiome.
“The biggest takeaway is the double-selective component,” said co-lead author Kristen A. Muñoz, PhD, who performed the research as a doctoral student at the University of Illinois at Urbana-Champaign (UIUC). “We were able to develop a drug that not only targets problematic pathogens, but because it is selective for these pathogens only, we can spare the good bacteria and preserve the integrity of the microbiome.”
The drug goes after Gram-negative bacteria — pathogens responsible for debilitating and even fatal infections like gastroenteritis, urinary tract infections, pneumonia, sepsis, and cholera. The arsenal of antibiotics against them is old, with no new classes specifically targeting these bacteria coming on the market since 1968.
Many of these bugs have become resistant to one or more antibiotics, with deadly consequences. And antibiotics against them can also wipe out beneficial gut bacteria, allowing serious secondary infections to flare up.
In a study published in Nature, the drug lolamicin knocked out or reduced 130 strains of antibiotic-resistant Gram-negative bacteria in cell cultures. It also successfully treated drug-resistant bloodstream infections and pneumonia in mice while sparing their gut microbiome.
With their microbiomes intact, the mice then fought off secondary infection with Clostridioides difficile (a leading cause of opportunistic and sometimes fatal infections in US health care facilities), while mice treated with other compounds that damaged their microbiome succumbed.
How It Works
Like a well-built medieval castle, Gram-negative bacteria are encased in two protective walls, or membranes. Dr. Muñoz and her team at UIUC set out to breach this defense by finding compounds that hinder the “Lol system,” which ferries lipoproteins between them.
From one compound they constructed lolamicin, which can stop Gram-negative pathogens — with little effect on Gram-negative beneficial bacteria and no effect on Gram-positive bacteria.
“Gram-positive bacteria do not have an outer membrane, so they do not possess the Lol system,” Dr. Muñoz said. “When we compared the sequences of the Lol system in certain Gram-negative pathogens to Gram-negative commensal [beneficial] gut bacteria, we saw that the Lol systems were pretty different.”
Tossing a monkey wrench into the Lol system may be the study’s biggest contribution to future antibiotic development, said Kim Lewis, PhD, professor of biology and director of the Antimicrobial Discovery Center at Northeastern University, Boston, who has discovered several antibiotics now in preclinical research. One, darobactin, targets Gram-negative bugs without affecting the gut microbiome. Another, teixobactin, takes down Gram-positive bacteria without causing drug resistance.
“Lolamicin hits a novel target. I would say that’s the most significant study finding,” said Dr. Lewis, who was not involved in the study. “That is rare. If you look at antibiotics introduced since 1968, they have been modifications of existing antibiotics or, rarely, new chemically but hitting the same proven targets. This one hits something properly new, and [that’s] what I found perhaps the most original and interesting.”
Kirk E. Hevener, PharmD, PhD, associate professor of Pharmaceutical Sciences at the University of Tennessee Health Science Center, Memphis, Tennessee, agreed. (Dr. Hevener also was not involved in the study.) “Lolamicin works by targeting a unique Gram-negative transport system. No currently approved antibacterials work in this way, meaning it potentially represents the first of a new class of antibacterials with narrow-spectrum Gram-negative activity and low gastrointestinal disturbance,” said Dr. Hevener, whose research looks at new antimicrobial drug targets.
The UIUC researchers noted that lolamicin has one drawback: Bacteria frequently developed resistance to it. But in future work, it could be tweaked, combined with other antibiotics, or used as a template for finding other Lol system attackers, they said.
“There is still a good amount of work cut out for us in terms of assessing the clinical translatability of lolamicin, but we are hopeful for the future of this drug,” Dr. Muñoz said.
Addressing a Dire Need
Bringing such a drug to market — from discovery to Food and Drug Administration approval — could take more than a decade, said Dr. Hevener. And new agents, especially for Gram-negative bugs, are sorely needed.
Not only do these bacteria shield themselves with a double membrane but they also “have more complex resistance mechanisms including special pumps that can remove antibacterial drugs from the cell before they can be effective,” Dr. Hevener said.
As a result, drug-resistant Gram-negative bacteria are making treatment of severe infections such as sepsis and pneumonia in health care settings difficult.
Bloodstream infections with drug-resistant Klebsiella pneumoniae have a 40% mortality rate, Dr. Lewis said. And microbiome damage caused by antibiotics is also widespread and deadly, wiping out communities of helpful, protective gut bacteria. That contributes to over half of the C. difficile infections that affect 500,000 people and kill 30,000 a year in the United States.
“Our arsenal of antibacterials that can be used to treat Gram-negative infections is dangerously low,” Dr. Hevener said. “Research will always be needed to develop new antibacterials with novel mechanisms of activity that can bypass bacterial resistance mechanisms.”
A version of this article appeared on Medscape.com.
Vitamin D Test Inaccuracies Persist Despite Gains in Field: CDC
Some vitamin D tests may give misleading results despite progress made in recent years to improve the quality of these assays, according to the US Centers for Disease Control and Prevention (CDC).
Otoe Sugahara, manager of the CDC Vitamin D Standardization-Certification Program (VDSCP), presented an update of her group’s work at ENDO 2024, the Endocrine Society’s annual meeting in Boston.
“Though most vitamin D tests in our program have improved, there still remain some sample-specific inaccuracies. The CDC is working with program participants to address these situations,” Ms. Sugahara said in a statement released by the Endocrine Society.
For example, some assays measure other compounds besides 25-hydroxyvitamin D, which can falsely elevate results of some blood samples, Ms. Sugahara reported. As a result, some samples may be misclassified, with results reported as sufficient when they should have indicated a vitamin D deficiency.
“While most vitamin D tests are effective, it is important for healthcare providers to be aware of the potential inconsistencies associated with vitamin D tests to avoid misclassification of the patients,” Ms. Sugahara and coauthors said in an abstract provided by the Endocrine Society.
Ms. Sugahara’s report provided a snapshot of the state of longstanding efforts to improve the quality of a widely performed service in US healthcare: testing vitamin D levels.
These include an international collaboration that gave rise in 2010 to a vitamin D standardization program, from which the CDC’s VDSCP certification emerged. Among the leaders of these efforts was Christopher Sempos, PhD, then with the Office of Dietary Supplements at the National Institutes of Health.
Many clinicians may not be aware of the concerns about the accuracy of vitamin D tests that led to the drive for standardization, Dr. Sempos, now retired, said in an interview. And, in his view, it’s something that busy practitioners should not have to consider.
“They have literally thousands of diseases they have to be able to recognize and diagnose,” Dr. Sempos said. “They should be able to count on the laboratory system to give them accurate and precise data.”
‘Nudging’ Toward Better Results
The CDC’s certification program gives labs and companies detailed information about the analytical accuracy and precision of their vitamin D tests.
This feedback has paid off with improved results, Andy Hoofnagle, MD, PhD, professor of laboratory medicine and pathology at the University of Washington in Seattle, told this news organization. It helps by “nudging manufacturers in the right direction,” he said.
“Some manufacturers reformulated, others recalibrated, which is a lot of effort on their part, so that when the patient gets a number, it actually means the right thing,” said Dr. Hoofnagle, who is also chair of the Accuracy-Based Programs Committee of the College of American Pathologists.
“There are still many immunoassays on the market that aren’t giving the correct results, unfortunately, but the standardization certification program has really pushed the field in the right direction,” he said.
US scientists use two main types of technologies to measure vitamin D in the blood, Ms. Sugahara said. One is mass spectrometry, which separately measures 25-hydroxyvitamin D2 and D3 and sums the values. The other type, immunoassay, measures both compounds at the same time and reports one result for total 25-hydroxyvitamin D.
At the ENDO 2024 meeting, Ms. Sugahara reported generally positive trends seen in the VDSCP. For example, the program looks at specific tests’ bias, or the deviation of test results from the true value, as determined with the CDC’s reference method for vitamin D.
Average calibration bias was less than 1% for all assays in the VDSCP in 2022, Ms. Sugahara said. The average calibration bias for immunoassays was 0.86%, and for assays using mass spectrometry, it was 0.55%, Ms. Sugahara reported.
These are improved results compared with 2019 data, in which mass spectrometry–based assays had a mean bias of 1.9% and immunoassays had a mean bias of 2.4%, the CDC told this news organization in an email exchange.
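For readers who want to see how the calibration bias figures above are derived, here is a minimal sketch of a percent-bias calculation against a reference method; the sample values are invented for illustration and are not VDSCP data.

```python
# Minimal sketch of a percent calibration-bias calculation, in the spirit of
# the VDSCP comparison described above. All numbers are invented examples,
# not program data.

def percent_bias(measured: float, reference: float) -> float:
    """Percent deviation of a test result from the reference-method value."""
    return 100.0 * (measured - reference) / reference

# Hypothetical total 25(OH)D results (ng/mL): assay value vs reference value.
# For a mass spectrometry assay, the "measured" total would be the sum of the
# separately measured D2 and D3 fractions.
samples = [
    (30.8, 30.0),
    (19.7, 20.0),
    (45.9, 45.0),
]

biases = [percent_bias(m, r) for m, r in samples]
mean_bias = sum(biases) / len(biases)
print(f"per-sample bias (%): {[round(b, 2) for b in biases]}")
print(f"mean calibration bias: {mean_bias:.2f}%")
```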
The CDC said the VDSCP supports laboratories and researchers from around the world, including ones based in the US, China, Australia, Japan, and Korea.
Call for Research
Vitamin D tests are widely administered despite questions about their benefit for people who do not appear likely to be vitamin D deficient.
The Endocrine Society’s newly released practice guideline recommends against routine testing of blood vitamin D levels in the general population.
Laboratory testing has increased over the years owing to studies reporting associations between blood vitamin D [25(OH)D] levels and a variety of common disorders, including musculoskeletal, metabolic, cardiovascular, malignant, autoimmune, and infectious diseases, wrote Marie B. Demay, MD, of Harvard Medical School in Boston, and coauthors in the new guideline. It was published on June 3 in The Journal of Clinical Endocrinology & Metabolism.
“Although a causal link between serum 25(OH)D concentrations and many disorders has not been clearly established, these associations have led to widespread supplementation with vitamin D and increased laboratory testing for 25(OH)D in the general population,” they wrote.
It’s uncertain that “any putative benefits of screening would outweigh the increased burden and cost, and whether implementation of universal 25(OH)D screening would be feasible from a societal perspective,” Dr. Demay and coauthors added.
They noted that the influential US Preventive Services Task Force also has raised doubts about widespread use of vitamin D tests.
The USPSTF has a somewhat different take from the Endocrine Society. The task force in 2021 reiterated its view that there is not enough evidence to recommend for or against widespread vitamin D testing for adults. The task force gave this test an I grade, meaning there is insufficient evidence to weigh the risks and benefits. That’s the same grade the task force gave it in 2014.
The USPSTF uses a grade of D to recommend against use of a test or service.
In an interview with this news organization, John Wong, MD, vice chair of the USPSTF, reiterated his group’s call for more research into the potential benefits and harms of vitamin D screening.
One of the challenges in addressing this issue, Dr. Wong noted, has been the variability of test results. Therefore, efforts such as the CDC’s VDSCP to improve test quality may help eventually build the kind of evidence base needed for the task force to offer a more definitive judgment on the tests, he said.
Dr. Wong acknowledged it must be frustrating for clinicians and patients to hear that experts don’t have the evidence needed to make a broad call about whether routine vitamin D tests are beneficial.
“We really would like to have that evidence because we recognize that it’s an important health question to help everybody in this nation stay healthy and live longer,” Dr. Wong said.
A version of this article appeared on Medscape.com.
Early-Life Exposure to Pollution Linked to Psychosis, Anxiety, Depression
Early-life exposure to air and noise pollution is associated with a higher risk for psychosis, depression, and anxiety in adolescence and early adulthood, results from a longitudinal birth cohort study showed.
While air pollution was associated primarily with psychotic experiences and depression, noise pollution was more likely to be associated with anxiety in adolescence and early adulthood.
“Early-life exposure could be detrimental to mental health given the extensive brain development and epigenetic processes that occur in utero and during infancy,” the researchers, led by Joanne Newbury, PhD, of Bristol Medical School, University of Bristol, England, wrote, adding that “the results of this cohort study provide novel evidence that early-life exposure to particulate matter is prospectively associated with the development of psychotic experiences and depression in youth.”
The findings were published online on May 28 in JAMA Network Open.
Large, Longitudinal Study
To learn more about how air and noise pollution may affect the brain from an early age, the investigators used data from the Avon Longitudinal Study of Parents and Children, an ongoing longitudinal birth cohort capturing data on new births in Southwest England from 1991 to 1992.
Investigators captured levels of air pollutants, which included nitrogen dioxide and fine particulate matter with a diameter smaller than 2.5 µm (PM2.5), in the areas where expectant mothers lived and where their children lived until age 12.
They also collected decibel levels of noise pollution in neighborhoods where expectant mothers and their children lived.
Participants were assessed for psychotic experiences, depression, and anxiety when they were 13, 18, and 24 years old.
Among the 9065 participants who had mental health data, 20% reported psychotic experiences, 11% reported depression, and 10% reported anxiety. About 60% of the participants had a family history of mental illness.
At age 13, 13.6% of participants reported psychotic experiences; 9.2% reported them at age 18, and 12.6% did so at age 24.
Fewer participants reported depression and anxiety at age 13 (5.6% for depression and 3.6% for anxiety) and at age 18 (7.9% for depression and 5.7% for anxiety).
After adjusting for individual and family-level variables, including family psychiatric history, maternal social class, and neighborhood deprivation, elevated PM2.5 levels during pregnancy (P = .002) and childhood (P = .04) were associated with a significantly increased risk for psychotic experiences later in life. Pregnancy PM2.5 exposure was also associated with depression (P = .01).
Participants exposed to higher noise pollution in childhood and adolescence had an increased risk for anxiety (P = .03) as teenagers.
Vulnerability of the Developing Brain
The investigators noted that more information is needed to understand the underlying mechanisms behind these associations but noted that early-life exposure could be detrimental to mental health given “extensive brain development and epigenetic processes that occur in utero.”
They also noted that air pollution could lead to restricted fetal growth and premature birth, both of which are risk factors for psychopathology.
Martin Clift, PhD, of Swansea University in Swansea, Wales, who was not involved in the study, said that the paper highlights the need for more consideration of health consequences related to these exposures.
“As noted by the authors, this is an area that has received a lot of recent attention, yet there remains a large void of knowledge,” Dr. Clift said in a UK Science Media Centre release. “It highlights that some of the most dominant air pollutants can impact different mental health diagnoses, but that time-of-life is particularly important as to how each individual air pollutant may impact this diagnosis.”
Study limitations included limited generalizability: the families in the study were more affluent and less diverse than the UK population overall.
The study was funded by the UK Medical Research Council, Wellcome Trust, and University of Bristol. Disclosures were noted in the original article.
A version of this article appeared on Medscape.com.
FDA Approves First-in-Class Drug for Lower-Risk Myelodysplastic Syndromes
The US Food and Drug Administration (FDA) has approved imetelstat (Rytelo, Geron Corporation) for certain patients with relapsed or refractory low- to intermediate-risk myelodysplastic syndromes (MDS).
Specifically, the first-in-class oligonucleotide telomerase inhibitor, which received orphan drug designation, is indicated for adults with MDS who have transfusion-dependent anemia requiring four or more red blood cell units over 8 weeks and who have not responded to, have lost response to, or are not eligible for erythropoiesis-stimulating agents, according to an FDA press release.
“For patients with lower-risk MDS and anemia who are transfusion dependent, we have very few options today and often cycle through available therapies, making the approval of RYTELO potentially practice changing for us,” co-investigator Rami Komrokji, MD, of Moffitt Cancer Center, Tampa, Florida, said in the Geron Corporation’s announcement of the approval.
Approval was based on efficacy and safety findings from the randomized, placebo-controlled, phase 3 IMerge trial, which found significantly improved red blood cell transfusion independence with treatment vs with placebo.
Overall, 178 patients were randomly assigned to the imetelstat arm (n = 118) and the placebo arm (n = 60). The median follow-up was 19.5 months in the treatment arm and 17.5 months in the placebo arm.
Patients received infusions of either 7.1 mg/kg of imetelstat or placebo in 28-day cycles until disease progression or unacceptable toxicity. All patients received supportive care, including red blood cell transfusions.
The rate of 8-week-or-greater red blood cell transfusion independence was 39.8% in the imetelstat arm vs 15% in the placebo arm. The rate of 24-week-or-greater red blood cell transfusion independence was 28% in the treatment arm vs 3.3% in the placebo arm.
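As an illustration of what these response rates mean in absolute terms, the short calculation below derives the risk difference and an approximate number needed to treat from the reported 8-week rates; these derived figures are back-of-the-envelope arithmetic, not endpoints reported by the trial.

```python
# Illustrative arithmetic only: absolute risk difference and approximate
# number needed to treat (NNT), derived from the reported 8-week red blood
# cell transfusion-independence rates. These derived values were not
# reported in the IMerge publication.

imetelstat_rate = 0.398  # 39.8% achieved >= 8-week transfusion independence
placebo_rate = 0.15      # 15% in the placebo arm

risk_difference = imetelstat_rate - placebo_rate
nnt = 1 / risk_difference

print(f"absolute risk difference: {risk_difference:.1%}")
print(f"approximate NNT: {nnt:.1f}")  # about 4 patients
```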
An exploratory analysis among patients who achieved at least 8 weeks of red blood cell transfusion independence revealed that median increases in hemoglobin were 3.6 g/dL in the treatment group vs 0.8 g/dL in the placebo group.
Adverse reactions, occurring in at least 10% of patients and in at least 5% more patients in the treatment arm than in the placebo arm, included decreased platelets, white blood cells, and neutrophils; increased aspartate aminotransferase, alkaline phosphatase, and alanine aminotransferase; and fatigue, prolonged partial thromboplastin time, arthralgia/myalgia, COVID-19, and headache.
The recommended imetelstat dose is 7.1 mg/kg administered as an intravenous infusion over 2 hours every 28 days, according to the full prescribing information.
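To make the weight-based regimen concrete, the sketch below multiplies the labeled 7.1 mg/kg dose by a hypothetical 70-kg body weight; the weight is an invented example, and the calculation is illustrative arithmetic only, not dosing guidance.

```python
# Illustrative arithmetic only: resolving the weight-based dose stated above
# for a hypothetical patient weight. This is not dosing guidance; refer to
# the full prescribing information for actual administration details.

dose_mg_per_kg = 7.1          # recommended dose per the prescribing information
patient_weight_kg = 70.0      # hypothetical example weight

total_dose_mg = dose_mg_per_kg * patient_weight_kg
print(f"example calculated dose: {total_dose_mg:.0f} mg every 28 days")  # ~497 mg
```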
“What is exciting about RYTELO is the totality of the clinical benefit across [lower risk] MDS patients irrespective of ring sideroblast status or high transfusion burden, including sustained and durable transfusion independence and increases in hemoglobin levels, all within a well-characterized safety profile of generally manageable cytopenias,” Dr. Komrokji stated. The treatment goal for patients with this condition “is transfusion-independence and before today, this wasn’t possible for many patients.”
A version of this article appeared on Medscape.com.
Fine Particulate Matter Raises Type 2 Diabetes Risk in Women
TOPLINE:
Long-term exposure to fine particulate matter is associated with higher fasting blood glucose (FBG) levels and an increased type 2 diabetes risk, significantly contributing to the diabetes-related health burden among women of reproductive age.
METHODOLOGY:
- Exposure to fine particulate matter < 2.5 µm (PM2.5) is a known risk factor for type 2 diabetes, but its effect on women of reproductive age, who undergo hormonal fluctuations during reproductive events, is not well studied.
- Researchers evaluated the association of long-term exposure to PM2.5 with FBG levels and diabetes risk in 20,076,032 eligible women of reproductive age (average age, 27.04 years) across 350 cities in China between 2010 and 2015.
- They assessed PM2.5 exposure at the participants’ residential addresses and calculated average long-term exposure at 1 (lag 1 year), 2 (lag 2 years), and 3 years (lag 3 years) before the survey date, as defined by the World Health Organization (WHO).
- The primary outcomes were FBG levels and diabetes prevalence (FBG ≥ 7 mmol/L classified as diabetes; FBG 6.1-7 mmol/L classified as prediabetes); these cutoffs are illustrated in the sketch after this list.
- The study also evaluated the diabetes burden attributed to long-term PM2.5 exposure as per the Chinese National Ambient Air Quality Standards (annual mean PM2.5 exposure limit, > 35 µg/m3) and the WHO air quality guideline (annual mean PM2.5 exposure limit, > 5 µg/m3).
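As a concrete illustration of the FBG cutoffs listed above, here is a minimal classification sketch; treating prediabetes as 6.1 mmol/L up to, but not including, 7 mmol/L is an assumption made for illustration, since the abstract reports the ranges only briefly.

```python
# Minimal sketch of the FBG-based outcome classification described above.
# Assumption for illustration: prediabetes covers 6.1 mmol/L up to (but not
# including) 7 mmol/L, and values of 7 mmol/L or higher count as diabetes.

def classify_fbg(fbg_mmol_per_l: float) -> str:
    if fbg_mmol_per_l >= 7.0:
        return "diabetes"
    if fbg_mmol_per_l >= 6.1:
        return "prediabetes"
    return "normoglycemia"

for value in (5.4, 6.3, 7.2):
    print(value, "->", classify_fbg(value))
```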
TAKEAWAY:
- The median PM2.5 exposure levels over lag periods of 1, 2, and 3 years were 67, 67, and 66 µg/m3, respectively, more than 13 times the WHO limit.
- Each interquartile range increase of 27 μg/m3 in the 3-year average PM2.5 exposure was associated with a 0.078 mmol/L higher FBG level (P < .05), an 18% higher risk for diabetes (odds ratio [OR], 1.18; 95% CI, 1.16-1.19), and a 5% higher risk for prediabetes (OR, 1.05; 95% CI, 1.04-1.05); rescaling these odds ratios to other increments is sketched after this list.
- Long-term exposure to PM2.5 > 5 µg/m3 and 35 µg/m3 in the previous 3 years corresponded to an additional 41.7 (95% CI, 39.3-44.0) and 78.6 (95% CI, 74.5-82.6) thousand cases of diabetes nationwide, respectively.
- The associations between higher PM2.5 exposure and elevated FBG levels and diabetes risk were stronger in women with overweight or obesity than in those without and in women aged ≥ 35 years than in those younger than 35 years (P < .001).
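The odds ratios above are reported per 27-µg/m3 interquartile-range increase. Assuming the log-linear exposure-response model that typically underlies such estimates, they can be rescaled to other increments as sketched below; both the log-linear assumption and the 10-µg/m3 example increment are illustrative and not taken from the paper.

```python
import math

# Rescaling an odds ratio reported per one exposure increment to a different
# increment, under an assumed log-linear exposure-response relationship.
# The 10 ug/m3 example increment is illustrative, not from the study.

def rescale_or(or_per_increment: float, increment: float, new_increment: float) -> float:
    return math.exp(math.log(or_per_increment) * new_increment / increment)

or_diabetes_per_iqr = 1.18   # reported per 27 ug/m3 IQR increase
print(round(rescale_or(or_diabetes_per_iqr, 27.0, 10.0), 3))  # ~1.063 per 10 ug/m3
```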
IN PRACTICE:
“These findings carry significant public health implications for formulating effective intervention strategies and environmental policies to better protect women’s health, particularly in countries with relatively high levels of air pollution and a large population with diabetes, such as China,” the authors wrote.
SOURCE:
The study, led by Yang Shen, Key Laboratory of Public Health Safety of the Ministry of Education and National Health Commission Key Laboratory of Health Technology Assessment, School of Public Health, Fudan University, Shanghai, China, was published online in Diabetes Care.
LIMITATIONS:
Exposure measurement error was possible because residential address estimates were used as a proxy for actual personal exposure. Questionnaires were used to retrospectively collect information on parameters such as smoking and alcohol consumption, which may have introduced recall bias. Data on potential confounders, such as diet and physical activity, were not included. The distinction between type 1 and type 2 diabetes was not reported owing to data collection–related limitations.
DISCLOSURES:
The study was supported by the National Key Research and Development Program of China, Henan Key Research and Development Program, State Key Laboratory of Resources and Environmental Information System, and Three-Year Public Health Action Plan of Shanghai. The authors declared no conflicts of interest.
A version of this article first appeared on Medscape.com.
Celiac Disease: Five Things to Know
Celiac disease is a chronic, immune-mediated, systemic disorder caused by intolerance to gluten — a protein present in rye, barley, and wheat grains — that affects genetically predisposed individuals.
Due to its wide spectrum of clinical manifestations, celiac disease resembles a multisystemic disorder. Its most common gastrointestinal (GI) symptoms include chronic diarrhea, weight loss, and abdominal distention. However, celiac disease can also manifest in myriad extraintestinal symptoms, ranging from headache and fatigue to delayed puberty and psychiatric disorders, with differing presentations in children and adults.
To date, the only treatment is adopting a gluten-free diet (GFD). Although key to preventing persistent villous atrophy, the main cause of complications in celiac disease, lifelong adherence to GFD is challenging and may not resolve all clinical issues. These shortcomings have driven recent efforts to develop novel therapeutic options for patients with this disease.
Here are five things to know about celiac disease.
1. Rising Prevalence of Celiac Disease and Other Autoimmune Disorders Suggests Environmental Factors May Be at Play
Gluten was first identified as the cause of celiac disease in the 1950s. At that time, the condition was thought to be a relatively rare GI disease of childhood that primarily affected people of European descent, but it is now known to be a common disease affecting those of various ages, races, and ethnicities.
A 2018 meta-analysis found the pooled global prevalence of celiac disease was 1.4%. Incidence has increased by as much as 7.5% annually over the past several decades.
Increased awareness among clinicians and improved detection likely play a role in the trend. However, the growth in celiac disease is consistent with that seen for other autoimmune disorders, according to a 2024 update of evidence surrounding celiac disease. Shared environmental factors have been proposed as triggers for celiac disease and other autoimmune diseases and appear to be influencing their rise, the authors noted. These factors include migration and population growth, changing dietary patterns and food processing practices, and altered wheat consumption.
2. No-Biopsy Diagnosis Is Accepted for Children and Shows Promise for Adults
It is estimated that almost 60 million people worldwide have celiac disease, but most remain undiagnosed or misdiagnosed, or they experience significant diagnostic delays.
Prospective data indicate that children with first-degree relatives with celiac disease are at a significantly higher risk of developing the condition, which should prompt screening efforts in this population.
The 2023 updated guidelines from the American College of Gastroenterology (ACG) state that serologic testing plays a central role in screening. Screening commonly involves testing for serological markers of the disease, including immunoglobulin A (IgA), anti-tissue transglutaminase IgA (tTG-IgA), anti-deamidated gliadin peptide antibodies, and endomysial antibodies.
To confirm diagnosis, clinicians have relied on intestinal biopsy since the late 1950s. The ACG still recommends esophagogastroduodenoscopy with multiple duodenal biopsies for confirmation of diagnosis in both children and adults with suspicion of celiac disease. However, recent years have seen a shift toward a no-biopsy approach.
For more than a decade in Europe, a no-biopsy approach has been established practice in pediatric patients, for whom the burden of obtaining a histological confirmation is understandably greater. Most guidelines now permit children to be diagnosed with celiac disease in the absence of a biopsy under specific circumstances (eg, characteristic symptoms of celiac disease and tTG-IgA levels > 10 times the upper limit of normal). The ACG guidelines state that “this approach is a reasonable alternative to the standard approach to a [celiac disease] diagnosis in selected children.”
The ACG does not recommend a no-biopsy approach in adults, noting that, in comparison with children, there is a relative lack of data indicating that serology is predictive in this population. However, it does recognize that physicians may encounter patients for whom a biopsy diagnosis may not be safe or practical. In such cases, an “after-the-fact” diagnosis of likely celiac disease can be given to symptomatic adult patients with a ≥ 10-fold elevation of tTG-IgA and a positive endomysial antibody in a second blood sample.
A 2024 meta-analysis of 18 studies involving 12,103 adult patients from 15 countries concluded that a no-biopsy approach using tTG-IgA antibody levels ≥ 10 times the upper limit of normal was highly specific and predictive of celiac disease.
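To make the thresholds above concrete, the following is a minimal, purely illustrative sketch of the decision logic summarized in this section. The function name, the age cutoff, and the simplified branching are assumptions for illustration only; this is not a validated diagnostic algorithm, and actual diagnosis rests on clinical judgment and the full guideline criteria.

# Illustrative sketch of the no-biopsy decision logic described above; not a
# validated diagnostic algorithm.
def no_biopsy_assessment(age_years: float,
                         ttg_iga_fold_uln: float,
                         characteristic_symptoms: bool,
                         ema_positive_second_sample: bool = False) -> str:
    """Return a rough label; tTG-IgA is expressed as a multiple of the upper limit of normal."""
    if ttg_iga_fold_uln < 10:
        return "biopsy recommended (tTG-IgA below 10x the upper limit of normal)"
    if age_years < 18 and characteristic_symptoms:
        # Pediatric pathway: high tTG-IgA plus characteristic symptoms; some guidelines
        # also require endomysial antibody confirmation.
        return "no-biopsy diagnosis may be considered (pediatric criteria)"
    if age_years >= 18 and characteristic_symptoms and ema_positive_second_sample:
        # Adult pathway per the ACG: reserved for patients in whom biopsy is unsafe or
        # impractical; yields an "after-the-fact" diagnosis of likely celiac disease.
        return "likely celiac disease (when biopsy is not safe or practical)"
    return "biopsy recommended"

# Example: a symptomatic 9-year-old with tTG-IgA at 12 times the upper limit of normal.
print(no_biopsy_assessment(age_years=9, ttg_iga_fold_uln=12, characteristic_symptoms=True))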
3. Celiac Disease Is Associated With Several Life-Threatening Conditions
Emerging data indicate that gastroenterologists should be vigilant in screening patients with celiac disease for several other GI conditions.
Inflammatory bowel disease and celiac disease have a strong bidirectional association, suggesting a possible genetic link between the conditions and indicating that physicians should consider the alternate diagnosis when symptoms persist after treatment.
Given the hypervigilance around food and diet inherent to celiac disease, patients are at an increased risk of developing avoidant/restrictive food intake disorder, according to a 2022 retrospective study.
In 2023, Italian investigators showed that children with celiac disease have an elevated prevalence of functional GI disorders even after adopting a GFD for a year, regardless of whether they consumed processed or natural foods. It was unclear whether this was due to a chronic inflammatory process or to nutritional factors.
Complications resulting from celiac disease are not limited to GI disorders. For a variety of underlying pathophysiological reasons, including intestinal permeability, hyposplenism, and malabsorption of nutrients, patients with celiac disease may be at a higher risk for non-GI conditions, such as osteopenia, women’s health disorders (eg, ovarian failure, endometriosis, or pregnancy loss), juvenile idiopathic arthritis in children and rheumatoid arthritis in adults, certain forms of cancer, infectious diseases, and cardiomyopathy.
4. GFD Is the Only Treatment, but It’s Imperfect and Frustrating for Patients
GFD is the only treatment for celiac disease and must be adhered to without deviation throughout a patient’s life.
Maintaining unwavering adherence yields considerable benefits: improved clinical symptoms, robust mucosal healing, and normalization of serological markers. Yet it also takes a considerable toll. Patients with celiac disease struggle with a host of negative physical, psychological, and social impacts, and they report a treatment burden higher than that of gastroesophageal reflux disease or hypertension and comparable with that of end-stage renal disease.
GFD also poses financial challenges. Although the price of gluten-free products has decreased in recent years, they still cost significantly more than items with gluten.
Adherence to GFD does not always equate to complete mucosal recovery. While mucosal recovery is achieved in 95% of children within 2 years of the diet’s adoption, only 34% and 66% of adults achieve it within 2 and 5 years, respectively.
GFD may lead to nutrient imbalances because gluten-free foods are typically low in alimentary fiber, micronutrients (eg, vitamin D, vitamin B12, or folate), and minerals (eg, iron, zinc, magnesium, or calcium). Because these products also tend to be higher in sugar and fat, the diet may leave patients susceptible to unwanted weight gain.
The pervasiveness of gluten in the food production system makes the risk for cross-contamination high. Gluten is often found in both naturally gluten-free foods and products labeled as such. Gluten-sensing technologies, some of which can be used via smartphone apps, have been developed to help patients identify possible cross-contamination. However, the ACG guidelines recommend against the use of these technologies until there is sufficient evidence supporting their ability to improve adherence and clinical outcomes.
5. Novel Therapies for Celiac Disease Are in the Pipeline
The limitations of GFD as the standard treatment for celiac disease have led to an increased focus on developing novel therapeutic interventions. They can be sorted into five key categories: modulation of the immunostimulatory effects of toxic gluten peptides, elimination of toxic gluten peptides before they reach the intestine, induction of gluten tolerance, modulation of intestinal permeability, and restoration of gut microbiota balance.
Three therapies designed to block antigen presentation by HLA-DQ2/8, the HLA alleles that predispose people to celiac disease, show promise: TPM502, an agent that contains three gluten-specific antigenic peptides with overlapping T-cell epitopes for HLA-DQ2.5; KAN-101, designed to induce gluten tolerance by targeting receptors on the liver; and DONQ52, a multispecific antibody that targets HLA-DQ2. KAN-101 received Fast Track designation from the US Food and Drug Administration in 2022.
These and several other agents in clinical and preclinical development are discussed in detail in a 2024 review article. Although no therapies have yet reached phase 3 testing, when they do, their arrival will undoubtedly be welcomed by those with celiac disease.
A version of this article first appeared on Medscape.com.
Clear Coverage Preference for Humira Over Biosimilars Seen in Most Medicare Part D Plans
Despite the influx of adalimumab biosimilars that entered the market in 2023, Humira remains on top.
As of January 2024, both high and low concentrations of Humira, the originator adalimumab product, are nearly universally covered by Medicare Part D plans, while only half of these plans covered adalimumab biosimilars, according to a new research letter published online on June 6, 2024, in JAMA.
Of the plans that covered both, only 1.5% had lower-tier placement for biosimilars.
“This study of formulary coverage helps explain limited uptake of adalimumab biosimilars,” wrote the authors, led by Matthew J. Klebanoff, MD, of the University of Pennsylvania, Philadelphia. “Subpar biosimilar adoption will not only undermine their potential to reduce spending but also may deter investments in biosimilar development.”
The analysis included the formulary and enrollment files for 5609 Medicare Part D plans, representing 44.4 million beneficiaries. Drug list prices and wholesale acquisition costs (WAC) were pulled from the Red Book database, which provides prices for prescription and over-the-counter drugs as well as medical devices and supplies.
Nearly all (98.9%) Part D plans covered the high-concentration (100 mg/mL) version of adalimumab, which has a WAC of $6923. This higher concentration is the most popular formulation of the drug, making up an estimated 85% of prescriptions. By comparison, 26.8% of plans covered the high-concentration version of adalimumab-adaz (Hyrimoz), which has a WAC 5% less than that of the reference product.
The unbranded version of adalimumab-adaz, sold at an 81% discount from the reference product, was covered by 13% of plans. Only 4.6% of plans covered high-concentration adalimumab-bwwd (Hadlima), manufactured by Samsung Bioepis.
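For a rough sense of scale, the percentages above imply the following approximate list prices. This is an illustrative calculation only: the $6923 reference WAC and the discount percentages come from the research letter, but the computed dollar figures are rounded estimates, not prices reported in the study.

# Illustrative arithmetic only; computed figures are rounded estimates.
humira_wac = 6923  # high-concentration Humira wholesale acquisition cost, per the research letter

discounts = {
    "adalimumab-adaz, branded (5% discount)": 0.05,
    "adalimumab-adaz, unbranded (81% discount)": 0.81,
}

for product, discount in discounts.items():
    print(f"{product}: ~${humira_wac * (1 - discount):,.0f}")
# Prints roughly $6,577 for the 5% discount and $1,315 for the 81% discount.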
In January 2024, no high-concentration adalimumab biosimilar had been granted interchangeability status by the US Food and Drug Administration (FDA). Adalimumab-ryvk (Simlandi) was the first biosimilar to receive this designation and was launched in late May 2024.
Coverage for the lower concentration of adalimumab was nearly universal (98.7% of plans). About half of the plans (50.7%) covered adalimumab-adbm (Cyltezo) at a 5% discount. Adalimumab-adbm (Boehringer Ingelheim) was the first interchangeable Humira biosimilar approved by the FDA, but it is only interchangeable with the less popular, lower concentration formulation of adalimumab.
All other biosimilars were covered by less than 5% of Medicare Part D plans, even though some have a WAC 86% below that of Humira.
Few plans (1.5%) placed biosimilars on a preferred tier relative to the reference product, and none used prior authorization to incentivize biosimilar use. Most plans preferred the higher-priced versions of adalimumab biosimilars, which appeal to pharmacy benefit managers because they can collect higher rebates, the authors noted.
“Ultimately, biosimilars’ true effect on spending will depend not on their list price but rather on their net price (after rebates) and their influence on originator biologics’ net price,” they wrote. They pointed to the 38% drop in Humira’s annual net price at the end of 2023 compared with the prior year.
“Despite this price decrease, biosimilars offer far greater potential savings: Several adalimumab biosimilars have list prices that are less than half of Humira’s net price,” the authors continued, and encouraged policy makers to mandate coverage for these lower-priced options.
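A simple sketch can illustrate the list-versus-net distinction the authors emphasize. The 50% rebate below is entirely hypothetical; actual rebate terms are confidential and are not reported in the study. The point is only to show why a biosimilar's list price can undercut an originator's post-rebate net price.

# Hypothetical illustration of list price vs. net price after rebates; the rebate
# percentage is assumed, not taken from the study.
def net_price(list_price: float, rebate_rate: float) -> float:
    """Net price after a percentage rebate off the list price."""
    return list_price * (1 - rebate_rate)

originator_list = 6923                               # Humira high-concentration WAC from the article
originator_net = net_price(originator_list, 0.50)    # assumed 50% rebate, for illustration only
biosimilar_list = net_price(originator_list, 0.81)   # unbranded biosimilar at an 81% discount, before any rebate

print(f"Originator net price under a hypothetical 50% rebate: ~${originator_net:,.0f}")
print(f"Biosimilar list price at an 81% discount: ~${biosimilar_list:,.0f}")
# Under these assumptions the biosimilar's list price (~$1,315) is well below the
# originator's post-rebate net price (~$3,462), which is the comparison the authors highlight.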
Dr. Klebanoff was supported by a grant from the Health Resources and Services Administration. Two coauthors were supported by a grant from the National Institute on Aging. One author reported receiving consulting fees from AbbVie, which manufactures Humira.
A version of this article appeared on Medscape.com.
FROM JAMA