US Alcohol-Related Deaths Double Over 2 Decades, With Notable Age and Gender Disparities
TOPLINE:
US alcohol-related mortality rates doubled from 10.7 to 21.6 per 100,000 between 1999 and 2020, with the largest relative rise (3.8-fold) observed in adults aged 25-34 years. Women experienced a 2.5-fold increase, and the Midwest region showed a similarly steep rise.
METHODOLOGY:
- The analysis used the US Centers for Disease Control and Prevention's Wide-Ranging Online Data for Epidemiologic Research (WONDER) database to examine alcohol-related mortality trends from 1999 to 2020.
- Researchers analyzed data from a total US population of 180,408,769 people aged 25 to 85+ years in 1999 and 226,635,013 people in 2020.
- International Classification of Diseases, Tenth Revision, codes were used to identify deaths with alcohol attribution, including mental and behavioral disorders, alcoholic organ damage, and alcohol-related poisoning.
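For readers who want to see the selection and rate arithmetic concretely, here is a minimal sketch. The column name and code list are hypothetical: F10 (alcohol-related mental and behavioral disorders), K70 (alcoholic liver disease), and X45 (accidental alcohol poisoning) are representative ICD-10 rubrics for the categories named above, not the study's full code set.

```python
# Illustrative only: flag alcohol-attributable deaths by ICD-10 rubric and
# compute a crude mortality rate per 100,000. Column names are hypothetical.
import pandas as pd

# Representative (not exhaustive) ICD-10 rubrics for alcohol attribution.
ALCOHOL_CODES = {"F10", "K70", "X45"}

def crude_rate_per_100k(deaths: pd.DataFrame, population: int) -> float:
    """Crude alcohol-related mortality rate per 100,000 population."""
    is_alcohol = deaths["icd10_code"].str[:3].isin(ALCOHOL_CODES)
    return is_alcohol.sum() / population * 100_000

# The reported twofold increase is just the ratio of the two crude rates:
fold_change = 21.6 / 10.7  # ~2.0
```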
TAKEAWAY:
- Overall mortality rates increased from 10.7 (95% CI, 10.6-10.8) per 100,000 in 1999 to 21.6 (95% CI, 21.4-21.8) per 100,000 in 2020, representing a significant twofold increase.
- Adults aged 55-64 years had the highest absolute rates in both 1999 and 2020.
- American Indian and Alaska Native individuals experienced the steepest increase and highest absolute rates among all racial groups.
- The West region maintained the highest absolute rates in both 1999 and 2020, despite the Midwest showing the largest increase.
IN PRACTICE:
“Individuals who consume large amounts of alcohol tend to have the highest risks of total mortality as well as deaths from cardiovascular disease. Cardiovascular disease deaths are predominantly due to myocardial infarction and stroke. To mitigate these risks, health providers may wish to implement screening for alcohol use in primary care and other healthcare settings. By providing brief interventions and referrals to treatment, healthcare providers would be able to achieve the early identification of individuals at risk of alcohol-related harm and offer them the support and resources they need to reduce their alcohol consumption,” wrote the authors of the study.
SOURCE:
The study was led by Alexandra Matarazzo, BS, Charles E. Schmidt College of Medicine, Florida Atlantic University, Boca Raton. It was published online in The American Journal of Medicine.
LIMITATIONS:
According to the authors, the cross-sectional nature of the data limits the study to descriptive analysis only, making it suitable for hypothesis generation but not hypothesis testing. While the use of complete population data supports validity and generalizability within the United States, potential bias and uncontrolled confounding may exist because of differences in population mix between the two time points.
DISCLOSURES:
The authors reported no relevant conflicts of interest. One coauthor disclosed serving as an independent scientist in an advisory role to investigators and sponsors as Chair of Data Monitoring Committees for Amgen and UBC, to the Food and Drug Administration, and to UpToDate. Additional disclosures are noted in the original article.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Vaping Linked to Higher Risk of Blurred Vision & Eye Pain
TOPLINE:
Adults who use electronic cigarettes (e-cigarettes/vapes) had more than twice the risk for developing uveitis compared with nonusers, with elevated risks persisting for up to 4 years after initial use. This increased risk was observed across all age groups and affected both men and women as well as various ethnic groups.
METHODOLOGY:
- Researchers used the TriNetX global database, which contains data from over 100 million patients across the United States, Europe, the Middle East, and Africa, to examine the risk for developing uveitis among e-cigarette users.
- A total of 419,325 e-cigarette users older than 18 years (mean age, 51.41 years; 48.65% women) were included on the basis of diagnosis codes for vaping and unspecified nicotine dependence.
- The e-cigarette users were propensity score–matched to nonusers (a minimal matching sketch follows this list).
- People were excluded if they had comorbid conditions that might have influenced the risk for uveitis.
- The primary outcome measure was the first-time encounter diagnosis of uveitis using diagnosis codes for iridocyclitis, unspecified choroidal inflammation, posterior cyclitis, choroidal degeneration, retinal vasculitis, and pan-uveitis.
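As a rough illustration of the matching step, here is a minimal sketch of 1:1 nearest-neighbor propensity score matching (with replacement, no caliper). The column names are hypothetical stand-ins, and the platform's actual matching procedure is more elaborate than this.

```python
# Minimal sketch of 1:1 nearest-neighbor propensity score matching.
# Hypothetical columns; matching is with replacement and without a caliper.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_one_to_one(df: pd.DataFrame, treat: str, covars: list) -> pd.DataFrame:
    """Pair each exposed patient with the nearest unexposed patient on the score."""
    model = LogisticRegression(max_iter=1000).fit(df[covars], df[treat])
    scored = df.assign(ps=model.predict_proba(df[covars])[:, 1])
    exposed = scored[scored[treat] == 1]
    unexposed = scored[scored[treat] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(unexposed[["ps"]])
    _, idx = nn.kneighbors(exposed[["ps"]])
    return pd.concat([exposed, unexposed.iloc[idx.ravel()]])

# Usage with hypothetical columns:
# matched = match_one_to_one(cohort, treat="ecig_user",
#                            covars=["age", "sex", "smoking", "diabetes"])
```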
TAKEAWAY:
- E-cigarette users had a significantly higher risk for developing uveitis than nonusers (hazard ratio [HR], 2.53; 95% CI, 2.33-2.76), with elevated risks for iridocyclitis (HR, 2.59), unspecified chorioretinal inflammation (HR, 2.34), and retinal vasculitis (HR, 1.95); see the Cox model sketch after this list for how such HRs are estimated.
- This increased risk for uveitis was observed across all age groups, affecting all genders and patients from Asian, Black or African American, and White ethnic backgrounds.
- The risk for uveitis was elevated as early as 7 days after e-cigarette use (HR, 6.35) and persisted even 4 years after initial use (HR, 2.58).
- A higher risk for uveitis was observed among individuals with a history of both e-cigarette and traditional cigarette use than among those who used traditional cigarettes only (HR, 1.39).
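Hazard ratios of this kind are conventionally estimated with a Cox proportional hazards model on the matched cohort. A minimal sketch with the lifelines library, again with hypothetical column names:

```python
# Minimal Cox proportional hazards sketch; column names are hypothetical
# stand-ins for follow-up time, uveitis diagnosis, and e-cigarette exposure.
import pandas as pd
from lifelines import CoxPHFitter

def exposure_hazard_ratio(matched: pd.DataFrame) -> float:
    cph = CoxPHFitter()
    cph.fit(matched[["followup_years", "uveitis", "ecig_user"]],
            duration_col="followup_years", event_col="uveitis")
    return cph.hazard_ratios_["ecig_user"]  # e.g., ~2.5 in this study
```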
IN PRACTICE:
“This study has real-world implications as clinicians caring for patients with e-cigarette history should be aware of the potentially increased risk of new-onset uveitis,” the authors wrote.
SOURCE:
The study was led by Alan Y. Hsu, MD, from the Department of Ophthalmology at China Medical University Hospital in Taichung, Taiwan, and was published online on November 12, 2024, in Ophthalmology.
LIMITATIONS:
The retrospective nature of the study limited the determination of direct causality between e-cigarette use and the risk for uveitis. The study lacked information on the duration and quantity of e-cigarette exposure, which may have impacted the findings. Moreover, researchers could not isolate the effect of secondhand exposure to vaping or traditional cigarettes.
DISCLOSURES:
Study authors reported no relevant financial disclosures.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Veterans’ Well-Being Tools Aim to Improve Quality of Life
Could assessing the well-being of older patients create better treatment plans?
Researchers with the US Department of Veterans Affairs posit that doing so just might improve patient quality of life.
In an article in Medical Care, Dawne Vogt, PhD, and her colleagues described two surveys of well-being developed for use in clinical settings.
“Well-Being Signs” (WBS), a 1-minute screening, asks patients how satisfied they are with the most important parts of their daily life, which could include time with family. It also asks how regularly they are involved in those activities and how well they are functioning in them.
“Well-Being Brief” (WBB) is self-administered and asks more in-depth questions about finances, health, social relationships, and vocation. Clinicians can use the tool to make referrals to appropriate services like counseling or resources like senior centers.
“They’re not things that we’ve historically paid a lot of attention to, at least in the healthcare setting,” said Vogt, a research psychologist in the Women’s Health Sciences Division of the VA Boston Healthcare System in Massachusetts. “A growing body of research shows that they have really big implications for health.”
The two approaches stem from an increased awareness of the relationship between social determinants of health and outcomes. Both screenings can be implemented more effectively in a clinical setting than other measures because of their brevity and ease of use, she said.
Vogt shared that anecdotally, she finds patients are pleasantly surprised by the questionnaires “because they’re being seen in a way that they don’t always feel like they’re seen.”
Vogt said that the two well-being measurements are more nuanced than standard screenings for depression.
“A measure of depression tells you something much more narrow than a measure of well-being tells you,” she said, adding that identifying problem areas early can help prevent developing mental health disorders. For example, Vogt said that veterans with higher well-being are less likely to develop posttraumatic stress disorder when exposed to trauma.
The WBS has been validated, while the WBB questionnaire awaits final testing.
James Michail, MD, a family and geriatric physician with Providence Health & Services in Los Angeles, California, said he views the well-being screeners as launching points into discussing whether a treatment is enhancing or inhibiting a patient’s life.
“We have screenings for everything else but not for wellness, and the goal of care isn’t necessarily always treatment,” Michail said. “It’s taking the whole person into consideration. There’s a person behind the disease.”
Kendra Segura, MD, an obstetrician-gynecologist in Los Angeles, said she is open to using a well-being screener. Building rapport with a patient usually takes time, and often only then is a more candid assessment of well-being possible.
“Over the course of several visits, that is when patients open up,” she said. “It’s when that starts to happen where they start to tell you about their well-being. It’s not an easy thing to establish.”
The authors of the article reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Which Breast Cancer Patients Can Skip Postop Radiotherapy?
TOPLINE:
Overall, patients with a high POLAR score derived a significant benefit from adjuvant radiotherapy, while those with a low score did not and might consider forgoing radiotherapy.
METHODOLOGY:
- Radiation therapy after breast-conserving surgery has been shown to reduce the risk for locoregional recurrence and is a standard approach to manage early breast cancer. However, certain patients with a low risk for locoregional recurrence may not benefit from adjuvant radiation, and no commercially available molecular test has been able to identify these patients.
- In the current analysis, researchers assessed whether the POLAR biomarker test could reliably predict locoregional recurrence as well as identify patients who would not benefit from radiotherapy.
- The meta-analysis used data from three randomized trials — Scottish Conservation Trial, SweBCG91-RT, and Princess Margaret RT trial — to validate the POLAR biomarker test in patients with low-risk, HR-positive, HER2-negative, node-negative breast cancer.
- The analysis included 623 patients (ages 50-76), of whom 429 (69%) had high POLAR scores and 194 (31%) had low POLAR scores.
- The primary endpoint was the time to locoregional recurrence, and secondary endpoints included evaluating POLAR as a prognostic factor for locoregional recurrence in patients without radiotherapy and the effect of radiotherapy in patients with low and high POLAR scores.
TAKEAWAY:
- Patients with high POLAR scores demonstrated a significant benefit from radiotherapy. The 10-year locoregional recurrence rate was 7% with radiotherapy vs 20% without radiotherapy (hazard ratio [HR], 0.37; P < .001); such rates are time-to-event estimates (see the Kaplan-Meier sketch after this list).
- Patients with low POLAR scores, however, did not experience a significant benefit from radiotherapy. In this group, the 10-year locoregional recurrence rates were similar with and without radiotherapy (7% vs 5%, respectively; HR, 0.92; P = .832), indicating that radiotherapy could potentially be omitted for these patients.
- Among patients who did not receive radiotherapy (n = 309), higher POLAR scores predicted a greater risk for recurrence, suggesting the genomic signature has prognostic value. There was no evidence, however, that POLAR predicts patients’ risk for distant metastases or mortality.
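The 10-year recurrence rates quoted above can be read off Kaplan-Meier curves per treatment arm. A minimal sketch using lifelines, with hypothetical column names and competing risks ignored for simplicity:

```python
# Minimal sketch: 10-year locoregional recurrence per arm via Kaplan-Meier.
# Hypothetical columns; competing risks are ignored for simplicity.
import pandas as pd
from lifelines import KaplanMeierFitter

def recurrence_at(df: pd.DataFrame, years: float = 10.0) -> dict:
    rates = {}
    for arm, grp in df.groupby("radiotherapy"):  # 0 = no RT, 1 = RT
        kmf = KaplanMeierFitter().fit(grp["years_to_recurrence"], grp["recurred"])
        rates[arm] = 1.0 - kmf.predict(years)  # recurrence = 1 - survival
    return rates
```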
IN PRACTICE:
“This meta-analysis from three randomized controlled trials clearly demonstrates the clinical potential for POLAR to be used in smaller estrogen receptor positive node negative breast cancer patients to identify those women who do not appear to benefit from the use of post-operative adjuvant radiotherapy,” the authors wrote. “This classifier is an important step towards molecularly-stratified targeting of the use of radiotherapy.”
SOURCE:
The study, led by Per Karlsson, MD, PhD, University of Gothenburg, Sweden, was published online in the Journal of the National Cancer Institute.
LIMITATIONS:
One cohort (SweBCG) had limited use of adjuvant systemic therapy, which could affect generalizability. Additionally, low numbers of patients with low POLAR scores in two trials could affect the observed benefit of radiotherapy.
DISCLOSURES:
This study was supported by the Breast Cancer Institute Fund (Edinburgh and Lothians Health Foundation), Canadian Institutes of Health Research, Exact Sciences Corporation, PFS Genomics, Swedish Cancer Society, and Swedish Research Council. One author reported being an employee and owning stock or stock options or patents with Exact Sciences. Several authors reported having various ties with various sources including Exact Sciences.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Stages I-III Screen-Detected CRC Boosts Disease-Free Survival Rates
TOPLINE:
Patients with screen-detected stages I-III colorectal cancer (CRC) had significantly better 3-year disease-free survival than those whose cancers were not detected by screening, independent of patient, tumor, and treatment characteristics.
METHODOLOGY:
- Patients with screen-detected CRC have better stage-specific overall survival rates than those with non-screen–detected CRC, but the impact of screening on recurrence rates is unknown.
- A retrospective study analyzed patients with CRC (age, 55-75 years) from the Netherlands Cancer Registry, comparing cancers diagnosed through screening with those diagnosed outside of screening.
- Screen-detected CRCs were those identified on colonoscopy after a positive fecal immunochemical test (FIT), whereas non-screen–detected CRCs were those detected in symptomatic patients.
TAKEAWAY:
- Researchers included 3725 patients with CRC (39.6% women), of whom 1652 (44.3%) had screen-detected and 2073 (55.7%) had non-screen–detected CRC; cases were distributed approximately evenly across stages I-III (35.3%, 27.1%, and 37.6%, respectively).
- Screen-detected CRC had significantly higher 3-year rates of disease-free survival compared with non-screen–detected CRC (87.8% vs 77.2%; P < .001).
- The improvement in disease-free survival rates for screen-detected CRC was particularly notable in stage III cases, with rates of 77.9% vs 66.7% for non-screen–detected CRC (P < .001).
- Screen-detected CRC was more often detected at an earlier stage than non-screen–detected CRC (stage I or II: 72.4% vs 54.4%; P < .001).
- Across all stages, detection of CRC by screening was associated with a 33% lower risk for recurrence (P < .001) independent of patient age, gender, tumor location, stage, and treatment.
- Recurrence was the strongest predictor of overall survival across the study population (hazard ratio, 15.90; P < .001).
IN PRACTICE:
“Apart from CRC stage, mode of detection could be used to assess an individual’s risk for recurrence and survival, which may contribute to a more personalized treatment,” the authors wrote.
SOURCE:
The study, led by Sanne J.K.F. Pluimers, Department of Gastroenterology and Hepatology, Erasmus University Medical Center/Erasmus MC Cancer Institute, Rotterdam, the Netherlands, was published online in Clinical Gastroenterology and Hepatology.
LIMITATIONS:
The follow-up time was relatively short, restricting the ability to evaluate the long-term effects of screening on CRC recurrence. This study focused on recurrence solely within the FIT-based screening program, and the results were not generalizable to other screening methods. Due to Dutch privacy law, data on CRC-specific causes of death were unavailable, which may have affected the specificity of survival outcomes.
DISCLOSURES:
There was no funding source for this study. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Is Pancreatic Cancer Really Rising in Young People?
TOPLINE:
Given the stable mortality rates in this population, the increase in incidence likely reflects previously undetected cases instead of a true rise in new cases, researchers say.
METHODOLOGY:
- Data from several registries have indicated that the incidence of pancreatic cancer among younger individuals, particularly women, is on the rise in the United States and worldwide.
- In a new analysis, researchers wanted to see if the observed increase in pancreatic cancer incidence among young Americans represented a true rise in cancer occurrence or indicated greater diagnostic scrutiny. If pancreatic cancer incidence is really increasing, “incidence and mortality would be expected to increase concurrently, as would early- and late-stage diagnoses,” the researchers explained.
- The researchers collected data on pancreatic cancer incidence, histology, and stage distribution for individuals aged 15-39 years from US Cancer Statistics, a database covering almost the entire US population from 2001 to 2020. Pancreatic cancer mortality data from the same timeframe came from the National Vital Statistics System.
- The researchers looked at four histologic categories: adenocarcinoma (the dominant pancreatic cancer histology), the rarer endocrine and solid pseudopapillary subtypes, and an “other” category. Researchers also categorized stage-specific incidence as early stage (in situ or localized) or late stage (regional or distant).
TAKEAWAY:
- The incidence of pancreatic cancer increased 2.1-fold in young women (from 3.3 to 6.9 per million) and 1.6-fold in young men (from 3.9 to 6.2 per million) between 2001 and 2019. However, mortality rates remained stable for women (1.5 deaths per million; average annual percent change [AAPC], −0.5%; 95% CI, –1.4% to 0.5%) and men (2.5 deaths per million; AAPC, –0.1%; 95% CI, –0.8% to 0.6%) over this period (percent changes of this kind are computed as in the sketch after this list).
- Looking at cancer subtypes, the increase in incidence was largely caused by early-stage endocrine cancer and solid pseudopapillary neoplasms in women, not adenocarcinoma (which remained stable over the study period).
- Looking at cancer stage, most of the increase in incidence came from detection of smaller tumors (< 2 cm) and early-stage cancer, which rose from 0.6 to 3.7 per million in women and from 0.4 to 2.2 per million in men. The authors also found no statistically significant change in the incidence of late-stage cancer in women or men.
- Rates of surgical treatment for pancreatic cancer increased, more than tripling among women (from 1.5 to 4.7 per million) and more than doubling among men (from 1.1 to 2.3 per million).
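Annual percent change is conventionally estimated by regressing the log of the yearly rate on calendar year; the study's actual estimates come from dedicated trend software, but the core arithmetic is a short log-linear fit. A minimal sketch with illustrative numbers:

```python
# Minimal sketch: annual percent change (APC) from a log-linear fit of
# ln(rate) on year. Real analyses use joinpoint-style trend software.
import numpy as np

def annual_percent_change(years, rates) -> float:
    slope, _ = np.polyfit(years, np.log(rates), 1)
    return 100.0 * (np.exp(slope) - 1.0)

# Illustrative: the 2.1-fold rise in women from 2001 to 2019 (~3.3 to 6.9
# per million) corresponds to roughly +4.2% per year.
print(annual_percent_change([2001, 2019], [3.3, 6.9]))
```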
IN PRACTICE:
“Pancreatic cancer now can be another cancer subject to overdiagnosis: The detection of disease not destined to cause symptoms or death,” the authors concluded. “Although the observed changes in incidence are small, overdiagnosis is especially concerning for pancreatic cancer, as pancreatic surgery has substantial risk for morbidity (in particular, pancreatic fistulas) and mortality.”
SOURCE:
The study, with first author Vishal R. Patel, MD, MPH, and corresponding author H. Gilbert Welch, MD, MPH, from Brigham and Women’s Hospital, Boston, was published online on November 19 in Annals of Internal Medicine.
LIMITATIONS:
The study was limited by the lack of data on the method of cancer detection, which may have affected the interpretation of the findings.
DISCLOSURES:
Disclosure forms are available with the article online.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Levonorgestrel IUDs Linked to Higher Skin Side Effects
TOPLINE:
Levonorgestrel intrauterine devices (IUDs) were associated with higher odds of reported androgenic cutaneous adverse events, including acne, alopecia, and hirsutism, than copper IUDs, with some differences between the available levonorgestrel IUDs.
METHODOLOGY:
- Researchers reviewed the US Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS) through December 2023 for adverse events associated with levonorgestrel IUDs where IUDs were the only suspected cause, focusing on acne, alopecia, and hirsutism.
- They included 139,348 reports for the levonorgestrel IUDs (Mirena, Liletta, Kyleena, Skyla) and 50,450 reports for the copper IUD (Paragard).
TAKEAWAY:
- Levonorgestrel IUD users showed higher odds of reporting acne (odds ratio [OR], 3.21), alopecia (OR, 5.96), and hirsutism (OR, 15.48; all P < .0001) than copper IUD users (a sketch of how such reporting odds ratios are computed follows this list).
- The Kyleena 19.5 mg levonorgestrel IUD was associated with the highest odds of acne reports (OR, 3.42), followed by the Mirena 52 mg (OR, 3.40) and Skyla 13.5 mg (OR, 2.30) levonorgestrel IUDs (all P < .0001).
- The Mirena IUD was associated with the highest odds of alopecia and hirsutism reports (OR, 6.62 and 17.43, respectively), followed by the Kyleena (ORs, 2.90 and 8.17, respectively) and Skyla (ORs, 2.69 and 1.48, respectively) IUDs (all P < .0001).
- Reports of acne, alopecia, and hirsutism were not significantly different between the Liletta 52 mg levonorgestrel IUD and the copper IUD.
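As context for the odds ratios above: in spontaneous-report databases such as FAERS, a reporting odds ratio is typically computed from a 2 × 2 table of reports with and without the event under each exposure. The sketch below shows that generic calculation with invented counts; it illustrates the usual pharmacovigilance formula, not the authors' actual analysis.

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """Reporting OR from a 2x2 table of spontaneous reports:
    a = exposure reports with the event, b = exposure reports without it,
    c = comparator reports with the event, d = comparator reports without it.
    The 95% CI uses the standard normal approximation on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    low, high = (math.exp(math.log(or_) + z * se) for z in (-1.96, 1.96))
    return or_, low, high

# Invented counts for illustration only (not FAERS data).
or_, low, high = reporting_odds_ratio(a=1200, b=138148, c=140, d=50310)
print(f"OR {or_:.2f} (95% CI, {low:.2f}-{high:.2f})")
```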
IN PRACTICE:
“Overall, we identified significant associations between levonorgestrel IUDs and androgenic cutaneous adverse events,” the authors wrote. “Counseling prior to initiation of levonorgestrel IUDs should include information on possible cutaneous AEs including acne, alopecia, and hirsutism to guide contraceptive shared decision making,” they added.
SOURCE:
The study was led by Lydia Cassard, Cleveland Clinic Lerner College of Medicine, Cleveland, Ohio, and was published online on November 3 in the Journal of the American Academy of Dermatology.
LIMITATIONS:
FAERS database reports could not be verified, and differences in FDA approval dates for IUDs could have influenced reporting rates. Moreover, a lack of data on prior medication use limits the ability to determine if these AEs are a result of changes in androgenic or antiandrogenic medication use. Cutaneous adverse events associated with copper IUDs may have been underreported because of assumptions that a nonhormonal device would not cause these adverse events.
DISCLOSURES:
The authors did not report any funding source or conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Does Semaglutide Increase Risk for Optic Neuropathy?
TOPLINE:
Semaglutide use was not associated with an increased risk for nonarteritic anterior ischemic optic neuropathy (NAION) in adults with type 2 diabetes, obesity, or both, according to a large multinational database analysis.
METHODOLOGY:
- Researchers conducted a retrospective cohort study using data from the TriNetX Analytics Network to investigate the potential risk for NAION associated with semaglutide use in a broader population worldwide.
- They included Caucasians aged ≥ 18 years with only type 2 diabetes (n = 37,245), only obesity (n = 138,391), or both (n = 64,989) who visited healthcare facilities three or more times.
- The participants were further grouped into those prescribed semaglutide and those using non–GLP-1 RA medications.
- Propensity score matching was performed to balance age, sex, body mass index, A1C levels, medications, and underlying comorbidities between the participants using semaglutide or non–GLP-1 RAs (a generic sketch of this technique follows this list).
- The main outcome measure was the occurrence of NAION, evaluated at 1, 2, and 3 years of follow-up.
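For readers unfamiliar with the method named above: propensity score matching generally means modeling each participant's probability of receiving the treatment from baseline covariates, then pairing treated and untreated participants with similar scores. The sketch below is a minimal greedy 1:1 nearest-neighbor version on synthetic data, offered as a generic illustration rather than the pipeline used in this study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic covariates (age, BMI, A1c) and a treatment that depends on A1c.
n = 1000
X = np.column_stack([
    rng.normal(55, 10, n),   # age
    rng.normal(30, 5, n),    # body mass index
    rng.normal(7, 1, n),     # A1c
])
treated = rng.random(n) < 1 / (1 + np.exp(-(X[:, 2] - 7)))

# Step 1: estimate each person's propensity score from the covariates.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: greedy 1:1 nearest-neighbor matching on the score, within a caliper.
caliper = 0.05
available_controls = {i for i in range(n) if not treated[i]}
pairs = []
for t in np.where(treated)[0]:
    if not available_controls:
        break
    c = min(available_controls, key=lambda k: abs(ps[k] - ps[t]))
    if abs(ps[c] - ps[t]) <= caliper:
        pairs.append((t, c))
        available_controls.remove(c)

print(f"Matched {len(pairs)} treated-control pairs out of {treated.sum()} treated")
```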
TAKEAWAY:
- The use of semaglutide vs non–GLP-1 RAs was not associated with an increased risk for NAION in people with only type 2 diabetes during the 1-year (hazard ratio [HR], 2.32; 95% CI, 0.60-8.97), 2-year (HR, 2.31; 95% CI, 0.86-6.17), and 3-year (HR, 1.51; 95% CI, 0.71-3.25) follow-up periods.
- Similarly, in the obesity-only cohort, use of semaglutide was not linked to the development of NAION across 1-year (HR, 0.41; 95% CI, 0.08-2.09), 2-year (HR, 0.67; 95% CI, 0.20-2.24), and 3-year (HR, 0.72; 95% CI, 0.24-2.17) follow-up periods.
- The patients with both diabetes and obesity also showed no significant association between use of semaglutide and the risk for NAION across each follow-up period.
- Sensitivity analysis confirmed the prescription of semaglutide was not associated with an increased risk for NAION compared with non–GLP-1 RA medications.
IN PRACTICE:
“Our large, multinational, population-based, real-world study found that semaglutide is not associated with an increased risk of NAION in the general population,” the authors of the study wrote.
SOURCE:
The study was led by Chien-Chih Chou, MD, PhD, of National Yang Ming Chiao Tung University in Taipei City, Taiwan, and was published online on November 2, 2024, in Ophthalmology.
LIMITATIONS:
The retrospective nature of the study may have limited the ability to establish causality between the use of semaglutide and the risk for NAION. The reliance on diagnosis coding for NAION may have introduced a potential misclassification of cases. Moreover, approximately half of the healthcare organizations in the TriNetX network are based in the United States, potentially limiting the diversity of the data.
DISCLOSURES:
This study was supported by a grant from Taichung Veterans General Hospital. The authors declared no potential conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Ultraprocessed Foods Linked to Faster Biological Aging
TOPLINE:
Higher consumption of ultraprocessed foods (UPFs) was associated with accelerated biological aging in a large Italian cohort, and factors other than poor nutritional content may be to blame.
METHODOLOGY:
- Previous studies have reported an association between high consumption of UPFs and some measures of early biological aging, such as shorter telomere length, cognitive decline, and frailty, but the relationship remains largely unexplored, including exactly how UPFs may harm health.
- To examine the association between UPF consumption and biological aging, researchers conducted a cross-sectional analysis of 22,495 participants (mean chronological age, 55.6 years; 52% women) from the Moli-sani Study in Italy, who were recruited between 2005 and 2010.
- Food intake was assessed with a food frequency questionnaire that covered 188 different food items, each of which was categorized into one of four groups based on the extent of processing, ranging from minimally processed foods, such as fruits, vegetables, meat and fish, to UPFs.
- UPF intake was determined by weight, using the ratio of UPFs to the total weight of food and beverages (g/d), and participants were categorized into sex-specific fifths according to the proportion of UPFs in their total food intake (a minimal sketch of this bookkeeping follows the list). Diet quality was also evaluated using the Mediterranean Diet Score.
- Biological age was computed using a deep neural network approach based on 36 circulating blood biomarkers, and the difference between mean biological and chronological ages was analyzed.
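To make the exposure definition concrete, the sketch below computes the weight share of UPFs and ranks participants into sex-specific fifths on invented records; the column names and values are assumptions for illustration, not the study's variables.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "sex": rng.choice(["F", "M"], n),
    "upf_g": rng.uniform(50, 600, n),       # grams/day of ultraprocessed items
    "total_g": rng.uniform(1200, 3000, n),  # grams/day of all food and beverages
})

# Exposure: proportion of total dietary weight that is ultraprocessed.
df["upf_share"] = df["upf_g"] / df["total_g"]

# Sex-specific fifths of UPF share, labeled 1 (lowest) through 5 (highest).
df["upf_fifth"] = (
    df.groupby("sex")["upf_share"]
      .transform(lambda s: pd.qcut(s, 5, labels=False) + 1)
)

print(df.groupby(["sex", "upf_fifth"])["upf_share"].mean())
```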
TAKEAWAY:
- The mean difference between biological and chronological ages of the participants was –0.70 years.
- Higher intake of UPFs was associated with accelerated biological aging compared with the lowest intake (regression coefficient, 0.34; 95% CI, 0.08-0.61), with a mean difference between the biological and chronological ages of −4.1 years and 1.6 years in those with the lowest and highest intakes, respectively.
- The association between UPF consumption and biological aging was nonlinear (P = .049 for nonlinearity). The association tended to be stronger in men than in women, but this was not statistically significant.
- Including the Mediterranean Diet Score in the model attenuated the association only slightly (by 9.1%), indicating that poor nutritional content likely explains only a small part of the underlying mechanism.
IN PRACTICE:
“Our results showed that the UPFs–biological aging association was weakly explained by the poor nutritional composition of these highly processed foods, suggesting that biological aging could be mainly influenced by non-nutrient food characteristics, which include altered food matrix, contact materials and neo-formed compounds,” the authors wrote.
SOURCE:
The study was led by Simona Esposito, Research Unit of Epidemiology and Prevention, IRCCS Neuromed, Isernia, Italy. It was published online in The American Journal of Clinical Nutrition.
LIMITATIONS:
The cross-sectional design of the study limited the ability to determine the temporal directionality of the association, and the observational nature of the study limited the ability to establish the causality between UPF consumption and biological aging. The use of self-reported dietary data may have introduced recall bias. The study population was limited to adults from Central-Southern Italy, which may affect the generalizability of the findings.
DISCLOSURES:
The study was developed within the “Age-It — Ageing well in an ageing society” project, funded by the Next Generation European Union National Recovery and Resilience Plan. The analyses were partially supported by the Italian Ministry of Health. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
How to Stop Bone Loss After Denosumab? No Easy Answers
Patients who discontinue the osteoporosis drug denosumab show significant losses in lumbar spine bone mineral density (BMD) within a year, even when they transition to zoledronate, according to the latest findings showing that the rapid rebound of bone loss after denosumab discontinuation is not easily prevented with other therapies, even bisphosphonates.
“When initiating denosumab for osteoporosis treatment, it is recommended to engage in thorough shared decision-making with the patient to ensure they understand the potential risks associated with discontinuing the medication,” senior author Shau-Huai Fu, MD, PhD, Department of Orthopedics, National Taiwan University Hospital Yunlin Branch, Douliu, told this news organization.
Furthermore, “integrating a case manager system is crucial to support long-term adherence and compliance,” he added.
The results are from the Denosumab Sequential Therapy prospective, open-label, parallel-group randomized clinical trial, published online in JAMA Network Open.
In the study, 101 patients were recruited between April 2019 and May 2021 at a referral center and two hospitals in Taiwan. The patients, postmenopausal women and men older than 50 years, had been treated with regular denosumab for at least 2 years and had no previous exposure to other anti-osteoporosis medication.
They were randomized either to continue denosumab at the standard dose of 60 mg twice yearly or to discontinue denosumab and receive the standard 5-mg intravenous dose of the bisphosphonate zoledronate at the time the next dose of denosumab would have been administered.
There were no differences between the two groups in serum bone turnover markers at baseline.
The current results, reflecting the first year of the 2-year study, show that, overall, those receiving zoledronate (n = 76) had a significant decrease in lumbar spine BMD compared with a slight increase in the denosumab continuation group (–0.68% vs 1.30%; P = .03).
No significant differences were observed between the groups in the study’s other measures of total hip BMD (median, 0% vs 1.12%; P = .24) and femoral neck BMD (median, 0.18% vs 0.17%; P = .71).
Additional findings from multivariable analyses supported previous reports that a longer duration of denosumab use is associated with a more substantial rebound effect: Among the 15 patients who had received denosumab for ≥ 3 years, the reduction in lumbar spine BMD was even greater with zoledronate than with denosumab continuation (–3.20% vs 1.30%; P = .003).
Though the lack of losses in the other measures of total hip and femoral neck BMD may seem encouraging, evidence from the bulk of other studies suggests cautious interpretation of those findings, Fu said.
“Although our study did not observe a noticeable decline in total hip or femoral neck BMD, other randomized controlled trials with longer durations of denosumab use have reported significant reductions in these areas,” Fu said. “Therefore, it cannot be assumed that non-lumbar spine regions are entirely safe.”
Fracture Risk Is the Overriding Concern
Meanwhile, the loss of lumbar spine BMD is of particular concern because of its role in what amounts to the broader, overriding concern of denosumab discontinuation — the risk for fracture, Fu noted.
“Real-world observations indicate that fractures caused by or associated with discontinuation of denosumab primarily occur in the spine,” he explained.
Previous research underscores the risk for fracture with denosumab discontinuation — and the greater risk with longer-term denosumab use, showing an 11.8% annual incidence of vertebral fracture after discontinuation of denosumab used for less than 2 years, increasing to 16.0% upon discontinuation after more than 2 years of treatment.
Randomized trials have shown sequential zoledronate to have some benefit in offsetting that risk, reducing first-year fracture risk by 3%-4% in some studies.
In the current study, 3 of 76 participants experienced a vertebral fracture in the first year after discontinuation, all of them women, including 2 who had received denosumab for ≥ 4 years before the medication transition.
If a transition to a bisphosphonate is anticipated, the collective findings suggest doing it as early on in denosumab treatment as possible, Fu and his colleagues noted in the study.
“When medication transition from denosumab is expected or when long-term denosumab treatment may not be suitable, earlier medication transition with potent sequential therapy should be considered,” they wrote.
Dosing Adjustments?
The findings add to the evidence that “patients who gain the most with denosumab are likely to lose the most with zoledronate,” Nelson Watts, MD, who authored an editorial accompanying the study, told this news organization.
Furthermore, “denosumab and other medications seem to do more [and faster] for BMD in the spine, so we expect more loss in the spine than in the hip,” said Watts, who is director of Mercy Health Osteoporosis and Bone Health Services, Bon Secours Mercy Health in Cincinnati, Ohio.
“Studies are needed but not yet done to see if a higher dose or more frequent zoledronate would be better for BMD than the ‘usual’ yearly dose,” Watts added.
The only published clinical recommendations on the matter are discussed in a position paper from the European Calcified Tissue Society (ECTS).
“Pending additional robust data, a pragmatic approach is to begin treatment with zoledronate 6 months after the last denosumab injection and monitor the effect with bone turnover markers, for example, 3 and 6 months after the zoledronate infusion,” they recommended.
In cases of increased bone turnover markers, including above the mean found in age- and sex-matched cohorts, “repeated infusion of zoledronate should be considered,” the society added.
If bone turnover markers are not available for monitoring the patients, “a pragmatic approach could be administrating a second infusion of zoledronate 6 months after the first infusion,” they wrote.
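Read literally, the quoted pragmatic approach reduces to simple calendar arithmetic from the last denosumab injection. The sketch below lays that schedule out as dates; it is only an illustration of the timing described above, not clinical guidance, and the function name is ours.

```python
from datetime import date
from dateutil.relativedelta import relativedelta

def ects_pragmatic_schedule(last_denosumab: date) -> dict[str, date]:
    """Timing implied by the ECTS position paper: zoledronate 6 months after
    the last denosumab injection, then bone turnover markers (BTMs) checked
    3 and 6 months after the infusion."""
    infusion = last_denosumab + relativedelta(months=6)
    return {
        "zoledronate_infusion": infusion,
        "btm_check_1": infusion + relativedelta(months=3),
        "btm_check_2": infusion + relativedelta(months=6),
    }

for step, when in ects_pragmatic_schedule(date(2025, 1, 15)).items():
    print(step, when.isoformat())
```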
Clinicians Need to Be Proactive From the Start
Bente Langdahl, MD, of the Medical Department of Endocrinology, Aarhus University Hospital in Denmark, who was a coauthor on the ECTS position statement, told this news organization that clinicians should also be proactive on the other side of treatment — before it begins — to prevent problems with discontinuation.
“I think denosumab is a very good treatment for some patients with high fracture risk and very low BMD, but both patients and clinicians should know that this treatment is either lifelong or there needs to be a plan for discontinuation,” Langdahl said.
Langdahl noted that denosumab is coming off patent soon; hence, issues with cost could become more manageable.
But until then, “I think [cost] should be considered before starting treatment because if patients cannot afford denosumab, they should have been started on zoledronate from the beginning.”
Discontinuation Reasons Vary
Research indicates that, broadly, adherence to denosumab ranges from about 45% to 72% at 2 years, with some reasons for discontinuation including the need for dental treatment or cost, Fu and colleagues reported.
Fu added, however, that other reasons for discontinuing denosumab “are not due to ‘need’ but rather factors such as relocating, missing follow-up appointments, or poor adherence.”
Lorenz Hofbauer, MD, head of the Division of Endocrinology, Diabetes, and Bone Diseases, Department of Medicine III at the Technical University Medical Center in Dresden, Germany, noted that another issue contributing to some patients’ hesitation about remaining on, or even initiating, denosumab is the known risk for osteonecrosis of the jaw (ONJ).
Though ONJ is reported to be rare, research continues to stir concern about its occurrence with denosumab use, including one recent study of patients with breast cancer showing that those treated with denosumab had a fivefold higher risk for ONJ vs those on bisphosphonates.
“About 20% of my patients have ONJ concerns or other questions, which may delay treatment with denosumab or other therapies,” Hofbauer told this news organization.
“There is a high need to discuss risk versus benefits toward a shared decision-making,” he said.
Hofbauer noted, however, that adherence to denosumab at his center is fairly high, at 90%, which he largely credits to an electronically supported recall system in place at the center.
Denosumab maker Amgen also offers patient reminders via email, text, or phone through its Bone Matters patient support system, which also provides access to a call center for questions or to update treatment appointment information.
In terms of the ongoing question of how to best prevent fracture risk when patients do wind up discontinuing denosumab, Watts concluded in his editorial that more robust studies are needed.
“The dilemma is what to do with longer-term users who stop, and the real question is not what happens to BMD, but what happens to fracture risk,” he wrote.
“It is unlikely that the fracture risk question can be answered due to ethical limitations, but finding the best option, [whether it is] oral or intravenous bisphosphonate, timing, dose, and frequency, to minimize bone loss and the rebound increase in bone resorption after stopping long-term denosumab requires larger and longer studies of better design.”
The authors had no disclosures to report. Watts has been an investigator, consultant, and speaker for Amgen outside of the published editorial. Hofbauer is on advisory boards for Alexion Pharmaceuticals, Amolyt Pharma, Amgen, and UCB. Langdahl has been a primary investigator on previous and ongoing clinical trials involving denosumab.
A version of this article appeared on Medscape.com.
Patients who discontinue treatment with the osteoporosis drug denosumab, despite transitioning to zoledronate, show significant losses in lumbar spine bone mineral density (BMD) within a year, according to the latest findings to show that the rapid rebound of bone loss after denosumab discontinuation is not easily prevented with other therapies — even bisphosphonates.
“When initiating denosumab for osteoporosis treatment, it is recommended to engage in thorough shared decision-making with the patient to ensure they understand the potential risks associated with discontinuing the medication,” senior author Shau-Huai Fu, MD, PhD, Department of Orthopedics, National Taiwan University Hospital Yunlin Branch, Douliu, told this news organization.
Furthermore, “integrating a case manager system is crucial to support long-term adherence and compliance,” he added.
The results are from the Denosumab Sequential Therapy prospective, open-label, parallel-group randomized clinical trial, published online in JAMA Network Open.
In the study, 101 patients were recruited between April 2019 and May 2021 at a referral center and two hospitals in Taiwan. The patients, including postmenopausal women and men over the age of 50, had been treated with regular denosumab for at least 2 years and had no previous exposure to other anti-osteoporosis medication.
They were randomized to treatment either with continuous denosumab at the standard dose of 60 mg twice yearly or to discontinue denosumab and receive the standard intravenous dose of the bisphosphonate zoledronate at 5 mg at the time when the next dose of denosumab would have been administered.
There were no differences between the two groups in serum bone turnover markers at baseline.
The current results, reflecting the first year of the 2-year study, show that, overall, those receiving zoledronate (n = 76), had a significant decrease in lumbar spine BMD, compared with a slight increase in the denosumab continuation group (–0.68% vs 1.30%, respectively; P = .03).
No significant differences were observed between the groups in terms of the study’s other measures of total hip BMD (median, 0% vs 1.12%; P = .24), and femoral neck BMD (median, 0.18% vs 0.17%; P = .71).
Additional findings from multivariable analyses in the study also supported results from previous studies showing that a longer duration of denosumab use is associated with a more substantial rebound effect: Among 15 of the denosumab users in the study who had ≥ 3 prior years of the drug, the reduction in lumbar spine BMD was even greater with zoledronate compared with denosumab continuation (–3.20% vs 1.30%; P = .003).
Though the lack of losses in the other measures of total hip and femoral neck BMD may seem encouraging, evidence from the bulk of other studies suggests cautious interpretation of those findings, Fu said.
“Although our study did not observe a noticeable decline in total hip or femoral neck BMD, other randomized controlled trials with longer durations of denosumab use have reported significant reductions in these areas,” Fu said. “Therefore, it cannot be assumed that non-lumbar spine regions are entirely safe.”
Fracture Risk Is the Overriding Concern
Meanwhile, the loss of lumbar spine BMD is of particular concern because of its role in what amounts to the broader, overriding concern of denosumab discontinuation — the risk for fracture, Fu noted.
“Real-world observations indicate that fractures caused by or associated with discontinuation of denosumab primarily occur in the spine,” he explained.
Previous research underscores the risk for fracture with denosumab discontinuation — and the greater risk with longer-term denosumab use, showing an 11.8% annual incidence of vertebral fracture after discontinuation of denosumab used for less than 2 years, increasing to 16.0% upon discontinuation after more than 2 years of treatment.
Randomized trials have shown sequential zoledronate to have some benefit in offsetting that risk, reducing first-year fracture risk by 3%-4% in some studies.
In the current study, 3 of 76 participants experienced a vertebral fracture in the first year of discontinuation, all involving women, including 2 who had been receiving denosumab for ≥ 4 years before medication transition.
If a transition to a bisphosphonate is anticipated, the collective findings suggest doing it as early on in denosumab treatment as possible, Fu and his colleagues noted in the study.
“When medication transition from denosumab is expected or when long-term denosumab treatment may not be suitable, earlier medication transition with potent sequential therapy should be considered,” they wrote.
Dosing Adjustments?
The findings add to the evidence that “patients who gain the most with denosumab are likely to lose the most with zoledronate,” Nelson Watts, MD, who authored an editorial accompanying the study, told this news organization.
Furthermore, “denosumab and other medications seem to do more [and faster] for BMD in the spine, so we expect more loss in the spine than in the hip,” said Watts, who is director of Mercy Health Osteoporosis and Bone Health Services, Bon Secours Mercy Health in Cincinnati, Ohio.
“Studies are needed but not yet done to see if a higher dose or more frequent zoledronate would be better for BMD than the ‘usual’ yearly dose,” Watts added.
The only published clinical recommendations on the matter are discussed in a position paper from the European Calcified Tissue Society (ECTS).
“Pending additional robust data, a pragmatic approach is to begin treatment with zoledronate 6 months after the last denosumab injection and monitor the effect with bone turnover markers, for example, 3 and 6 months after the zoledronate infusion,” they recommended.
In cases of increased bone turnover markers, including above the mean found in age- and sex-matched cohorts, “repeated infusion of zoledronate should be considered,” the society added.
If bone turnover markers are not available for monitoring the patients, “a pragmatic approach could be administrating a second infusion of zoledronate 6 months after the first infusion,” they wrote.
Clinicians Need to Be Proactive From the Start
Bente Langdahl, MD, of the Medical Department of Endocrinology, Aarhus University Hospital in Denmark, who was a coauthor on the ECTS position statement, told this news organization that clinicians should also be proactive on the other side of treatment — before it begins — to prevent problems with discontinuation.
“I think denosumab is a very good treatment for some patients with high fracture risk and very low BMD, but both patients and clinicians should know that this treatment is either lifelong or there needs to be a plan for discontinuation,” Langdahl said.
Langdahl noted that denosumab is coming off patent soon; hence, issues with cost could become more manageable.
But until then, “I think [cost] should be considered before starting treatment because if patients cannot afford denosumab, they should have been started on zoledronate from the beginning.”
Discontinuation Reasons Vary
Research indicates that, broadly, adherence to denosumab ranges from about 45% to 72% at 2 years, with some reasons for discontinuation including the need for dental treatment or cost, Fu and colleagues reported.
Fu added, however, that other reasons for discontinuing denosumab “are not due to ‘need’ but rather factors such as relocating, missing follow-up appointments, or poor adherence.”
Lorenz Hofbauer, MD, who is head of the Division of Endocrinology, Diabetes, and Bone Diseases, Department of Medicine III at the Technical University Medical Center in Dresden, Germany, noted that another issue contributing to some hesitation by patients about remaining on, or even initiating denosumab, is the known risk for osteonecrosis of the jaw (ONJ).
Though reported as being rare, research continuing to stir concern for ONJ with denosumab use includes one recent study of patients with breast cancer showing those treated with denosumab had a fivefold higher risk for ONJ vs those on bisphosphonates.
“About 20% of my patients have ONJ concerns or other questions, which may delay treatment with denosumab or other therapies,” Hofbauer told this news organization.
“There is a high need to discuss risk versus benefits toward a shared decision-making,” he said.
Conversely, however, Hofbauer noted that adherence to denosumab at his center is fairly high — at 90%, which he says is largely credited to an electronically supported recall system in place at the center.
Denosumab maker Amgen also offers patient reminders via email, text, or phone through its Bone Matters patient support system, which also provides access to a call center for questions or to update treatment appointment information.
In terms of the ongoing question of how to best prevent fracture risk when patients do wind up discontinuing denosumab, Watts concluded in his editorial that more robust studies are needed.
“The dilemma is what to do with longer-term users who stop, and the real question is not what happens to BMD, but what happens to fracture risk,” he wrote.
“It is unlikely that the fracture risk question can be answered due to ethical limitations, but finding the best option, [whether it is] oral or intravenous bisphosphonate, timing, dose, and frequency, to minimize bone loss and the rebound increase in bone resorption after stopping long-term denosumab requires larger and longer studies of better design.”
The authors had no disclosures to report. Watts has been an investigator, consultant, and speaker for Amgen outside of the published editorial. Hofbauer is on advisory boards for Alexion Pharmaceuticals, Amolyt Pharma, Amgen, and UCB. Langdahl has been a primary investigator on previous and ongoing clinical trials involving denosumab.
A version of this article appeared on Medscape.com.
Patients who discontinue treatment with the osteoporosis drug denosumab, despite transitioning to zoledronate, show significant losses in lumbar spine bone mineral density (BMD) within a year, according to the latest findings to show that the rapid rebound of bone loss after denosumab discontinuation is not easily prevented with other therapies — even bisphosphonates.
“When initiating denosumab for osteoporosis treatment, it is recommended to engage in thorough shared decision-making with the patient to ensure they understand the potential risks associated with discontinuing the medication,” senior author Shau-Huai Fu, MD, PhD, Department of Orthopedics, National Taiwan University Hospital Yunlin Branch, Douliu, told this news organization.
Furthermore, “integrating a case manager system is crucial to support long-term adherence and compliance,” he added.
The results are from the Denosumab Sequential Therapy prospective, open-label, parallel-group randomized clinical trial, published online in JAMA Network Open.
In the study, 101 patients were recruited between April 2019 and May 2021 at a referral center and two hospitals in Taiwan. The patients, including postmenopausal women and men over the age of 50, had been treated with regular denosumab for at least 2 years and had no previous exposure to other anti-osteoporosis medication.
They were randomized to treatment either with continuous denosumab at the standard dose of 60 mg twice yearly or to discontinue denosumab and receive the standard intravenous dose of the bisphosphonate zoledronate at 5 mg at the time when the next dose of denosumab would have been administered.
There were no differences between the two groups in serum bone turnover markers at baseline.
The current results, reflecting the first year of the 2-year study, show that, overall, those receiving zoledronate (n = 76), had a significant decrease in lumbar spine BMD, compared with a slight increase in the denosumab continuation group (–0.68% vs 1.30%, respectively; P = .03).
No significant differences were observed between the groups in terms of the study’s other measures of total hip BMD (median, 0% vs 1.12%; P = .24), and femoral neck BMD (median, 0.18% vs 0.17%; P = .71).
Additional findings from multivariable analyses in the study also supported results from previous studies showing that a longer duration of denosumab use is associated with a more substantial rebound effect: Among 15 of the denosumab users in the study who had ≥ 3 prior years of the drug, the reduction in lumbar spine BMD was even greater with zoledronate compared with denosumab continuation (–3.20% vs 1.30%; P = .003).
Though the lack of losses in the other measures of total hip and femoral neck BMD may seem encouraging, evidence from the bulk of other studies suggests cautious interpretation of those findings, Fu said.
“Although our study did not observe a noticeable decline in total hip or femoral neck BMD, other randomized controlled trials with longer durations of denosumab use have reported significant reductions in these areas,” Fu said. “Therefore, it cannot be assumed that non-lumbar spine regions are entirely safe.”
Fracture Risk Is the Overriding Concern
Meanwhile, the loss of lumbar spine BMD is of particular concern because of its role in what amounts to the broader, overriding concern of denosumab discontinuation — the risk for fracture, Fu noted.
“Real-world observations indicate that fractures caused by or associated with discontinuation of denosumab primarily occur in the spine,” he explained.
Previous research underscores the risk for fracture with denosumab discontinuation — and the greater risk with longer-term denosumab use, showing an 11.8% annual incidence of vertebral fracture after discontinuation of denosumab used for less than 2 years, increasing to 16.0% upon discontinuation after more than 2 years of treatment.
Randomized trials have shown sequential zoledronate to have some benefit in offsetting that risk, reducing first-year fracture risk by 3%-4% in some studies.
In the current study, 3 of 76 participants experienced a vertebral fracture in the first year of discontinuation, all involving women, including 2 who had been receiving denosumab for ≥ 4 years before medication transition.
If a transition to a bisphosphonate is anticipated, the collective findings suggest doing it as early on in denosumab treatment as possible, Fu and his colleagues noted in the study.
“When medication transition from denosumab is expected or when long-term denosumab treatment may not be suitable, earlier medication transition with potent sequential therapy should be considered,” they wrote.
Dosing Adjustments?
The findings add to the evidence that “patients who gain the most with denosumab are likely to lose the most with zoledronate,” Nelson Watts, MD, who authored an editorial accompanying the study, told this news organization.
Furthermore, “denosumab and other medications seem to do more [and faster] for BMD in the spine, so we expect more loss in the spine than in the hip,” said Watts, who is director of Mercy Health Osteoporosis and Bone Health Services, Bon Secours Mercy Health in Cincinnati, Ohio.
“Studies are needed but not yet done to see if a higher dose or more frequent zoledronate would be better for BMD than the ‘usual’ yearly dose,” Watts added.
The only published clinical recommendations on the matter are discussed in a position paper from the European Calcified Tissue Society (ECTS).
“Pending additional robust data, a pragmatic approach is to begin treatment with zoledronate 6 months after the last denosumab injection and monitor the effect with bone turnover markers, for example, 3 and 6 months after the zoledronate infusion,” they recommended.
In cases of increased bone turnover markers, including above the mean found in age- and sex-matched cohorts, “repeated infusion of zoledronate should be considered,” the society added.
If bone turnover markers are not available for monitoring the patients, “a pragmatic approach could be administrating a second infusion of zoledronate 6 months after the first infusion,” they wrote.
Clinicians Need to Be Proactive From the Start
Bente Langdahl, MD, of the Medical Department of Endocrinology, Aarhus University Hospital in Denmark, a coauthor on the ECTS position statement, told this news organization that clinicians should also be proactive before treatment even begins to head off problems with discontinuation.
“I think denosumab is a very good treatment for some patients with high fracture risk and very low BMD, but both patients and clinicians should know that this treatment is either lifelong or there needs to be a plan for discontinuation,” Langdahl said.
Langdahl noted that denosumab is coming off patent soon, so cost issues could become more manageable.
But until then, “I think [cost] should be considered before starting treatment because if patients cannot afford denosumab, they should have been started on zoledronate from the beginning.”
Discontinuation Reasons Vary
Research indicates that adherence to denosumab broadly ranges from about 45% to 72% at 2 years, with reasons for discontinuation including the need for dental treatment and cost, Fu and colleagues reported.
Fu added, however, that other reasons for discontinuing denosumab “are not due to ‘need’ but rather factors such as relocating, missing follow-up appointments, or poor adherence.”
Lorenz Hofbauer, MD, head of the Division of Endocrinology, Diabetes, and Bone Diseases, Department of Medicine III at the Technical University Medical Center in Dresden, Germany, noted that another issue fueling some patients’ hesitation about remaining on, or even initiating, denosumab is the known risk for osteonecrosis of the jaw (ONJ).
Though ONJ is reported to be rare, research continues to stir concern: one recent study of patients with breast cancer showed a fivefold higher risk for ONJ among those treated with denosumab vs those on bisphosphonates.
“About 20% of my patients have ONJ concerns or other questions, which may delay treatment with denosumab or other therapies,” Hofbauer told this news organization.
“There is a high need to discuss risk versus benefits toward a shared decision-making,” he said.
Nevertheless, Hofbauer noted that adherence to denosumab at his center is fairly high, at 90%, which he largely credits to the center’s electronically supported recall system.
Denosumab maker Amgen also offers patient reminders via email, text, or phone through its Bone Matters patient support system, which also provides access to a call center for questions or to update treatment appointment information.
On the ongoing question of how best to reduce fracture risk when patients do discontinue denosumab, Watts concluded in his editorial that more robust studies are needed.
“The dilemma is what to do with longer-term users who stop, and the real question is not what happens to BMD, but what happens to fracture risk,” he wrote.
“It is unlikely that the fracture risk question can be answered due to ethical limitations, but finding the best option, [whether it is] oral or intravenous bisphosphonate, timing, dose, and frequency, to minimize bone loss and the rebound increase in bone resorption after stopping long-term denosumab requires larger and longer studies of better design.”
The authors had no disclosures to report. Watts has been an investigator, consultant, and speaker for Amgen outside of the published editorial. Hofbauer is on advisory boards for Alexion Pharmaceuticals, Amolyt Pharma, Amgen, and UCB. Langdahl has been a primary investigator on previous and ongoing clinical trials involving denosumab.
A version of this article appeared on Medscape.com.
FROM JAMA NETWORK OPEN