Oxidative Stress Marker May Signal Fracture Risk in T2D
TOPLINE:
Elevated levels of plasma F2-isoprostanes, a reliable marker of oxidative stress, are associated with an increased risk for fractures in older ambulatory patients with type 2 diabetes (T2D) independently of bone density.
METHODOLOGY:
- Patients with T2D face an increased risk for fractures at any given bone mineral density; oxidative stress levels (reflected in circulating F2-isoprostanes), which are elevated in T2D, are associated with other T2D complications, and may weaken bone integrity.
- Researchers analyzed data from an observational cohort study to investigate the association between the levels of circulating F2-isoprostanes and the risk for clinical fractures in older patients with T2D.
- The data included 703 older ambulatory adults (baseline age, 70-79 years; about half White and half Black individuals; about half men and half women) from the Health, Aging and Body Composition Study, of whom 132 had T2D.
- Plasma F2-isoprostane levels were measured using baseline serum samples; bone turnover markers were also measured, including procollagen type 1 N-terminal propeptide, osteocalcin, and C-terminal telopeptide of type 1 collagen.
- Incident clinical fractures were tracked over a follow-up period of up to 17.3 years, with fractures verified through radiology reports.
TAKEAWAY:
- Overall, 25.8% of patients in the T2D group and 23.5% of adults in the non-diabetes group reported an incident clinical fracture during a mean follow-up of 6.2 and 8.0 years, respectively.
- In patients with T2D, the risk for incident clinical fracture increased by 93% for every standard deviation increase in the log F2-isoprostane serum levels (hazard ratio [HR], 1.93; 95% CI, 1.26-2.95; P = .002) independently of baseline bone density, medication use, and other risk factors, with no such association reported in individuals without T2D (HR, 0.98; 95% CI, 0.81-1.18; P = .79).
- In the T2D group, elevated plasma F2-isoprostane levels were also associated with a decrease in total hip bone mineral density over 4 years (r = −0.28; P = .008), but not in the non-diabetes group.
- No correlation was found between plasma F2-isoprostane levels and circulating advanced glycoxidation end-products, bone turnover markers, or A1c levels in either group.
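To help interpret the headline estimate above: a hazard ratio (HR) of 1.93 per standard deviation corresponds to a 93% higher fracture hazard per 1-SD increase, and under the log-linear Cox model such an HR implies, the effect compounds multiplicatively across SDs. A minimal arithmetic sketch (the HR is taken from the study; the compounding step is the standard model assumption, not a reported result):

```python
# HR per 1-SD increase in log F2-isoprostane levels, as reported in the study
hr_per_sd = 1.93

# Percent increase in fracture hazard per 1-SD increase
pct_increase = (hr_per_sd - 1) * 100  # 93%

# Under a log-linear Cox model, a 2-SD increase compounds multiplicatively
hr_per_2sd = hr_per_sd ** 2  # roughly 3.72
```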
IN PRACTICE:
“Oxidative stress in T2D may play an important role in the decline of bone quality and not just bone quantity,” the authors wrote.
SOURCE:
This study was led by Bowen Wang, PhD, Rensselaer Polytechnic Institute, Troy, New York. It was published online in The Journal of Clinical Endocrinology & Metabolism.
LIMITATIONS:
This study was conducted in a well-functioning elderly population with only White and Black participants, which may limit the generalizability of the findings to other age groups or less healthy populations. Additionally, the study did not assess prevalent vertebral fracture risk due to the small sample size.
DISCLOSURES:
This study was supported by the US National Institute on Aging and the Intramural Research Program of the US National Institutes of Health and the Dr and Ms Sands and Sands Family for Orthopaedic Research. The authors reported no relevant conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Vaping Linked to Higher Risk of Blurred Vision & Eye Pain
TOPLINE:
Adults who used electronic cigarettes (e-cigarettes/vapes) had more than twice the risk for developing uveitis compared with nonusers, with elevated risks persisting for up to 4 years after initial use. This increased risk was observed across all age groups and affected both men and women as well as various ethnic groups.
METHODOLOGY:
- Researchers used the TriNetX global database, which contains data from over 100 million patients across the United States, Europe, the Middle East, and Africa, to examine the risk for developing uveitis among e-cigarette users.
- A total of 419,325 e-cigarette users older than 18 years (mean age, 51.41 years; 48.65% women) were included, based on diagnosis codes for vaping and unspecified nicotine dependence.
- The e-cigarette users were propensity score–matched to non-e-cigarette users.
- People were excluded if they had comorbid conditions that might have influenced the risk for uveitis.
- The primary outcome measure was the first-time encounter diagnosis of uveitis using diagnosis codes for iridocyclitis, unspecified choroidal inflammation, posterior cyclitis, choroidal degeneration, retinal vasculitis, and pan-uveitis.
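Propensity score matching, mentioned above, pairs each exposed patient with an unexposed patient who has a similar estimated probability of exposure, so the groups are comparable on measured covariates. A minimal sketch of 1:1 greedy nearest-neighbor matching on precomputed scores (the scores, the 0.05 caliper, and the function name are illustrative; the study matched on TriNetX covariates such as age, sex, and comorbidities):

```python
def greedy_match(treated, controls, caliper=0.05):
    """Pair each treated score with the nearest unused control score,
    discarding pairs farther apart than the caliper."""
    available = list(controls)
    pairs = []
    for t in treated:
        best = min(available, key=lambda c: abs(c - t), default=None)
        if best is not None and abs(best - t) <= caliper:
            pairs.append((t, best))
            available.remove(best)  # each control is used at most once
    return pairs

# Hypothetical propensity scores for two exposed and three unexposed patients
pairs = greedy_match([0.30, 0.52], [0.29, 0.55, 0.90])  # [(0.30, 0.29), (0.52, 0.55)]
```

The unmatched control (score 0.90) is dropped, which is how matching trades sample size for covariate balance.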
TAKEAWAY:
- E-cigarette users had a significantly higher risk for developing uveitis than nonusers (hazard ratio [HR], 2.53; 95% CI, 2.33-2.76), including for iridocyclitis (HR, 2.59), unspecified chorioretinal inflammation (HR, 2.34), and retinal vasculitis (HR, 1.95).
- This increased risk for uveitis was observed across all age groups, affecting all genders and patients from Asian, Black or African American, and White ethnic backgrounds.
- The risk for uveitis increased as early as 7 days after e-cigarette use (HR, 6.35) and was still present 4 years after initial use (HR, 2.58).
- A higher risk for uveitis was observed among individuals with a history of both e-cigarette and traditional cigarette use than among those who used traditional cigarettes only (HR, 1.39).
IN PRACTICE:
“This study has real-world implications as clinicians caring for patients with e-cigarette history should be aware of the potentially increased risk of new-onset uveitis,” the authors wrote.
SOURCE:
The study was led by Alan Y. Hsu, MD, from the Department of Ophthalmology at China Medical University Hospital in Taichung, Taiwan, and was published online on November 12, 2024, in Ophthalmology.
LIMITATIONS:
The retrospective nature of the study limited the determination of direct causality between e-cigarette use and the risk for uveitis. The study lacked information on the duration and quantity of e-cigarette exposure, which may have impacted the findings. Moreover, researchers could not isolate the effect of secondhand exposure to vaping or traditional cigarettes.
DISCLOSURES:
Study authors reported no relevant financial disclosures.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Stages I-III Screen-Detected CRC Boosts Disease-Free Survival Rates
TOPLINE:
METHODOLOGY:
- Patients with screen-detected CRC have better stage-specific overall survival rates than those with non-screen–detected CRC, but the impact of screening on recurrence rates is unknown.
- A retrospective study analyzed patients with CRC (age, 55-75 years) from the Netherlands Cancer Registry, comparing cancers detected through screening with those diagnosed outside it.
- Screen-detected CRCs were identified in patients who underwent colonoscopy after a positive fecal immunochemical test (FIT), whereas non-screen–detected CRCs were detected in symptomatic patients.
TAKEAWAY:
- Researchers included 3725 patients with CRC (39.6% women), of whom 1652 (44.3%) had screen-detected and 2073 (55.7%) had non-screen–detected CRC; cases were distributed approximately evenly across stages I-III (35.3%, 27.1%, and 37.6%, respectively).
- Patients with screen-detected CRC had a significantly higher 3-year disease-free survival rate than those with non-screen–detected CRC (87.8% vs 77.2%; P < .001).
- The improvement in disease-free survival rates for screen-detected CRC was particularly notable in stage III cases, with rates of 77.9% vs 66.7% for non-screen–detected CRC (P < .001).
- Screen-detected CRC was more often detected at an earlier stage than non-screen–detected CRC (stage I or II: 72.4% vs 54.4%; P < .001).
- Across all stages, detection of CRC by screening was associated with a 33% lower risk for recurrence (P < .001) independent of patient age, gender, tumor location, stage, and treatment.
- Recurrence was the strongest predictor of overall survival across the study population (hazard ratio, 15.90; P < .001).
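Two of the figures above can be connected with simple arithmetic: a "33% lower risk for recurrence" corresponds to an implied hazard ratio of roughly 0.67 (1 minus the relative reduction), and the 3-year disease-free survival gap is 10.6 percentage points. A small sketch (the implied HR is a back-of-envelope approximation; the paper reports the adjusted estimate directly):

```python
# "33% lower risk for recurrence" expressed as an implied hazard ratio
pct_lower = 33
implied_hr = 1 - pct_lower / 100  # approximately 0.67

# Absolute gap in 3-year disease-free survival (percentage points)
dfs_screen_detected = 87.8
dfs_non_screen = 77.2
abs_gap = round(dfs_screen_detected - dfs_non_screen, 1)  # 10.6
```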
IN PRACTICE:
“Apart from CRC stage, mode of detection could be used to assess an individual’s risk for recurrence and survival, which may contribute to a more personalized treatment,” the authors wrote.
SOURCE:
The study, led by Sanne J.K.F. Pluimers, Department of Gastroenterology and Hepatology, Erasmus University Medical Center/Erasmus MC Cancer Institute, Rotterdam, the Netherlands, was published online in Clinical Gastroenterology and Hepatology.
LIMITATIONS:
The follow-up time was relatively short, restricting the ability to evaluate the long-term effects of screening on CRC recurrence. This study focused on recurrence solely within the FIT-based screening program, and the results were not generalizable to other screening methods. Due to Dutch privacy law, data on CRC-specific causes of death were unavailable, which may have affected the specificity of survival outcomes.
DISCLOSURES:
There was no funding source for this study. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Does Semaglutide Increase Risk for Optic Neuropathy?
TOPLINE:
METHODOLOGY:
- Researchers conducted a retrospective cohort study using data from the TriNetX Analytics Network to investigate the potential risk for nonarteritic anterior ischemic optic neuropathy (NAION) associated with semaglutide use in a broad population worldwide.
- They included Caucasian individuals aged ≥ 18 years with only type 2 diabetes (n = 37,245), only obesity (n = 138,391), or both (n = 64,989) who had visited healthcare facilities three or more times.
- The participants were further grouped into those prescribed semaglutide and those using non–glucagon-like peptide 1 receptor agonist (non–GLP-1 RA) medications.
- Propensity score matching was performed to balance age, sex, body mass index, A1C levels, medications, and underlying comorbidities between the participants using semaglutide or non–GLP-1 RAs.
- The main outcome measure was the occurrence of NAION, evaluated at 1, 2, and 3 years of follow-up.
TAKEAWAY:
- The use of semaglutide vs non–GLP-1 RAs was not associated with an increased risk for NAION in people with only type 2 diabetes during the 1-year (hazard ratio [HR], 2.32; 95% CI, 0.60-8.97), 2-year (HR, 2.31; 95% CI, 0.86-6.17), and 3-year (HR, 1.51; 95% CI, 0.71-3.25) follow-up periods.
- Similarly, in the obesity-only cohort, use of semaglutide was not linked to the development of NAION across 1-year (HR, 0.41; 95% CI, 0.08-2.09), 2-year (HR, 0.67; 95% CI, 0.20-2.24), and 3-year (HR, 0.72; 95% CI, 0.24-2.17) follow-up periods.
- The patients with both diabetes and obesity also showed no significant association between use of semaglutide and the risk for NAION across each follow-up period.
- Sensitivity analysis confirmed the prescription of semaglutide was not associated with an increased risk for NAION compared with non–GLP-1 RA medications.
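A quick way to see why every comparison above is read as "no increased risk": a 95% confidence interval for a hazard ratio that includes 1.0 (the null value) is not statistically significant. A small sketch checking this for the reported intervals (the dictionary keys are shorthand labels, not study terminology):

```python
def ci_includes_null(lo, hi, null=1.0):
    """True when the confidence interval contains the null hazard ratio."""
    return lo <= null <= hi

# 95% CIs reported in the study (lower bound, upper bound)
results = {
    "T2D, 1 y": (0.60, 8.97),
    "T2D, 2 y": (0.86, 6.17),
    "T2D, 3 y": (0.71, 3.25),
    "obesity, 1 y": (0.08, 2.09),
    "obesity, 2 y": (0.20, 2.24),
    "obesity, 3 y": (0.24, 2.17),
}
all_nonsignificant = all(ci_includes_null(lo, hi) for lo, hi in results.values())
```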
IN PRACTICE:
“Our large, multinational, population-based, real-world study found that semaglutide is not associated with an increased risk of NAION in the general population,” the authors of the study wrote.
SOURCE:
The study was led by Chien-Chih Chou, MD, PhD, of National Yang Ming Chiao Tung University, in Taipei City, Taiwan, and was published online on November 2, 2024, in Ophthalmology.
LIMITATIONS:
The retrospective nature of the study may have limited the ability to establish causality between the use of semaglutide and the risk for NAION. The reliance on diagnosis coding for NAION may have introduced a potential misclassification of cases. Moreover, approximately half of the healthcare organizations in the TriNetX network are based in the United States, potentially limiting the diversity of the data.
DISCLOSURES:
This study was supported by a grant from Taichung Veterans General Hospital. The authors declared no potential conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Home Spirometry Has Potential for Detecting Pulmonary Decline in Systemic Sclerosis
TOPLINE:
Home spirometry shows potential for early detection of pulmonary function decline in patients with systemic sclerosis–associated interstitial lung disease (SSc-ILD). It shows good cross-sectional correlation with hospital tests, along with 60% sensitivity and 87% specificity for detecting progressive ILD.
METHODOLOGY:
- Researchers conducted a prospective, observational study to examine the validity of home spirometry for detecting a decline in pulmonary function in patients with SSc-ILD.
- They included 43 patients aged 18 years or older with SSc-ILD from two tertiary referral centers in the Netherlands who received treatment with immunosuppressives for a maximum duration of 8 weeks prior to baseline.
- All participants were required to take weekly home spirometry measurements using a handheld spirometer for 1 year, with 35 completing 6 months of follow-up and 31 completing 12 months.
- Pulmonary function tests were conducted in the hospital at baseline and semiannual visits.
- The primary outcome was the κ (kappa statistic) agreement between home and hospital measurements after 1 year to detect a decline in forced vital capacity (FVC) of 5% or more; the sensitivity and specificity of home spirometry were also evaluated to detect an absolute decline in FVC%, using hospital tests as the gold standard.
TAKEAWAY:
- Home spirometry showed a fair agreement with the pulmonary function tests conducted at the hospital (κ, 0.40; 95% CI, 0.01-0.79).
- Home spirometry showed a sensitivity of 60% and specificity of 87% in detecting a decline in FVC% predicted of 5% or more.
- The intraclass correlation coefficient between home and hospital FVC measurements was moderate to high, with values of 0.85 at baseline, 0.84 at 6 months, and 0.72 at 12 months (P < .0001 for all).
- However, the longitudinal agreement between home and hospital measurements was lower, with a correlation coefficient of 0.55.
IN PRACTICE:
“These findings suggest that home spirometry is both feasible and moderately accurate in patients with systemic sclerosis–associated ILD. However, where home spirometry fell short was the low sensitivity in detecting a decline in FVC% predicted,” experts wrote in an accompanying editorial.
“The results of this study support further evaluation of the implementation of home spirometry in addition to regular healthcare management but do not endorse relying solely on home monitoring to detect a decline in pulmonary function,” study authors wrote.
SOURCE:
The study was led by Arthiha Velauthapillai, MD, Department of Rheumatology, Radboud University Medical Center, Nijmegen, the Netherlands, and was published online November 8, 2024, in The Lancet Rheumatology.
LIMITATIONS:
The study might have been underpowered because of inaccuracies in initial assumptions, with a lower-than-anticipated prevalence of progressive ILD and a higher dropout rate. The study included only Dutch patients, which may have limited the generalizability of its findings to other settings with lower internet access or literacy rates.
DISCLOSURES:
This study was partly supported by grants from Galapagos and Boehringer Ingelheim. Some authors received grants or consulting or speaker fees from Boehringer Ingelheim, AstraZeneca, and other pharmaceutical companies.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
New Strategy Led to Modest Decline in Antibiotic Misuse
TOPLINE:
A multifaceted educational intervention modestly reduced unnecessary antibiotic prescribing and dispensing for common infections, particularly in general practice.
METHODOLOGY:
- Researchers conducted this study to assess the impact of an intervention on antibiotic prescribing and dispensing for common infections.
- Healthcare professionals from general practice, out-of-hours services, nursing homes, and community pharmacies in France, Greece, Lithuania, Poland, and Spain registered their interactions with patients related to antibiotic prescribing and dispensing both prior to and following the intervention.
- Overall, 407 healthcare professionals participated in the first registration, of whom 345 undertook the intervention and participated in the second registration; they documented 10,744 infections during the initial registration and 10,132 cases during the second period.
- The 5-hour intervention included evaluating and discussing feedback on the outcomes of the initial registration, improving communication skills, and offering communication tools.
- The impact of this intervention was calculated from potential unnecessary antibiotic prescriptions, non–first-line antibiotic choices, and the percentage of correct and incorrect safety advice given with each prescription.
TAKEAWAY:
- General practice clinicians showed a significant overall reduction in unnecessary antibiotic prescriptions from 72.2% during the first registration to 65.2% after the intervention (P < .001), with variations across countries ranging from a 19.9% reduction in Lithuania to a 1.3% increase in Greece.
- Out-of-hours services showed a minimal change in unnecessary antibiotic prescribing from 52.5% to 52.1%, whereas nursing homes showed a slight increase from 56.1% to 58.6%.
- Community pharmacies showed significant improvements, with the provision of correct advice increasing by 17% (P < .001) and safety checks improving from 47% to 55.3% in 1 year (P < .001).
- However, the choice of non–first-line antibiotics significantly increased by 29.2% in the second registration period (P < .001).
IN PRACTICE:
“These findings highlight the need for alternative and tailored approaches in antimicrobial stewardship programs in long-term care facilities, with a greater focus on nurses. This includes implementing hygiene measures and empowering nurses to improve the diagnosis of suspected infections, such as urinary tract infections, while debunking prevalent myths and providing clear-cut information for better management of these common infections,” the authors wrote.
SOURCE:
The study was led by Ana García-Sangenís, of Fundació Institut Universitari per a la Recerca a l’Atenció Primària de Salut Jordi Gol i Gurina, Barcelona, Spain, and was published online on November 12, 2024, in Family Practice.
LIMITATIONS:
The study lacked a control group, which limited the ability to attribute changes solely to the intervention. The voluntary participation of healthcare professionals might have introduced selection bias, as participants might have had a greater interest in quality improvement programs than the general population of healthcare providers. Clinical outcomes were not evaluated, which may have created ambiguity regarding whether complication rates or clinical failures varied between the groups.
DISCLOSURES:
This study received funding from the European Union’s Third Health Programme. One author reported receiving fees from pharmaceutical companies and acting as a member of the board of Steno Diabetes Center, Odense, Denmark.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Infliximab vs Adalimumab: Which Is Best for Behçet Syndrome?
TOPLINE:
Both infliximab and adalimumab are safe and effective in achieving remission in patients with severe mucocutaneous Behçet syndrome, with adalimumab demonstrating a quicker response time; both drugs also improve quality of life and disease activity scores.
METHODOLOGY:
- Researchers conducted a phase 3 prospective study to evaluate the efficacy and safety of the anti–tumor necrosis factor–alpha agents infliximab and adalimumab in patients with Behçet syndrome who had mucocutaneous manifestations and an inadequate response to prior treatments; patients were recruited from four Italian tertiary referral centers specializing in Behçet syndrome.
- Patients were randomly assigned to receive either 5 mg/kg intravenous infliximab at weeks 0, 2, and 6 and then every 6-8 weeks (n = 22; mean age, 46 years; 32% women) or 40 mg subcutaneous adalimumab every 2 weeks (n = 18; mean age, 48 years; 28% women) for 24 weeks.
- Patients were followed up for an additional 12 weeks after the treatment period, with regular assessments of disease activity, safety, and adherence to treatment.
- The primary outcome was the time to response of mucocutaneous manifestations over 6 months; the secondary outcomes included relapse rates; quality of life, assessed using the Short-Form Health Survey 36; and disease activity, assessed using the Behçet Disease Current Activity Form.
- The safety and tolerability of the drugs were evaluated as the frequency of treatment-emergent adverse events (AEs) and serious AEs, monitored every 2 weeks.
TAKEAWAY:
- The resolution of mucocutaneous manifestations was achieved significantly more quickly with adalimumab than with infliximab, with a median time to response of 42 vs 152 days (P = .001); the proportion of responders was also higher in the adalimumab group than in the infliximab group (94% vs 64%; P = .023).
- Patients in the infliximab group had a higher risk for nonresponse (adjusted hazard ratio [HR], 3.33; P = .012) and relapse (adjusted HR, 7.57; P = .036) than those in the adalimumab group.
- Both infliximab and adalimumab significantly improved the quality of life in all dimensions (P < .05 for all) and disease activity scores (P < .001 for both) from baseline to the end of the study period, with no significant differences found between the groups.
- Two AEs were reported in the adalimumab group, one of which was serious (myocardial infarction); three nonserious AEs were reported in the infliximab group.
IN PRACTICE:
“ADA [adalimumab] and IFX [infliximab] were generally well tolerated and efficacious in patients with BS [Behçet syndrome] who showed an inadequate response to prior treatments with at least AZA [azathioprine] or CyA [cyclosporine],” the authors wrote. “Although a more detailed treat-to-target profile is yet to be better defined, [the study] results are also crucial in terms of prescriptiveness (currently off label), not only in Italy but also beyond national borders, as the evidence coming from real life still needs to be confirmed by growing data from clinical trials.”
SOURCE:
The study was led by Rosaria Talarico, MD, PhD, University of Pisa in Italy, and was published online in Annals of the Rheumatic Diseases.
LIMITATIONS:
The small sample size and the distinctive study design may have limited the generalizability of the findings.
DISCLOSURES:
This study was funded through a grant from the Italian Medicines Agency. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
TOPLINE:
Both infliximab and adalimumab are safe and effective in achieving remission in patients with severe mucocutaneous Behçet syndrome, with adalimumab demonstrating a quicker response time; both drugs also improve quality of life and disease activity scores.
METHODOLOGY:
TOPLINE:
Both infliximab and adalimumab are safe and effective in achieving remission in patients with severe mucocutaneous Behçet syndrome, with adalimumab demonstrating a quicker response time; both drugs also improve quality of life and disease activity scores.
METHODOLOGY:
- Researchers conducted a phase 3 prospective study to evaluate the efficacy and safety of the anti–tumor necrosis factor–alpha agents infliximab and adalimumab in patients with Behçet syndrome who had mucocutaneous manifestations and an inadequate response to prior treatments; participants were recruited from four Italian tertiary referral centers specializing in Behçet syndrome.
- Patients were randomly assigned to receive either 5 mg/kg intravenous infliximab at weeks 0, 2, and 6 and then every 6-8 weeks (n = 22; mean age, 46 years; 32% women) or 40 mg subcutaneous adalimumab every 2 weeks (n = 18; mean age, 48 years; 28% women) for 24 weeks.
- Patients were followed up for an additional 12 weeks after the treatment period, with regular assessments of disease activity, safety, and adherence to treatment.
- The primary outcome was the time to response of mucocutaneous manifestations over 6 months; the secondary outcomes included relapse rates; quality of life, assessed using the Short-Form Health Survey 36; and disease activity, assessed using the Behçet Disease Current Activity Form.
- The safety and tolerability of the drugs were evaluated as the frequency of treatment-emergent adverse events (AEs) and serious AEs, monitored every 2 weeks.
TAKEAWAY:
- The resolution of mucocutaneous manifestations was achieved significantly more quickly with adalimumab than with infliximab, with a median time to response of 42 vs 152 days (P = .001); the proportion of responders was also higher in the adalimumab group than in the infliximab group (94% vs 64%; P = .023).
- Patients in the infliximab group had a higher risk for nonresponse (adjusted hazard ratio [HR], 3.33; P = .012) and relapse (adjusted HR, 7.57; P = .036) than those in the adalimumab group.
- Both infliximab and adalimumab significantly improved the quality of life in all dimensions (P < .05 for all) and disease activity scores (P < .001 for both) from baseline to the end of the study period, with no significant differences found between the groups.
- Two AEs were reported in the adalimumab group, one of which was serious (myocardial infarction); three nonserious AEs were reported in the infliximab group.
IN PRACTICE:
“ADA [adalimumab] and IFX [infliximab] were generally well tolerated and efficacious in patients with BS [Behçet syndrome] who showed an inadequate response to prior treatments with at least AZA [azathioprine] or CyA [cyclosporine],” the authors wrote. “Although a more detailed treat-to-target profile is yet to be better defined, [the study] results are also crucial in terms of prescriptiveness (currently off label), not only in Italy but also beyond national borders, as the evidence coming from real life still needs to be confirmed by growing data from clinical trials.”
SOURCE:
The study was led by Rosaria Talarico, MD, PhD, University of Pisa in Italy, and was published online in Annals of the Rheumatic Diseases.
LIMITATIONS:
The small sample size and the distinctive study design may have limited the generalizability of the findings.
DISCLOSURES:
This study was funded through a grant from the Italian Medicines Agency. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Postpartum Depression Common After Cesarean Delivery
TOPLINE:
About one in six women experience symptoms of postpartum depression (PPD) 2 months after cesarean delivery, with certain obstetric factors such as emergency cesarean delivery before labor, cesarean delivery after labor induction, lack of social support in the operating room, and severe postoperative pain influencing the risk.
METHODOLOGY:
- Researchers conducted a prospective ancillary cohort study of the Tranexamic Acid for Preventing Postpartum Hemorrhage after Cesarean Delivery (TRAAP2) trial to examine the prevalence of PPD 2 months after cesarean delivery and associated risk factors.
- A total of 2793 women (median age, 33.5 years) who had a cesarean delivery at 34 or more weeks of gestation were included; they completed the Edinburgh Postnatal Depression Scale (EPDS), a self-administered questionnaire, 2 months after delivery.
- Information about the cesarean delivery, postpartum blood loss, immediate postpartum period, psychiatric history, and memories of delivery and postoperative pain were prospectively collected.
- Medical records were used to obtain details about patient characteristics; 5.0% of the women had a psychiatric history (including 2.4% with a history of depression).
- The main endpoint was a positive screen for symptoms consistent with depression (defined as a provisional PPD diagnosis) 2 months after cesarean delivery, with an EPDS score of 13 or higher.
TAKEAWAY:
- The prevalence of a provisional PPD diagnosis at 2 months after cesarean delivery was 16.4% (95% CI, 14.9%-18.0%) with an EPDS score of 13 or higher and 23.1% (95% CI, 21.4%-24.9%) with a cutoff of 11 or higher.
- Women who had an emergency cesarean delivery before labor had a higher risk for PPD than those who had a nonemergency cesarean delivery before labor (adjusted odds ratio [aOR], 1.70; 95% CI, 1.15-2.50); women who underwent cesarean delivery after induced labor also had a higher risk for PPD than those who had a cesarean delivery before going into labor (aOR, 1.36; 95% CI, 1.03-1.84).
- Severe pain during the postpartum stay (aOR, 1.73; 95% CI, 1.32-2.26) and bad memories of delivery (aOR, 1.67; 95% CI, 1.14-2.45) were also risk factors for PPD.
- However, women who had social support in the operating room showed a 27% lower risk for PPD (P = .02).
IN PRACTICE:
“Identifying subgroups of women at risk for PPD based on aspects of their obstetric experience could help to screen for women who might benefit from early screening and interventions,” the authors wrote.
SOURCE:
This study was led by Alizée Froeliger, MD, MPH, of the Department of Obstetrics and Gynecology at Bordeaux University Hospital in France, and was published online in American Journal of Obstetrics & Gynecology.
LIMITATIONS:
The study population was derived from a randomized controlled trial, which may have underestimated the prevalence of PPD. The use of a self-administered questionnaire for PPD screening may not have provided a definitive diagnosis. Moreover, this study did not assess the prevalence of depressive symptoms during pregnancy.
DISCLOSURES:
The TRAAP2 trial was supported by a grant from the French Ministry of Health under its Clinical Research Hospital Program. One author reported carrying out consultancy work and lecturing for Ferring Laboratories, GlaxoSmithKline, and other pharmaceutical companies.
Topiramate Plus Metformin Effective for Weight Loss in PCOS
TOPLINE:
In women with polycystic ovary syndrome (PCOS) and overweight or obesity, the combination of topiramate and metformin, along with a low-calorie diet, can result in effective weight loss and improve androgen levels, lipid levels, and psychosocial scores, without serious adverse events.
METHODOLOGY:
- Topiramate is often used off-label for weight loss and may be a promising option added to a metformin regimen to improve cardiometabolic and reproductive health in women with PCOS and obesity or overweight when lifestyle changes alone fall short.
- This double-blind trial conducted at Hospital de Clínicas de Porto Alegre in Porto Alegre, Brazil, evaluated the effects of adding topiramate to metformin in 61 women aged 14-40 years with PCOS and body mass index (BMI) ≥ 30 or BMI ≥ 27 with concurrent hypertension, type 2 diabetes, or dyslipidemia.
- All participants were prescribed a 20 kcal/kg diet, as well as desogestrel for contraception during the study, and either started on 850 mg metformin or continued with their existing metformin regimen.
- They were randomly assigned to receive either topiramate or placebo (dosed at 25 mg for 15 days and then 50 mg at night) along with metformin, with dose adjustments based on weight loss at 3 months.
- The primary outcome was the percent change in body weight from baseline, and the secondary outcomes included changes in clinical, cardiometabolic, and hormonal parameters and psychosocial features at 3 and 6 months.
TAKEAWAY:
- Topiramate combined with metformin resulted in greater mean weight loss at 3 months (−3.4% vs −1.6%; P = .03) and 6 months (−4.5% vs −1.4%; P = .03) than placebo plus metformin.
- Both treatment groups showed improvements in androgen and lipid levels and psychosocial scores, while the levels of C-reactive protein decreased only in the topiramate plus metformin group.
- Women who experienced ≥ 3% weight loss at 6 months showed a significant improvement in hirsutism scores (modified Ferriman-Gallwey score decreasing from 8.4 to 6.5), unlike those who experienced < 3% weight loss (score changing from 8.02 to 8.78).
- Paresthesia was more common in the topiramate plus metformin group than in the metformin plus placebo group (23.3% vs 3.2%), but no serious adverse events were reported.
IN PRACTICE:
“In the era of new effective drugs for treating obesity, topiramate with metformin can be an option for women with obesity and PCOS, considering its low cost, reports of long-term experience with this medication, and ease to use,” the authors wrote.
SOURCE:
The study was led by Lucas Bandeira Marchesan, Gynecological Endocrinology Unit, Division of Endocrinology, Hospital de Clínicas de Porto Alegre, and was published online in The Journal of Clinical Endocrinology & Metabolism.
LIMITATIONS:
The small sample size and high attrition rates were major limitations of this study. Increasing the topiramate dose at 3 months in those with < 3% weight loss did not provide additional benefit, and this study did not test for a higher topiramate dose response from the beginning, which could have potentially provided a better response to the medication. The small sample size of the study also prevented the authors from conducting a subgroup analysis.
DISCLOSURES:
The study was supported by research grants from the Conselho Nacional de Desenvolvimento Científico e Tecnológico, Brazil, and Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul, Brazil. The authors declared no conflicts of interest.
JIA Treatment Has Increasingly Involved New DMARDs Since 2001
TOPLINE:
The use of newer biologic or targeted synthetic disease-modifying antirheumatic drugs (b/tsDMARDs) for treating juvenile idiopathic arthritis (JIA) rose sharply from 2001 to 2022, while the use of conventional synthetic DMARDs (csDMARDs) plummeted, with adalimumab becoming the most commonly used b/tsDMARD.
METHODOLOGY:
- Researchers performed a serial cross-sectional study using Merative MarketScan Commercial Claims and Encounters data from 2000 to 2022 to describe recent trends in DMARD use for children with JIA in the United States.
- They identified 20,258 new episodes of DMARD use among 13,696 children with JIA (median age, 14 years; 67.5% girls) who newly initiated at least one DMARD.
- Participants were required to have ≥ 365 days of continuous healthcare and pharmacy eligibility prior to the index date, defined as the date of DMARD initiation.
TAKEAWAY:
- The use of csDMARDs declined from 89.5% to 43.2% between 2001 and 2022 (P < .001 for trend), whereas the use of bDMARDs increased from 10.5% to 50.0% over the same period (P < .001).
- Methotrexate was the most commonly used DMARD throughout the study period; however, as with other csDMARDs, its use declined from 42.1% in 2001 to 21.5% in 2022 (P < .001).
- Use of the tumor necrosis factor (TNF) inhibitor adalimumab doubled from 7% in 2007 to 14% in 2008 and increased further to 20.5% by 2022; adalimumab also became the predominant b/tsDMARD used after csDMARD monotherapy, accounting for 77.8% of prescriptions following csDMARDs in 2022.
- Although the use of individual TNF inhibitors increased, their overall share declined in recent years as the use of newer b/tsDMARDs, such as ustekinumab and secukinumab, rose.
IN PRACTICE:
“These real-world treatment patterns give us insight into how selection of therapies for JIA has evolved with increasing availability of effective agents and help prepare for future studies on comparative DMARD safety and effectiveness,” the authors wrote.
SOURCE:
The study was led by Priyanka Yalamanchili, PharmD, MS, Center for Pharmacoepidemiology and Treatment Science, Institute for Health, Rutgers University, New Brunswick, New Jersey, and was published online October 22, 2024, in Arthritis & Rheumatology.
LIMITATIONS:
The dependence on commercial claims data may have limited the generalizability of the findings to other populations, such as those with public insurance or without insurance. The study did not have access to demographic data of the participants to investigate the presence of disparities in the use of DMARDs. Moreover, the lack of clinical details about the patients with JIA, including disease severity and specialty of prescribers, may have affected the interpretation of the results.
DISCLOSURES:
The study was supported by funding from the National Institute of Arthritis and Musculoskeletal and Skin Diseases and several other institutes of the National Institutes of Health, as well as the Rheumatology Research Foundation and the Juvenile Diabetes Research Foundation. No conflicts of interest were reported by the authors.
TOPLINE:
The use of newer biologic or targeted synthetic disease-modifying antirheumatic drugs (b/tsDMARDs) for treating juvenile idiopathic arthritis (JIA) rose sharply from 2001 to 2022, while the use of conventional synthetic DMARDs (csDMARDs) plummeted, with adalimumab becoming the most commonly used b/tsDMARD.
METHODOLOGY:
- Researchers performed a serial cross-sectional study using Merative MarketScan Commercial Claims and Encounters data from 2000 to 2022 to describe recent trends in DMARD use for children with JIA in the United States.
- They identified 20,258 new episodes of DMARD use among 13,696 children with JIA (median age, 14 years; 67.5% girls) who newly initiated at least one DMARD.
- Participants were required to have ≥ 365 days of continuous healthcare and pharmacy eligibility prior to the index date, defined as the date of DMARD initiation.
TAKEAWAY:
- The use of csDMARDs declined from 89.5% to 43.2% between 2001 and 2022 (P < .001 for trend), whereas the use of bDMARDs increased from 10.5% to 50.0% over the same period (P < .001).
- Methotrexate was the most commonly used DMARD throughout the study period ; however, as with other csDMARDs, its use declined from 42.1% in 2001 to 21.5% in 2022 (P < .001 ).
- Use of the tumor necrosis factor inhibitor adalimumab doubled from 7% in 2007 to 14% in 2008 and increased further up to 20.5% by 2022; adalimumab also became the most predominantly used b/tsDMARD after csDMARD monotherapy, accounting for 77.8% of prescriptions following csDMARDs in 2022.
- Even though the use of individual TNF inhibitors increased, their overall popularity fell in recent years as the use of newer b/tsDMARDs, such as ustekinumab and secukinumab, increased.
IN PRACTICE:
“These real-world treatment patterns give us insight into how selection of therapies for JIA has evolved with increasing availability of effective agents and help prepare for future studies on comparative DMARD safety and effectiveness,” the authors wrote.
SOURCE:
The study was led by Priyanka Yalamanchili, PharmD, MS, Center for Pharmacoepidemiology and Treatment Science, Institute for Health, Rutgers University, New Brunswick, New Jersey, and was published online October 22, 2024, in Arthritis & Rheumatology.
LIMITATIONS:
The reliance on commercial claims data may limit the generalizability of the findings to other populations, such as those with public insurance or no insurance. The study lacked the demographic data needed to investigate disparities in DMARD use. Moreover, the absence of clinical details about the patients with JIA, including disease severity and prescriber specialty, may have affected the interpretation of the results.
DISCLOSURES:
The study was supported by funding from the National Institute of Arthritis and Musculoskeletal and Skin Diseases and several other institutes of the National Institutes of Health, as well as the Rheumatology Research Foundation and the Juvenile Diabetes Research Foundation. No conflicts of interest were reported by the authors.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.