Wrinkles, Dyspigmentation Improve with PDT in Small Study

Photodynamic therapy (PDT) — a treatment most commonly thought of for field cancerization — is an effective tool for reducing rhytides and lentigines, results from a small prospective study showed.

“Our study helps capture and quantify a phenomenon that clinicians who use PDT in their practice have already noticed: Patients experience a visible improvement across several cosmetically important metrics including but not limited to fine lines, wrinkles, and skin tightness following PDT,” one of the study authors, Luke Horton, MD, a fourth-year dermatology resident at the University of California, Irvine, said in an interview following the annual meeting of the American Society for Dermatologic Surgery, where he presented the results during an oral abstract session.

For the study, 11 patients underwent a 120-minute incubation period with 17% 5-aminolevulinic acid over the face, followed by visible blue light PDT exposure for 16 minutes, to reduce rhytides. The researchers used a Vectra imaging system to capture three-dimensional images of the patients before the procedure and during the follow-up. Three dermatologists analyzed the pre-procedure and post-procedure images and used a validated five-point Merz wrinkle severity scale to grade various regions of the face including the forehead, glabella, lateral canthal rhytides, melolabial folds, nasolabial folds, and perioral rhytides.

They also used a five-point solar lentigines scale to evaluate the change in the degree of pigmentation and the number of age spots before and after PDT, along with the change in rhytid severity, and used the seven-point Global Aesthetic Improvement Scale (GAIS) to gauge overall improvement of fine lines and wrinkles.

After a mean follow-up of 4.25 months, rhytid severity among the 11 patients was reduced by an average of 0.65 points on the Merz scale, with an SD of 0.20. Broken down by region, rhytid severity scores decreased by 0.2 points (SD, 0.42) for the forehead, 0.7 points (SD, 0.48) for the glabella and lateral canthal rhytides, 0.88 points (SD, 0.35) for the melolabial folds and perioral rhytides, and 0.8 points (SD, 0.42) for the nasolabial folds. (The researchers excluded ratings for the melolabial folds and perioral rhytides in two patients with beards.)

In other findings, solar lentigines grading showed an average reduction of 1 point (SD, 0.45), while the GAIS score improved by 1 or more in every patient, with an average score of 1.45 (SD, 0.52), indicating that all patients had some degree of improvement in facial rhytides following PDT.

“The degree of improvement as measured by our independent physician graders was impressive and not far off from those reported with CO2 ablative laser,” Horton said. “Further, the effect was not isolated to actinic keratoses but extended to improved appearance of fine lines, some deep lines, and lentigines. Although we are not implying that PDT is superior to and should replace lasers or other energy-based devices, it does provide a real, measurable cosmetic benefit.”

Clinicians, he added, can use these findings “to counsel their patients when discussing field cancerization treatment options, especially for patients who may be hesitant to undergo PDT as it can be a painful therapy with a considerable downtime for some.”

Lawrence J. Green, MD, clinical professor of dermatology, The George Washington University, Washington, DC, who was asked to comment on the study results, said that the findings “shine more light on the long-standing off-label use of PDT for lessening signs of photoaging. Like studies done before it, I think this adds an additional benefit to discuss for those who are considering PDT treatment for their actinic keratoses.”

Horton acknowledged certain limitations of the study including its small sample size and the fact that physician graders were not blinded to which images were pre- and post-treatment, “which could introduce an element of bias in the data,” he said. “But this being an unfunded project born out of clinical observation, we hope to later expand its size. Furthermore, we invite other physicians to join us to better study these effects and to design protocols that minimize adverse effects and maximize clinical outcomes.”

His co-authors were Milan Hirpara; Sarah Choe; Joel Cohen, MD; and Natasha A. Mesinkovska, MD, PhD.

The study authors reported no relevant disclosures; Green had none.

A version of this article appeared on Medscape.com.

Adjuvant Chemo Beneficial in TNBC With High Immune Infiltration

TOPLINE:

Patients with early-stage triple-negative breast cancer (TNBC) and high immune infiltration showed improved disease-free survival (DFS) with adjuvant capecitabine. These “immune-hot” patients had a 5-year DFS rate of 96.9% compared with 79.4% in the control group.

METHODOLOGY:

  • In some studies, adding extended capecitabine to standard adjuvant chemotherapy has been shown to improve DFS in patients with early-stage TNBC, and one subset analysis suggested improved outcomes were most strongly associated with high immune infiltration.
  • Researchers conducted a retrospective analysis of CBCSG010, a randomized phase 3 clinical trial, to identify the specific population that benefited from adjuvant capecitabine by analyzing the immune infiltration status of the tumors.
  • The CBCSG010 study of 585 patients originally found that adjuvant capecitabine improved 5-year disease-free survival in patients with TNBC by 5.9%.
  • This analysis included 207 patients (capecitabine arm, n = 104; control arm, n = 103) with serial formalin-fixed, paraffin-embedded tumor specimens; RNA sequencing data were available for 36 of these patients (capecitabine, n = 24; control, n = 12).
  • Transcriptome data on the tumor microenvironment were validated with immunohistochemical staining of two markers, programmed death-ligand 1 (PD-L1) and CD8, as well as stromal tumor-infiltrating lymphocytes (sTILs); patients with high PD-L1, CD8, and sTIL expression levels were defined as “immune hot.”

TAKEAWAY:

  • Patients with TNBC and high immune infiltration treated with capecitabine had a 5-year DFS rate of 96.9% compared with 79.4% in the control group (hazard ratio [HR], 0.13; 95% CI, 0.03-0.52; P = .049).
  • In the capecitabine group, the immune-hot patients had a higher 5-year DFS rate (96.9%) compared with immune-cold patients (76.4%; HR, 0.11; 95% CI, 0.04-0.29; P = .028).
  • Gene ontology analysis showed greater enrichment of immune-related pathways in patients without recurrence in the capecitabine group, as well as higher expression of TYMP, which encodes thymidine phosphorylase, a key enzyme in capecitabine metabolism.
  • High expression levels of immune biomarkers PD-L1, CD8, and sTILs were associated with significantly improved DFS in the capecitabine group.

IN PRACTICE:

“Our study suggested that immune-hot patients with TNBC are more likely to benefit from adjuvant capecitabine and that combining immunotherapy with chemotherapy may be expected to be more effective in immune-hot patients,” wrote the study authors.

SOURCE:

The study was led by Wenya Wu, MMed, and Yunsong Yang, MD, at the Department of Breast Surgery, Fudan University Shanghai Cancer Center in Shanghai, People’s Republic of China. It was published online in October 2024 in JNCCN — Journal of the National Comprehensive Cancer Network.

LIMITATIONS:

The retrospective nature of the sample collection limited the availability of RNA sequencing data. External verification was challenging due to limited accessibility of transcriptome data from patients treated with additional adjuvant capecitabine or standard chemotherapy alone. The criteria for identifying immune-hot tumors require further exploration and determination.

DISCLOSURES:

This study was funded by the National Natural Science Foundation of China, China Postdoctoral Science Foundation, and Shanghai Science and Technology Development Foundation. The authors disclosed no relevant conflicts of interest.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

Hospital Diagnostic Errors May Affect 7% of Patients

Diagnostic errors are common in hospitals and are largely preventable, according to a new observational study led by Anuj K. Dalal, MD, from the Division of General Internal Medicine at Brigham and Women’s Hospital and Harvard Medical School in Boston, published in BMJ Quality & Safety.

Dalal and his colleagues found that 1 in 14 general medicine patients (7%) suffer harm due to diagnostic errors, and up to 85% of these cases could be prevented.

Few Studies on Diagnostic Errors

The study also found that in-hospital adverse event surveillance underestimated the prevalence of harmful diagnostic errors.

“It is difficult to quantify and characterize diagnostic errors, which have been studied less than medication errors,” Micaela La Regina, MD, an internist and head of the Clinical Governance and Risk Management Unit at ASL 5 in La Spezia, Italy, told Univadis Italy. “Generally, it is estimated that around 50% of diagnostic errors are preventable, but the authors of this study went beyond simply observing the hospital admission period and followed their sample for 90 days after discharge. Their findings will need to be verified in other studies, but they seem convincing.”

The researchers in Boston selected a random sample of 675 hospital patients from a total of 9147 eligible cases who received general medical care between July 2019 and September 2021, excluding the peak of the COVID-19 pandemic (April-December 2020). They retrospectively reviewed the patients’ electronic health records using a structured method to evaluate the diagnostic process for potential errors and then estimated the impact and severity of any harm.

Sampling was stratified by risk: the researchers included all eligible cases with a transfer to intensive care more than 24 hours after admission (130 cases, 100% sampled), along with cases of death within 90 days of hospital admission or after discharge (141 cases, a 38.5% sample), cases with complex clinical problems but without intensive care transfer or death within 90 days of admission (298 cases, a 7% sample), and cases without high-risk criteria (106 cases, a 2.4% sample).

Each case was reviewed by two experts trained in the use of diagnostic error evaluation and research taxonomy, modified for acute care. Harm was classified as mild, moderate, severe, or fatal. The review assessed whether diagnostic error contributed to the harm and whether it was preventable. Cases with discrepancies or uncertainties regarding the diagnostic error or its impact were further examined by an expert panel.

Most Frequent Situations

Among all the cases examined, diagnostic errors were identified in 160 instances in 154 patients. The most frequent situations with diagnostic errors involved transfer to intensive care (54 cases), death within 90 days (34 cases), and complex clinical problems (52 cases). Diagnostic errors causing harm were found in 84 cases (82 patients), of which 37 (28.5%) occurred in those transferred to intensive care; 18 (13%) among patients who died within 90 days; 23 (8%) among patients with complex clinical issues; and 6 (6%) in low-risk cases.
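
These subgroup percentages appear to be calculated within each sampling stratum rather than as shares of the 84 harmful-error cases; a quick check against the stratum sizes reported above is consistent with the rounded figures in the text:

\[
\frac{37}{130} \approx 28.5\%, \qquad \frac{18}{141} \approx 12.8\%, \qquad \frac{23}{298} \approx 7.7\%, \qquad \frac{6}{106} \approx 5.7\%.
\]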

The severity of harm was categorized as minor in 5 cases (6%), moderate in 36 (43%), major in 25 (30%), and fatal in 18 (21.5%). Overall, the researchers estimated that the proportions of general medicine patients experiencing harmful, preventable, and seriously harmful diagnostic errors were slightly more than 7%, 6%, and 1%, respectively.

Most Frequent Diagnoses

The most common diagnoses associated with diagnostic errors in the study included heart failure, acute kidney injury, sepsis, pneumonia, respiratory failure, altered mental state, abdominal pain, and hypoxemia. Dalal and colleagues emphasize the need for more attention to diagnostic error analysis, including the adoption of artificial intelligence–based tools for medical record screening.

“The technological approach, with alert-based systems, can certainly be helpful, but more attention must also be paid to continuous training and the well-being of healthcare workers. It is also crucial to encourage greater listening to caregivers and patients,” said La Regina. She noted that in the past, a focus on error prevention has often led to an increased workload and administrative burden on healthcare workers. However, the well-being of healthcare workers is key to ensuring patient safety.

“Countermeasures to reduce diagnostic errors require a multimodal approach, targeting professionals, the healthcare system, and organizational aspects, because even waiting lists are a critical factor,” she said. As a clinical risk expert, she recently proposed an adaptation of the value-based medicine formula in the International Journal for Quality in Health Care to include healthcare professionals’ care experience as one of the elements that contribute to determining high-value healthcare interventions. “Experiments are already underway to reimburse healthcare costs based on this formula, which also allows the assessment of the value of skills and expertise acquired by healthcare workers,” concluded La Regina.

This story was translated from Univadis Italy using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

How Effective Is the High-Dose Flu Vaccine in Older Adults?

How can the immunogenicity and effectiveness of flu vaccines be improved in older adults? Several strategies are available, one being the addition of an adjuvant. For example, the MF59-adjuvanted vaccine has shown superior immunogenicity. However, “we do not have data from controlled and randomized clinical trials showing superior clinical effectiveness versus the standard dose,” Professor Odile Launay, an infectious disease specialist at Cochin Hospital in Paris, France, noted during a press conference. Another option is to increase the antigen dose in the vaccine, creating a high-dose (HD) flu vaccine.

Why is there a need for an HD vaccine? “The elderly population bears the greatest burden from the flu,” explained Launay. “This is due to three factors: An aging immune system, a higher number of comorbidities, and increased frailty.” Standard-dose flu vaccines are seen as offering suboptimal protection for those older than 65 years, which led to the development of a quadrivalent vaccine with four times the antigen dose of standard flu vaccines. This HD vaccine was introduced in France during the 2021/2022 flu season. A real-world cohort study has since been conducted to evaluate its effectiveness in the target population — those aged 65 years or older. The results were recently published in Clinical Microbiology and Infection.

Cohort Study

The study included 405,385 noninstitutionalized people aged 65 years or older who received the HD vaccine, matched in a 1:4 ratio with 1,621,540 individuals who received the standard-dose vaccine. Both groups had an average age of 77 years; 56% were women, and 51% were vaccinated in pharmacies. Most had been previously vaccinated against flu (91%), and 97% had completed a full COVID-19 vaccination schedule. More than half had at least one chronic illness.

Hospitalization rates for flu — the study’s primary outcome — were 69.5 vs 90.5 per 100,000 person-years in the HD vs standard-dose group. This represented a 23.3% reduction (95% CI, 8.4-35.8; P = .003).
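
As a rough sanity check on this figure (a back-of-the-envelope calculation using the crude rates above; the study's 23.3% estimate comes from its adjusted analysis):

\[
1 - \frac{69.5}{90.5} \approx 0.232,
\]

or a crude relative reduction of about 23%, in line with the reported adjusted estimate.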

Strengths and Limitations

Among the strengths of the study, Launay highlighted the large number of vaccinated participants older than 65 years — more than 7 million — and the widespread use of polymerase chain reaction flu tests in cases of hospitalization for respiratory infections, which improved flu coding in the database used. Additionally, the results were consistent with those of previous studies.

However, limitations included the retrospective design, which did not randomize participants and introduced potential bias. For example, the HD vaccine may have been prioritized for the oldest people or those with multiple comorbidities. Additionally, the 2021/2022 flu season was atypical, with the simultaneous circulation of the flu virus and SARS-CoV-2, as noted by Launay.

Conclusion

In conclusion, this first evaluation of the HD flu vaccine’s effectiveness in France showed a reduction in hospitalizations of approximately 25%, consistent with existing data covering 12 flu seasons. The vaccine has been available for longer in the United States and Northern Europe.

“The latest unpublished data from the 2022/23 season show a 27% reduction in hospitalizations with the HD vaccine in people over 65,” added Launay.

Note: Due to a pricing disagreement with the French government, Sanofi’s HD flu vaccine Efluelda, intended for people older than 65 years, will not be available this year. (See: Withdrawal of the Efluelda Influenza Vaccine: The Academy of Medicine Reacts). However, the company has submitted a dossier for a trivalent form for a return in the 2025/2026 season and is working on developing mRNA vaccines. Additionally, a combined flu/COVID-19 vaccine is currently in development.

The study was funded by Sanofi. Several authors are Sanofi employees. Odile Launay reported conflicts of interest with Sanofi, MSD, Pfizer, GSK, and Moderna.

This story was translated from Medscape’s French edition using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

Responses Sustained with Ritlecitinib in Patients with Alopecia Through 48 Weeks

TOPLINE:

Treatment with ritlecitinib sustained hair regrowth through week 48 in patients with alopecia areata (AA), and up to one third of nonresponders at week 24 also achieved responses by week 48.

METHODOLOGY:

  • Researchers conducted a post hoc analysis of an international, randomized, double-blind, placebo-controlled, phase 2b/3 trial (ALLEGRO) and included 718 adults and adolescents aged 12 or older with severe AA (Severity of Alopecia Tool [SALT] score ≥ 50).
  • Patients received various doses of the oral Janus kinase inhibitor ritlecitinib (200/50 mg, 200/30 mg, 50 mg, or 30 mg, with or without a 4-week loading dose) for up to 24 weeks and then continued on their assigned maintenance dose.
  • Researchers assessed sustained clinical responses at week 48 for those who had achieved SALT scores ≤ 20 and ≤ 10 at 24 weeks, and nonresponders at week 24 were assessed for responses through week 48.
  • Adverse events were also evaluated.

TAKEAWAY:

  • Among patients on ritlecitinib who had responded at week 24, SALT responses ≤ 20 were sustained in 85.2%-100% of patients through week 48. Similar results were seen among patients who achieved a SALT score ≤ 10 (68.8%-91.7%) and improvements in eyebrow (70.4%-96.9%) or eyelash (52.4%-94.1%) assessment scores.
  • Among those who were nonresponders at week 24, 22.2%-33.7% achieved a SALT score ≤ 20 and 19.8%-25.5% achieved a SALT score ≤ 10 by week 48. Similarly, among those with no eyebrow or eyelash responses at week 24, 19.7%-32.8% and 16.7%-30.2% had improved eyebrow or eyelash assessment scores, respectively, at week 48.
  • Between weeks 24 and 48, adverse events were reported in 74%-93% of patients who achieved a SALT score ≤ 20; most were mild or moderate. Two serious adverse events were reported but were deemed unrelated to treatment, and the safety profile was similar across all subgroups.
  • No deaths, malignancies, major cardiovascular events, opportunistic infections, or herpes zoster infections were observed.

IN PRACTICE:

“The majority of ritlecitinib-treated patients with AA who met target clinical response based on scalp, eyebrow, or eyelash regrowth at week 24 sustained their response through week 48 with continued treatment,” the authors wrote. “Some patients, including those with more extensive hair loss, may require ritlecitinib treatment beyond 6 months to achieve target clinical response,” they added.

SOURCE:

The study was led by Melissa Piliang, MD, of the Department of Dermatology, Cleveland Clinic, and was published online on October 17 in the Journal of the American Academy of Dermatology.

LIMITATIONS:

The analysis was limited by its post hoc nature, small sample size in each treatment group, and a follow-up period of only 48 weeks.

DISCLOSURES:

This study was funded by Pfizer. Piliang disclosed being a consultant or investigator for Pfizer, Eli Lilly, and Procter & Gamble. Six authors were employees or shareholders of or received salary from Pfizer. Other authors also reported financial relationships with pharmaceutical companies outside this work, including Pfizer.
 

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.


New Clinician Tool Aims to Stop ALS Diagnosis Delays


A new clinical education tool aims to speed the diagnosis of amyotrophic lateral sclerosis (ALS), which often goes undetected for months, even in neurologists’ offices.

The one-page “thinkALS” tool, designed for clinicians who don’t specialize in neuromuscular disorders, offers a guide for recognizing ALS symptoms and determining when to refer patients to an ALS clinic.

“Time is of the essence. It’s really important because the paradigm of looking at ALS is shifting from this being a fatal disease that nobody can do anything about,” said Suma Babu, MBBS, MPH, assistant professor of neurology at Massachusetts General Hospital/Harvard Medical School in Boston, in a presentation at the American Association of Neuromuscular & Electrodiagnostic Medicine (AANEM) 2024 annual meeting. “As a community, we need to think about how [we] can get to the diagnosis point early and get patients started on therapies.”
 

On Average, ALS Diagnosis Takes 12-15 Months

As Babu noted, the percentage of patients initially diagnosed with something else may be as high as 52%. The mean time to diagnosis in ALS remained steady at 12-15 months between the 1996-1998 and 2000-2018 study periods.

“If you keep in mind that an average ALS patient lives only 3-5 years from symptom onset, they’re spending one third of their survival time in just trying to figure out what the diagnosis is,” Babu said. “Often, they may even undergo unnecessary testing and unnecessary surgeries — carpal tunnel releases, spinal surgeries, and so on.”

Babu’s own research, which is under review for publication, examined 2011-2021 Medicare claims to determine the typical time from first neurologist consult to confirmed ALS diagnosis. The mean for ALS/neuromuscular specialists is 9.6 months, while it’s 16.7 months for nonspecialist neurologists.

“It’s a hard pill to swallow,” Babu said, referring to the fact that neurologists are contributing to some of this situation. “But it is a challenge because ALS does not have a definitive diagnostic test, and you’re ruling out other possibilities.”
 

A ‘Sense of Nihilism’ About Prognoses

She added that “unless you’re seeing a lot of ALS patients, this is not going to be on a neurologist’s or a nurse practitioner’s radar to think about ALS early and then refer them to the right place.”

There’s also an unwarranted “sense of nihilism” about prognoses for patients, she said. “Sometimes people do not understand what’s going on within the ALS field in terms of ‘What are we going to do about it if it’s diagnosed?’ ”

The new one-page tool will be helpful in making diagnoses, she said. “If you have a patient who has asymmetric, progressive weakness, there is an instrument you can turn to that will walk you through the most common symptoms. It’ll also walk you through what to do next.”

The tool lists features of ALS and factors that support — or don’t support — an ALS diagnosis. Users are told to “think ALS” if features in two categories are present and no features in a third category are present.
 

Referral Wording Is Crucial

Babu added that the “important key feature of this instrument” is guidance for non-neurologists regarding what to write on a referral to neurology so the patient is channeled directly to an ALS clinic. The recommended wording: “CLINICAL SUSPICION FOR ALS.”

Neurologist Ximena Arcila-Londono, MD, of Henry Ford Health in Detroit, spoke after Babu’s presentation and agreed that wording is crucial in referrals. “Please include in your words ‘Rule out motor neuron disorder’ or ‘Rule out ALS,’ ” she said. “Some people in the community are very reluctant to use those words in their referral. If you don’t use the referral and you send them [regarding] weakness, that person is going to get stuck in the general neurology pile. The moment you use the word ‘motor neuron disorder’ or ALS, most of us will get to those patients within a month.”

The tool’s wording adds that “most ALS centers can accommodate urgent ALS referrals within 2 weeks.”

Babu disclosed receiving research funding from the AANEM Foundation, American Academy of Neurology, Muscular Dystrophy Association, OrphAI, Biogen, Ionis, Novartis, Denali, uniQure, and MarvelBiome. Arcila-Londono had no disclosures.
 

A version of this article appeared on Medscape.com.


Contraceptive Users in the United States Show Preference for Alternative Sources


 

TOPLINE:

Individuals using contraceptive pills, patches, and rings must interact frequently with the healthcare system to continue using their method. More than half of US contraceptive users prefer alternative sources over traditional in-person care, with only 35.6% of respondents selecting in-person care as their most preferred source.

METHODOLOGY:

  • Researchers conducted a cross-sectional nationally representative survey in the United States in 2022 through NORC’s AmeriSpeak panel.
  • A total of 3059 eligible panelists, aged 15-44 years, completed the survey, with 595 individuals currently using a pill, patch, or ring contraceptive included in the analysis.
  • Primary outcomes measured were the use of any preferred source and the most preferred source when obtaining contraception.
  • Sources included in-person care, telehealth, pharmacist prescribing, online services, and over-the-counter access.
  • Data were analyzed from January 25, 2023, to August 15, 2024.

TAKEAWAY:

  • Only 35.6% of respondents selected in-person care as their most preferred source of contraception.
  • Only 49.7% of respondents obtained their method from a preferred source, while 39.8% received it from their most preferred source.
  • Respondents who previously reported being unable to get their method on time had higher odds of preferring an alternative source (adjusted odds ratio [AOR], 2.57; 95% CI, 1.36-4.87).
  • Those who recently received person-centered contraceptive counseling had lower odds of preferring an alternative source (AOR, 0.59; 95% CI, 0.35-0.98).

IN PRACTICE:

“The low level of preference for in-person care suggests that expanding contraceptive sources outside of traditional healthcare settings has a role in ameliorating barriers to access and can promote reproductive autonomy,” wrote the authors of the study.

SOURCE:

The study was led by Anu Manchikanti Gómez, PhD, Sexual Health and Reproductive Equity Program, School of Social Welfare, University of California, Berkeley. It was published online in JAMA Network Open.

LIMITATIONS:

The study’s cross-sectional design limited the ability to establish causality. The sample was limited to individuals aged 15-44 years, which may not represent all contraceptive users. Self-reported data may be subject to recall bias. The study did not distinguish between synchronous and asynchronous telehealth preferences.

DISCLOSURES:

The study was supported by Arnold Ventures. Gómez disclosed receiving personal fees from various organizations outside the submitted work. Additional disclosures are noted in the original article.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.


PCOS Linked to Hypertensive Blood Pressure in Teens


 

TOPLINE:

Adolescent girls with polycystic ovary syndrome (PCOS) have an increased risk for hypertension, according to a new study that underscores the importance of blood pressure surveillance in this population.

METHODOLOGY:

  • The retrospective cohort study examined the association between PCOS and hypertension in adolescent girls within a diverse community-based US healthcare population.
  • The researchers analyzed data from 224,418 adolescent girls (mean age at index visit, 14.9 years; 15.8% classified as having obesity) who had a well-child visit between 2013 and 2019, during which their systolic blood pressure and diastolic blood pressure were measured.
  • Blood pressure in the hypertensive range was classified using the 2017 American Academy of Pediatrics Practice Guideline, with thresholds of 130/80 mm Hg or greater.

TAKEAWAY:

  • The proportion of adolescent girls with high blood pressure was significantly greater among those with PCOS than among those without the condition (18.2% vs 7.1%; P < .001).
  • Adolescent girls with PCOS had 25% higher odds of hypertension than those without the disorder (adjusted odds ratio [aOR], 1.25; 95% CI, 1.10-1.42).
  • Similarly, among adolescent girls with obesity, those with PCOS had 23% higher odds of high blood pressure than those without PCOS (aOR, 1.23; 95% CI, 1.06-1.42).

IN PRACTICE:

“The high prevalence of [hypertension] associated with PCOS emphasizes the key role of early [blood pressure] monitoring in this high-risk group,” the authors of the study wrote.

SOURCE:

The study was led by Sherry Zhang, MD, Kaiser Permanente Oakland Medical Center, Oakland, California, and was published online in the American Journal of Preventive Medicine.

LIMITATIONS:

The study relied on coded diagnoses of PCOS from clinical settings, which may have led to detection and referral biases. The findings may not be generalizable to an unselected population in which adolescent girls are systematically screened for both PCOS and hypertension.

DISCLOSURES:

This study was funded by the Cardiovascular and Metabolic Conditions Research Section and the Biostatistical Consulting Unit at the Division of Research, Kaiser Permanente Northern California and by the Kaiser Permanente Northern California Community Health Program. The authors declared having no conflicts of interest.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.


Dry Eye Linked to Increased Risk for Mental Health Disorders


 

TOPLINE:

Patients with dry eye disease are more than three times as likely to have mental health conditions, such as depression and anxiety, as those without the condition.

METHODOLOGY:

  • Researchers used a database from the National Institutes of Health to investigate the association between dry eye disease and mental health disorders in a large and diverse nationwide population of American adults.
  • They identified 18,257 patients (mean age, 64.9 years; 67% women) with dry eye disease who were propensity score–matched with 54,765 participants without the condition.
  • The cases of dry eye disease were identified using Systematized Nomenclature of Medicine codes for dry eyes, meibomian gland dysfunction, and tear film insufficiency.
  • The outcome measures for mental health conditions were clinical diagnoses of depressive disorders, anxiety-related disorders, bipolar disorder, and schizophrenia spectrum disorders.

TAKEAWAY:

  • Patients with dry eye disease had more than three times the odds of having a mental health condition compared with participants without the condition (adjusted odds ratio [aOR], 3.21; P < .001).
  • Patients with dry eye disease also had higher odds of a depressive disorder (aOR, 3.47), an anxiety-related disorder (aOR, 2.74), bipolar disorder (aOR, 2.23), and a schizophrenia spectrum disorder (aOR, 2.48; P < .001 for all) than participants without the condition.
  • The associations between dry eye disease and mental health conditions were significantly stronger among Black individuals than among White individuals, except for bipolar disorder.
  • Dry eye disease was associated with two- to threefold higher odds of depressive disorders, anxiety-related disorders, bipolar disorder, and schizophrenia spectrum disorders even in participants who never used medications for mental health (P < .001 for all).

IN PRACTICE:

“Greater efforts should be undertaken to screen patients with DED [dry eye disease] for mental health conditions, particularly in historically medically underserved populations,” the authors of the study wrote.

SOURCE:

This study was led by Aaron T. Zhao, of the Perelman School of Medicine at the University of Pennsylvania, Philadelphia, and was published online on October 15, 2024, in the American Journal of Ophthalmology.

LIMITATIONS:

This study relied on electronic health record data, which may have led to the inclusion of participants with undiagnosed dry eye disease as control participants. Moreover, the study did not evaluate the severity of dry eye disease or the severity and duration of mental health conditions, which may have affected the results. The database analyzed in this study may not have fully captured the complete demographic profile of the nationwide population, which may have affected the generalizability of the findings.

DISCLOSURES:

This study was supported by funding from the National Institutes of Health and Research to Prevent Blindness. The authors declared having no conflicts of interest.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.


Higher Doses of Vitamin D3 Do Not Reduce Cardiac Biomarkers in Older Adults


 

TOPLINE:

Higher doses of vitamin D3 supplementation did not significantly reduce cardiac biomarkers in older adults with low serum vitamin D levels. The STURDY trial found no significant differences in high-sensitivity cardiac troponin I (hs-cTnI) and N-terminal pro-B-type natriuretic peptide (NT-proBNP) between low- and high-dose groups.

METHODOLOGY:

  • A total of 688 participants aged 70 years or older with low serum 25-hydroxyvitamin D levels (10-29 ng/mL) were included in the STURDY trial.
  • Participants were randomized to receive one of four doses of vitamin D3 supplementation: 200, 1000, 2000, or 4000 IU/d, with 200 IU/d as the reference dose.
  • Cardiac biomarkers, including hs-cTnI and NT-proBNP, were measured at baseline, 3 months, 12 months, and 24 months.
  • The trial was conducted at two community-based research institutions in the United States between July 2015 and March 2019.
  • The effects of vitamin D3 dose on biomarkers were assessed via mixed-effects tobit models, with participants followed up to 24 months or until study termination.

TAKEAWAY:

  • Higher doses of vitamin D3 supplementation did not significantly affect hs-cTnI levels compared with the low-dose group (1.6% difference; 95% CI, −5.3 to 8.9).
  • No significant differences were observed in NT-proBNP levels between the high-dose and low-dose groups (−1.8% difference; 95% CI, −9.3 to 6.3).
  • Both hs-cTnI and NT-proBNP levels increased in both low- and high-dose groups over time, with hs-cTnI increasing by 5.2% and 7.0%, respectively, and NT-proBNP increasing by 11.3% and 9.3%, respectively.
  • The findings suggest that higher doses of vitamin D3 supplementation do not reduce markers of subclinical cardiovascular disease in older adults with low serum vitamin D levels.

IN PRACTICE:

“We can speculate that the systemic effects of vitamin D deficiency are more profound among the very old, and there may be an inverse relationship between supplementation and inflammation. It is also possible that serum vitamin D level is a risk marker but not a risk factor for CVD risk and related underlying mechanisms,” wrote the authors of the study.

SOURCE:

The study was led by Katharine W. Rainer, MD, Beth Israel Deaconess Medical Center in Boston. It was published online in the Journal of the American College of Cardiology.

LIMITATIONS:

The study’s community-based population may limit the generalizability of the findings to populations at higher risk for cardiovascular disease. Additionally, the baseline cardiac biomarkers were lower than those in some high-risk populations, which may affect the precision of the assay performance. The study may not have had adequate power for cross-sectional and subgroup analyses. Both groups received some vitamin D3 supplementation, making it difficult to determine the impact of lower-dose supplementation vs no supplementation.

DISCLOSURES:

The study was supported by grants from the National Institute on Aging, the Office of Dietary Supplements, the Mid-Atlantic Nutrition Obesity Research Center, and the Johns Hopkins Institute for Clinical and Translational Research. Rainer disclosed receiving grants from these organizations.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
