Liposomal Irinotecan May Be Effective in Later Lines for Pancreatic Cancer


Prior exposure to conventional irinotecan does not affect survival in patients with advanced pancreatic ductal adenocarcinoma (PDAC) who are subsequently treated with the liposomal formulation nanoliposomal irinotecan (nal‐IRI), according to a meta-analysis of real-world studies.

“The selection of later lines of chemotherapy regimens should be based on the differential safety profile, patient status, the cost of treatment, and health‐related quality of life,” reported lead author Amol Gupta, MD, from the Sidney Kimmel Comprehensive Cancer Center at the Johns Hopkins Hospital in Baltimore.

The findings, published in Cancer on July 10, 2024, are “noteworthy with respect to calling attention to this important situation and the role of a new drug in the PDAC pharmacologic armamentarium,” Vincent J. Picozzi, MD from the Division of Hematology‐Oncology, Virginia Mason Medical Center, Seattle, wrote in an accompanying editorial.

Treatment of PDAC remains a challenge, the authors noted, as most diagnoses occur at a metastatic or locally advanced stage, at which treatments provide only modest benefit while carrying significant toxicity.

The introduction of the first‐line FOLFIRINOX regimen (fluorouracil [5‐FU], leucovorin [LV], irinotecan [IRI], and oxaliplatin) and gemcitabine plus nanoparticle albumin‐bound (nab)‐paclitaxel “has improved the survival of patients with advanced PDAC and increased the number of patients eligible for subsequent therapies” beyond the first-line setting, according to the authors. But survival benefits with FOLFIRINOX and gemcitabine‐based therapy remain poor, with median overall survival (OS) of 6 and 7.6 months, respectively, they noted.
 

Timing Liposomal Irinotecan

The liposomal formulation of IRI has resulted in improved pharmacokinetics and decreased toxicity, but use of this new formulation in later lines of therapy is sometimes limited by concerns that prior exposure to conventional IRI in FOLFIRINOX might produce cross-resistance and reduce the benefit of nal-IRI, the authors explained.

To examine this question, the researchers pooled eight retrospective chart reviews published through April 2023, comprising a total of 1368 patients treated with nal-IRI for metastatic PDAC (five studies) or locally advanced and metastatic PDAC (three studies). Median patient age across studies ranged from 57.8 to 65 years, and the proportion of male patients ranged from 45.7% to 68.6%. Sample sizes ranged from 29 to 675 patients, with follow-up of 6-12.9 months.

Between 84.5% and 100% of patients had received two or more prior lines of therapy, with prior IRI exposure ranging from 20.9% to 100%. In total, 499 patients had prior IRI exposure, mostly in the first-line setting. Where reported, the most common reason for IRI discontinuation was disease progression.

Across all patients, pooled median progression-free survival (PFS) and OS were 2.02 months and 4.26 months, respectively. Outcomes were comparable between patients with and without prior IRI exposure for both PFS (hazard ratio [HR], 1.17; 95% CI, 0.94-1.47; P = .17) and OS (HR, 1.16; 95% CI, 0.95-1.42; P = .16). This held true regardless of whether patients had experienced progressive disease on conventional IRI: the PFS and OS of patients who discontinued conventional IRI because of progressive disease were comparable (HR, 1.50 and HR, 1.70, respectively) with those of patients who did not.
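The pooled hazard ratios, confidence intervals, and P values above are internally consistent under the standard normal approximation on the log-HR scale. As an illustrative back-of-envelope check (not the authors' method; the function name here is our own), the two-sided P value can be recovered from an HR and its 95% CI:

```python
import math

def p_from_hr_ci(hr, lo, hi, level_z=1.959964):
    """Two-sided P value implied by a hazard ratio and its 95% CI,
    assuming the log-HR is approximately normal with
    SE = (ln(hi) - ln(lo)) / (2 * z_0.975)."""
    se = (math.log(hi) - math.log(lo)) / (2 * level_z)
    z = math.log(hr) / se
    # Two-sided P from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Pooled PFS: HR 1.17 (95% CI, 0.94-1.47) -> matches the reported P = .17
print(round(p_from_hr_ci(1.17, 0.94, 1.47), 2))  # 0.17
# Pooled OS: HR 1.16 (95% CI, 0.95-1.42) -> about .15 (reported .16; CI rounding)
print(round(p_from_hr_ci(1.16, 0.95, 1.42), 2))  # 0.15
```

Neither result approaches significance, consistent with the authors' conclusion that prior IRI exposure did not affect nal-IRI outcomes.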

The authors reported substantial variation in predictors of nal-IRI outcomes among the studies examined. One study reported numerically better, but not statistically significant, PFS and OS associated with longer IRI exposure and a higher cumulative dose of prior IRI. Two studies suggested worse PFS and OS with later treatment lines of nal-IRI, although adjusted HRs did not confirm this. Treatment sequence was a significant predictor of outcome, as were surgery and metastatic disease, bone and liver metastases, serum albumin < 40 g/L, a neutrophil-to-lymphocyte ratio > 5, and an elevated baseline carbohydrate antigen 19-9 level.

“There are several reasons to consider why nal‐IRI may be effective after standard IRI has failed to be so,” said Dr. Picozzi, suggesting pharmacokinetics or an improved tumor tissue/normal tissue exposure ratio as possible explanations. “However, as the authors point out, the results from this pooled, retrospective review could also simply be the result of methodological bias (eg, selection bias) or patient selection,” he added.

“Perhaps the best way of considering the results of the analysis (remembering that the median PFS was essentially the same as the first restaging visit) is that nal‐IRI may be just another in a list of suboptimal treatment options in this situation,” he wrote. “If its inherent activity is very low in third‐line treatment (like all other agents to date), the use and response to prior standard IRI may be largely irrelevant.”

Consequently, he concurs with the authors that the selection of third-line treatment options and beyond for advanced PDAC should not be influenced by prior IRI exposure.

First-Line Liposomal Irinotecan Approved

In February 2024, the US Food and Drug Administration (FDA) approved nal-IRI (Onivyde) as part of a new first-line regimen for metastatic PDAC. In the new regimen (NALIRIFOX), nal-IRI is substituted for the conventional IRI in FOLFIRINOX, boosting the cost per cycle more than 15-fold, from around $500 to $7800.

The FDA approval was based on the results of the NAPOLI-3 study, which did not compare outcomes of NALIRIFOX with FOLFIRINOX but rather compared first-line NALIRIFOX with the combination of nab‐paclitaxel and gemcitabine. The study showed longer OS (HR, 0.83; 95% CI, 0.70-0.99; P = .04) and PFS (HR, 0.69; 95% CI, 0.58-0.83; P < .001) with the NALIRIFOX regimen.

The absence of a head-to-head comparison has some oncologists debating whether the new regimen is a potential new standard first-line treatment or whether the cost outweighs the potential benefits.

One study author reported grants/contracts from AbbVie, Bristol Myers Squibb, Curegenix, Medivir, Merck, and Nouscom; personal/consulting fees from Bayer, Catenion, G1 Therapeutics, Janssen Pharmaceuticals, Merck, Merus, Nouscom, Regeneron, Sirtex Medical Inc., Tango Therapeutics, and Tavotek Biotherapeutics; and support for other professional activities from Bristol Myers Squibb and Merck outside the submitted work. Another study author reported personal/consulting fees from Astellas Pharma, AstraZeneca, IDEAYA Biosciences, Merck, Merus, Moderna, RenovoRx, Seattle Genetics, and TriSalus Life Sciences outside the submitted work. The remaining authors and Picozzi disclosed no conflicts of interest.
 

A version of this article first appeared on Medscape.com.



Elinzanetant Reduces Menopausal Symptoms


 

TOPLINE: 

Elinzanetant significantly reduced the frequency and severity of vasomotor symptoms in menopausal women by week 12. The drug also improved sleep disturbances and menopause-related quality of life, with a favorable safety profile.

METHODOLOGY:  

  • Researchers conducted two randomized, double-blind, placebo-controlled phase 3 trials (OASIS 1 and 2) across 77 sites in the United States, Europe, Canada, and Israel.
  • A total of 796 postmenopausal participants aged 40-65 years experiencing moderate to severe vasomotor symptoms were included.
  • Participants received either 120 mg of elinzanetant or a placebo once daily for 12 weeks, followed by elinzanetant for an additional 14 weeks.
  • Primary outcomes measured were changes in frequency and severity of vasomotor symptoms from baseline to weeks 4 and 12, using an electronic hot flash daily diary.
  • Secondary outcomes included changes in sleep disturbances and menopause-related quality of life, assessed using the Patient-Reported Outcomes Measurement Information System Sleep Disturbance–Short Form 8b (PROMIS SD SF 8b) and Menopause-Specific Quality of Life (MENQOL) questionnaires.

TAKEAWAY:  

  • Elinzanetant significantly reduced the frequency of vasomotor symptoms by week 4 (OASIS 1: −3.3 [95% CI, −4.5 to −2.1]; OASIS 2: −3.0 [95% CI, −4.4 to −1.7]; P < .001).
  • By week 12, elinzanetant further reduced vasomotor symptom frequency (OASIS 1: −3.2 [95% CI, −4.8 to −1.6]; OASIS 2: −3.2 [95% CI, −4.6 to −1.9]; P < .001).
  • Elinzanetant improved sleep disturbances, with significant reductions in PROMIS SD SF 8b total T scores at week 12 (OASIS 1: −5.6 [95% CI, −7.2 to −4.0]; OASIS 2: −4.3 [95% CI, −5.8 to −2.9]; P < .001).
  • Menopause-related quality of life also improved significantly with elinzanetant, as indicated by reductions in MENQOL total scores at week 12 (OASIS 1: −0.4 [95% CI, −0.6 to −0.2]; OASIS 2: −0.3 [95% CI, −0.5 to −0.1]; P = .0059).

IN PRACTICE:

“These results have clinically relevant implications because vasomotor symptoms often pose significant impacts on menopausal individual’s overall health, everyday activities, sleep, quality of life, and work productivity,” wrote the study authors.

SOURCE:

The studies were led by JoAnn V. Pinkerton, MD, MSCP, University of Virginia Health in Charlottesville, and James A. Simon, MD, MSCP, George Washington University in Washington, DC. The results were published online in JAMA.

LIMITATIONS: 

The OASIS 1 and 2 trials included only postmenopausal individuals, which may limit the generalizability of the findings to other populations. The study relied on patient-reported outcomes, which can be influenced by subjective perception and may introduce bias. The placebo response observed in the trials is consistent with that seen in other vasomotor symptom studies, potentially affecting the interpretation of the results. Further research is needed to assess the long-term safety and efficacy of elinzanetant beyond the 26-week treatment period.

DISCLOSURES:

Dr. Pinkerton received grants from Bayer Pharmaceuticals to the University of Virginia and consulting fees from Bayer Pharmaceutical. Dr. Simon reported grants from Bayer Healthcare, AbbVie, Daré Bioscience, Mylan, and Myovant/Sumitomo and personal fees from Astellas Pharma, Ascend Therapeutics, California Institute of Integral Studies, Femasys, Khyra, Madorra, Mayne Pharma, Pfizer, Pharmavite, Scynexis Inc, Vella Bioscience, and Bayer. Additional disclosures are noted in the original article.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

 

TOPLINE: 

Elinzanetant significantly reduced the frequency and severity of vasomotor symptoms in menopausal women by week 12. The drug also improved sleep disturbances and menopause-related quality of life, with a favorable safety profile.

METHODOLOGY:  

  • Researchers conducted two randomized, double-blind, placebo-controlled phase 3 trials (OASIS 1 and 2) across 77 sites in the United States, Europe, Canada, and Israel.
  • A total of 796 postmenopausal participants aged 40-65 years experiencing moderate to severe vasomotor symptoms were included.
  • Participants received either 120 mg of elinzanetant or a placebo once daily for 12 weeks, followed by elinzanetant for an additional 14 weeks.
  • Primary outcomes measured were changes in frequency and severity of vasomotor symptoms from baseline to weeks 4 and 12, using an electronic hot flash daily diary.
  • Secondary outcomes included changes in sleep disturbances and menopause-related quality of life, assessed using the Patient-Reported Outcomes Measurement Information System Sleep Disturbance–Short Form 8b (PROMIS SD SF 8b) and Menopause-Specific Quality of Life (MENQOL) questionnaires.

TAKEAWAY:  

  • Elinzanetant significantly reduced the frequency of vasomotor symptoms by week 4 (OASIS 1: −3.3 [95% CI, −4.5 to −2.1]; OASIS 2: −3.0 [95% CI, −4.4 to −1.7]; P < .001).
  • By week 12, elinzanetant further reduced vasomotor symptom frequency (OASIS 1: −3.2 [95% CI, −4.8 to −1.6]; OASIS 2: −3.2 [95% CI, −4.6 to −1.9]; P < .001).
  • Elinzanetant improved sleep disturbances, with significant reductions in PROMIS SD-SF 8b total T scores at week 12 (OASIS 1: −5.6 [95% CI, −7.2 to −4.0]; OASIS 2: −4.3 [95% CI, −5.8 to −2.9]; P < .001).
  • Menopause-related quality of life also improved significantly with elinzanetant, as indicated by reductions in MENQOL total scores at week 12 (OASIS 1: −0.4 [95% CI, −0.6 to −0.2]; OASIS 2: − 0.3 [95% CI, −0.5 to − 0.1]; P = .0059).

IN PRACTICE:

“These results have clinically relevant implications because vasomotor symptoms often pose significant impacts on menopausal individual’s overall health, everyday activities, sleep, quality of life, and work productivity,” wrote the study authors.

SOURCE:

The studies were led by JoAnn V. Pinkerton, MD, MSCP, University of Virginia Health in Charlottesville, and James A. Simon, MD, MSCP, George Washington University in Washington, DC. The results were published online in JAMA.

LIMITATIONS: 

The OASIS 1 and 2 trials included only postmenopausal individuals, which may limit the generalizability of the findings to other populations. The study relied on patient-reported outcomes, which can be influenced by subjective perception and may introduce bias. The placebo response observed in the trials is consistent with that seen in other vasomotor symptom studies, potentially affecting the interpretation of the results. Further research is needed to assess the long-term safety and efficacy of elinzanetant beyond the 26-week treatment period.

DISCLOSURES:

Dr. Pinkerton received grants from Bayer Pharmaceuticals to the University of Virginia and consulting fees from Bayer Pharmaceutical. Dr. Simon reported grants from Bayer Healthcare, AbbVie, Daré Bioscience, Mylan, and Myovant/Sumitomo and personal fees from Astellas Pharma, Ascend Therapeutics, California Institute of Integral Studies, Femasys, Khyra, Madorra, Mayne Pharma, Pfizer, Pharmavite, Scynexis Inc, Vella Bioscience, and Bayer. Additional disclosures are noted in the original article.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

 

TOPLINE: 

Elinzanetant significantly reduced the frequency and severity of vasomotor symptoms in menopausal women by week 12. The drug also improved sleep disturbances and menopause-related quality of life, with a favorable safety profile.

METHODOLOGY:  

  • Researchers conducted two randomized, double-blind, placebo-controlled phase 3 trials (OASIS 1 and 2) across 77 sites in the United States, Europe, Canada, and Israel.
  • A total of 796 postmenopausal participants aged 40-65 years experiencing moderate to severe vasomotor symptoms were included.
  • Participants received either 120 mg of elinzanetant or a placebo once daily for 12 weeks, followed by elinzanetant for an additional 14 weeks.
  • Primary outcomes measured were changes in frequency and severity of vasomotor symptoms from baseline to weeks 4 and 12, using an electronic hot flash daily diary.
  • Secondary outcomes included changes in sleep disturbances and menopause-related quality of life, assessed using the Patient-Reported Outcomes Measurement Information System Sleep Disturbance–Short Form 8b (PROMIS SD SF 8b) and Menopause-Specific Quality of Life (MENQOL) questionnaires.

TAKEAWAY:  

  • Elinzanetant significantly reduced the frequency of vasomotor symptoms by week 4 (OASIS 1: −3.3 [95% CI, −4.5 to −2.1]; OASIS 2: −3.0 [95% CI, −4.4 to −1.7]; P < .001).
  • By week 12, elinzanetant further reduced vasomotor symptom frequency (OASIS 1: −3.2 [95% CI, −4.8 to −1.6]; OASIS 2: −3.2 [95% CI, −4.6 to −1.9]; P < .001).
  • Elinzanetant improved sleep disturbances, with significant reductions in PROMIS SD-SF 8b total T scores at week 12 (OASIS 1: −5.6 [95% CI, −7.2 to −4.0]; OASIS 2: −4.3 [95% CI, −5.8 to −2.9]; P < .001).
  • Menopause-related quality of life also improved significantly with elinzanetant, as indicated by reductions in MENQOL total scores at week 12 (OASIS 1: −0.4 [95% CI, −0.6 to −0.2]; OASIS 2: − 0.3 [95% CI, −0.5 to − 0.1]; P = .0059).

IN PRACTICE:

“These results have clinically relevant implications because vasomotor symptoms often pose significant impacts on menopausal individual’s overall health, everyday activities, sleep, quality of life, and work productivity,” wrote the study authors.

SOURCE:

The studies were led by JoAnn V. Pinkerton, MD, MSCP, University of Virginia Health in Charlottesville, and James A. Simon, MD, MSCP, George Washington University in Washington, DC. The results were published online in JAMA.

LIMITATIONS: 

The OASIS 1 and 2 trials included only postmenopausal individuals, which may limit the generalizability of the findings to other populations. The study relied on patient-reported outcomes, which can be influenced by subjective perception and may introduce bias. The placebo response observed in the trials is consistent with that seen in other vasomotor symptom studies, potentially affecting the interpretation of the results. Further research is needed to assess the long-term safety and efficacy of elinzanetant beyond the 26-week treatment period.

DISCLOSURES:

Dr. Pinkerton received grants from Bayer Pharmaceuticals to the University of Virginia and consulting fees from Bayer Pharmaceutical. Dr. Simon reported grants from Bayer Healthcare, AbbVie, Daré Bioscience, Mylan, and Myovant/Sumitomo and personal fees from Astellas Pharma, Ascend Therapeutics, California Institute of Integral Studies, Femasys, Khyra, Madorra, Mayne Pharma, Pfizer, Pharmavite, Scynexis Inc, Vella Bioscience, and Bayer. Additional disclosures are noted in the original article.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.


Untreated Hypertension Tied to Alzheimer’s Disease Risk

Article Type
Changed
Fri, 08/23/2024 - 15:34

 

TOPLINE:

Older adults with untreated hypertension have a 36% increased risk for Alzheimer’s disease (AD) compared with those without hypertension and a 42% increased risk for AD compared with those with treated hypertension.

METHODOLOGY:

  • In this meta-analysis, researchers analyzed the data of 31,250 participants aged 60 years or older (mean age, 72.1 years; 41% men) from 14 community-based studies across 14 countries.
  • Mean follow-up was 4.2 years, and blood pressure measurements, hypertension diagnosis, and antihypertensive medication use were recorded.
  • Overall, 35.9% had no history of hypertension or antihypertensive medication use, 50.7% had a history of hypertension with antihypertensive medication use, and 9.4% had a history of hypertension without antihypertensive medication use.
  • The main outcomes were AD and non-AD dementia.

TAKEAWAY:

  • In total, 1415 participants developed AD, and 681 developed non-AD dementia.
  • Participants with untreated hypertension had a 36% increased risk for AD compared with healthy controls (hazard ratio [HR], 1.36; P = .041) and a 42% increased risk for AD (HR, 1.42; P = .013) compared with those with treated hypertension.
  • Compared with healthy controls, patients with treated hypertension did not show an elevated risk for AD (HR, 0.961; P = .6644).
  • Patients with both treated (HR, 1.285; P = .027) and untreated (HR, 1.693; P = .003) hypertension had an increased risk for non-AD dementia compared with healthy controls. Patients with treated and untreated hypertension had a similar risk for non-AD dementia.

IN PRACTICE:

“These results suggest that treating high blood pressure as a person ages continues to be a crucial factor in reducing their risk of Alzheimer’s disease,” lead author Matthew J. Lennon, MD, PhD, said in a press release.

SOURCE:

This study was led by Matthew J. Lennon, MD, PhD, School of Clinical Medicine, UNSW Sydney, Sydney, Australia. It was published online in Neurology.

LIMITATIONS: 

Varied definitions for hypertension across different locations might have led to discrepancies in diagnosis. Additionally, the study did not account for potential confounders such as stroke, transient ischemic attack, and heart disease, which may act as mediators rather than covariates. Furthermore, the study did not report mortality data, which may have affected the interpretation of dementia risk.

DISCLOSURES:

This research was supported by the National Institute on Aging of the National Institutes of Health. Some authors reported ties with several institutions and pharmaceutical companies outside this work. Full disclosures are available in the original article.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.


Veterans Found Relief From Chronic Pain Through Telehealth Mindfulness

Article Type
Changed
Fri, 08/23/2024 - 15:09

 

TOPLINE:

Mindfulness-based interventions (MBIs) delivered via telehealth improve pain-related function and biopsychosocial outcomes in veterans with chronic pain compared with usual care.

METHODOLOGY:

  • Researchers conducted a randomized clinical trial of 811 veterans who had moderate to severe chronic pain and were recruited from three Veterans Affairs facilities in the United States.
  • Participants were divided into three groups: Group MBI (270), self-paced MBI (271), and usual care (270), with interventions lasting 8 weeks.
  • The primary outcome was pain-related function, measured at 10 weeks, 6 months, and 1 year using a scale assessing pain interference with mood, walking, work, relationships, and sleep.
  • Secondary outcomes included pain intensity, anxiety, fatigue, sleep disturbance, participation in social roles and activities, depression, and posttraumatic stress disorder (PTSD).

TAKEAWAY:

  • Pain-related function significantly improved in participants in both the MBI groups versus usual care group, with a mean difference of −0.4 (95% CI, −0.7 to −0.2) for group MBI and −0.7 (95% CI, −1.0 to −0.4) for self-paced MBI (P < .001).
  • Compared with the usual care group, both the MBI groups had significantly improved secondary outcomes, including pain intensity, depression, and PTSD.
  • The probability of achieving 30% improvement in pain-related function was higher for group MBI at 10 weeks and 6 months and for self-paced MBI at all three timepoints.
  • No significant differences were found between the MBI groups for primary and secondary outcomes.

IN PRACTICE:

“The viability and similarity of both these approaches for delivering MBIs increase patient options for meeting their individual needs and could help accelerate and improve the implementation of nonpharmacological pain treatment in health care systems,” the study authors wrote.

SOURCE:

The study was led by Diana J. Burgess, PhD, of the Center for Care Delivery and Outcomes Research, VA Health Systems Research in Minneapolis, Minnesota, and published online in JAMA Internal Medicine.

LIMITATIONS:

The trial was not designed to compare less resource-intensive MBIs with more intensive mindfulness-based stress reduction programs or in-person MBIs. The study did not address cost-effectiveness or control for time, attention, and other contextual factors. The high nonresponse rate (81%) to initial recruitment may have affected the generalizability of the findings.

DISCLOSURES:

The study was supported by the Pain Management Collaboratory–Pragmatic Clinical Trials Demonstration. Various authors reported grants from the National Center for Complementary and Integrative Health and the National Institute of Nursing Research.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.


The Next Frontier of Antibiotic Discovery: Inside Your Gut

Article Type
Changed
Tue, 08/27/2024 - 09:29

Scientists at Stanford University and the University of Pennsylvania have discovered a new antibiotic candidate in a surprising place: the human gut. 

In mice, the antibiotic — a peptide known as prevotellin-2 — showed antimicrobial potency on par with polymyxin B, an antibiotic medication used to treat multidrug-resistant infections. Meanwhile, the peptide mainly left commensal, or beneficial, bacteria alone. The study, published in Cell, also identified several other potent antibiotic peptides with the potential to combat antimicrobial-resistant infections.

The research is part of a larger quest to find new antibiotics that can fight drug-resistant infections, a critical public health threat with more than 2.8 million cases and 35,000 deaths annually in the United States. That quest is urgent, said study author César de la Fuente, PhD, professor of bioengineering at the University of Pennsylvania, Philadelphia. 

“The main pillars that have enabled us to almost double our lifespan in the last 100 years or so have been antibiotics, vaccines, and clean water,” said Dr. de la Fuente. “Imagine taking out one of those. I think it would be pretty dramatic.” (Dr. de la Fuente’s lab has become known for finding antibiotic candidates in unusual places, like ancient genetic information of Neanderthals and woolly mammoths.)

The first widely used antibiotic, penicillin, was discovered in 1928, when a physician studying Staphylococcus bacteria returned to his lab after summer break to find mold growing in one of his petri dishes. But many other antibiotics — like streptomycin, tetracycline, and erythromycin — were discovered from soil bacteria, which produce variations of these substances to compete with other microorganisms. 

By looking in the gut microbiome, the researchers hoped to identify peptides that the trillions of microbes use against each other in the fight for limited resources — ideally, peptides that wouldn’t broadly kill off the entire microbiome. 
 

Kill the Bad, Spare the Good

Many traditional antibiotics are small molecules. This means they can wipe out the good bacteria in your body, and because each targets a specific bacterial function, bad bacteria can become resistant to them.

Peptide antibiotics, on the other hand, don’t diffuse into the whole body. If taken orally, they stay in the gut; if taken intravenously, they generally stay in the blood. And because of how they kill bacteria, targeting the membrane, they’re also less prone to bacterial resistance.

The microbiome is like a big reservoir of pathogens, said Ami Bhatt, MD, PhD, hematologist at Stanford University in California and one of the study’s authors. Because many antibiotics kill healthy gut bacteria, “what you have left over,” Dr. Bhatt said, “is this big open niche that gets filled up with multidrug-resistant organisms like E coli [Escherichia coli] or vancomycin-resistant Enterococcus.”

Dr. Bhatt has seen cancer patients undergo successful treatment only to die of a multidrug-resistant infection, because current antibiotics fail against those pathogens. “That’s like winning the battle to lose the war.”

By investigating the microbiome, “we wanted to see if we could identify antimicrobial peptides that might spare key members of our regular microbiome, so that we wouldn’t totally disrupt the microbiome the way we do when we use broad-spectrum, small molecule–based antibiotics,” Dr. Bhatt said.

The researchers used artificial intelligence to sift through 400,000 proteins to predict, based on known antibiotics, which peptide sequences might have antimicrobial properties. From the results, they chose 78 peptides to synthesize and test.

“The application of computational approaches combined with experimental validation is very powerful and exciting,” said Jennifer Geddes-McAlister, PhD, professor of cell biology at the University of Guelph in Ontario, Canada, who was not involved in the study. “The study is robust in its approach to microbiome sampling.” 

The Long Journey from Lab to Clinic

More than half of the peptides the team tested effectively inhibited the growth of harmful bacteria, and prevotellin-2 (derived from the bacterium Prevotella copri) stood out as the most powerful.

“The study validates experimental data from the lab using animal models, which moves discoveries closer to the clinic,” said Dr. Geddes-McAlister. “Further testing with clinical trials is needed, but the potential for clinical application is promising.” 

Unfortunately, that’s not likely to happen anytime soon, said Dr. de la Fuente. “There is not enough economic incentive” for companies to develop new antibiotics. Ten years is his most hopeful guess for when we might see prevotellin-2, or a similar antibiotic, complete clinical trials.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Scientists at Stanford University and the University of Pennsylvania have discovered a new antibiotic candidate in a surprising place: the human gut. 

In mice, the antibiotic — a peptide known as prevotellin-2 — showed antimicrobial potency on par with polymyxin B, an antibiotic medication used to treat multidrug-resistant infections. Meanwhile, the peptide mainly left commensal, or beneficial, bacteria alone. The study, published in Cell, also identified several other potent antibiotic peptides with the potential to combat antimicrobial-resistant infections.

The research is part of a larger quest to find new antibiotics that can fight drug-resistant infections, a critical public health threat with more than 2.8 million cases and 35,000 deaths annually in the United States. That quest is urgent, said study author César de la Fuente, PhD, professor of bioengineering at the University of Pennsylvania, Philadelphia. 

“The main pillars that have enabled us to almost double our lifespan in the last 100 years or so have been antibiotics, vaccines, and clean water,” said Dr. de la Fuente. “Imagine taking out one of those. I think it would be pretty dramatic.” (Dr. De la Fuente’s lab has become known for finding antibiotic candidates in unusual places, like ancient genetic information of Neanderthals and woolly mammoths.)  

The first widely used antibiotic, penicillin, was discovered in 1928, when a physician studying Staphylococcus bacteria returned to his lab after summer break to find mold growing in one of his petri dishes. But many other antibiotics — like streptomycin, tetracycline, and erythromycin — were discovered from soil bacteria, which produce variations of these substances to compete with other microorganisms. 

By looking in the gut microbiome, the researchers hoped to identify peptides that the trillions of microbes use against each other in the fight for limited resources — ideally, peptides that wouldn’t broadly kill off the entire microbiome. 
 

Kill the Bad, Spare the Good

Many traditional antibiotics are small molecules. This means they can wipe out the good bacteria in your body, and because each targets a specific bacterial function, bad bacteria can become resistant to them.

Peptide antibiotics, on the other hand, don’t diffuse into the whole body. If taken orally, they stay in the gut; if taken intravenously, they generally stay in the blood. And because of how they kill bacteria, targeting the membrane, they’re also less prone to bacterial resistance.

The microbiome is like a big reservoir of pathogens, said Ami Bhatt, MD, PhD, hematologist at Stanford University in California and one of the study’s authors. Because many antibiotics kill healthy gut bacteria, “what you have left over,” Dr. Bhatt said, “is this big open niche that gets filled up with multidrug-resistant organisms like E coli [Escherichia coli] or vancomycin-resistant Enterococcus.”

Dr. Bhatt has seen cancer patients undergo successful treatment only to die of a multidrug-resistant infection, because current antibiotics fail against those pathogens. “That’s like winning the battle to lose the war.”

By investigating the microbiome, “we wanted to see if we could identify antimicrobial peptides that might spare key members of our regular microbiome, so that we wouldn’t totally disrupt the microbiome the way we do when we use broad-spectrum, small molecule–based antibiotics,” Dr. Bhatt said.

The researchers used artificial intelligence to sift through 400,000 proteins to predict, based on known antibiotics, which peptide sequences might have antimicrobial properties. From the results, they chose 78 peptides to synthesize and test.

“The application of computational approaches combined with experimental validation is very powerful and exciting,” said Jennifer Geddes-McAlister, PhD, professor of cell biology at the University of Guelph in Ontario, Canada, who was not involved in the study. “The study is robust in its approach to microbiome sampling.” 
 

 

 

The Long Journey from Lab to Clinic

More than half of the peptides the team tested effectively inhibited the growth of harmful bacteria, and prevotellin-2 (derived from the bacteria Prevotella copri)stood out as the most powerful.

“The study validates experimental data from the lab using animal models, which moves discoveries closer to the clinic,” said Dr. Geddes-McAlister. “Further testing with clinical trials is needed, but the potential for clinical application is promising.” 

Unfortunately, that’s not likely to happen anytime soon, said Dr. de la Fuente. “There is not enough economic incentive” for companies to develop new antibiotics. Ten years is his most hopeful guess for when we might see prevotellin-2, or a similar antibiotic, complete clinical trials.

A version of this article first appeared on Medscape.com.

Scientists at Stanford University and the University of Pennsylvania have discovered a new antibiotic candidate in a surprising place: the human gut. 

In mice, the antibiotic — a peptide known as prevotellin-2 — showed antimicrobial potency on par with polymyxin B, an antibiotic medication used to treat multidrug-resistant infections. Meanwhile, the peptide mainly left commensal, or beneficial, bacteria alone. The study, published in Cell, also identified several other potent antibiotic peptides with the potential to combat antimicrobial-resistant infections.

The research is part of a larger quest to find new antibiotics that can fight drug-resistant infections, a critical public health threat with more than 2.8 million cases and 35,000 deaths annually in the United States. That quest is urgent, said study author César de la Fuente, PhD, professor of bioengineering at the University of Pennsylvania, Philadelphia. 

“The main pillars that have enabled us to almost double our lifespan in the last 100 years or so have been antibiotics, vaccines, and clean water,” said Dr. de la Fuente. “Imagine taking out one of those. I think it would be pretty dramatic.” (Dr. De la Fuente’s lab has become known for finding antibiotic candidates in unusual places, like ancient genetic information of Neanderthals and woolly mammoths.)  

The first widely used antibiotic, penicillin, was discovered in 1928, when a physician studying Staphylococcus bacteria returned to his lab after summer break to find mold growing in one of his petri dishes. But many other antibiotics — like streptomycin, tetracycline, and erythromycin — were discovered from soil bacteria, which produce variations of these substances to compete with other microorganisms. 

By looking in the gut microbiome, the researchers hoped to identify peptides that the trillions of microbes use against each other in the fight for limited resources — ideally, peptides that wouldn’t broadly kill off the entire microbiome. 
 

Kill the Bad, Spare the Good

Many traditional antibiotics are small molecules. This means they can wipe out the good bacteria in your body, and because each targets a specific bacterial function, bad bacteria can become resistant to them.

Peptide antibiotics, on the other hand, don’t diffuse into the whole body. If taken orally, they stay in the gut; if taken intravenously, they generally stay in the blood. And because of how they kill bacteria, targeting the membrane, they’re also less prone to bacterial resistance.

The microbiome is like a big reservoir of pathogens, said Ami Bhatt, MD, PhD, hematologist at Stanford University in California and one of the study’s authors. Because many antibiotics kill healthy gut bacteria, “what you have left over,” Dr. Bhatt said, “is this big open niche that gets filled up with multidrug-resistant organisms like E coli [Escherichia coli] or vancomycin-resistant Enterococcus.”

Dr. Bhatt has seen cancer patients undergo successful treatment only to die of a multidrug-resistant infection, because current antibiotics fail against those pathogens. “That’s like winning the battle to lose the war.”

By investigating the microbiome, “we wanted to see if we could identify antimicrobial peptides that might spare key members of our regular microbiome, so that we wouldn’t totally disrupt the microbiome the way we do when we use broad-spectrum, small molecule–based antibiotics,” Dr. Bhatt said.

The researchers used artificial intelligence to sift through 400,000 proteins to predict, based on known antibiotics, which peptide sequences might have antimicrobial properties. From the results, they chose 78 peptides to synthesize and test.
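As a rough illustration of the screening step described above, the sketch below ranks candidate peptides and keeps the top scorers, mimicking the "sift a large pool, synthesize the best candidates" workflow. The scoring function here is a crude physicochemical heuristic (net charge plus hydrophobic fraction, with an arbitrary weight), not the study's trained model, and the example sequences are hypothetical.

```python
# Illustrative sketch only: the study used a trained predictive model over
# ~400,000 proteins. Here we rank peptides with two crude heuristics often
# associated with antimicrobial activity: net positive charge and
# hydrophobic residue fraction. Sequences and weights are hypothetical.
BASIC = set("KR")
ACIDIC = set("DE")
HYDROPHOBIC = set("AVILMFWY")

def amp_score(seq: str) -> float:
    """Higher score = more 'antimicrobial-like' under these crude heuristics."""
    charge = sum(aa in BASIC for aa in seq) - sum(aa in ACIDIC for aa in seq)
    hydro_frac = sum(aa in HYDROPHOBIC for aa in seq) / len(seq)
    return charge + 5.0 * hydro_frac  # 5.0 is an arbitrary weighting choice

def top_candidates(peptides, k):
    """Return the k highest-scoring peptides (the 'choose 78 to synthesize' step)."""
    return sorted(peptides, key=amp_score, reverse=True)[:k]

if __name__ == "__main__":
    candidates = ["KKLFKKILKYL", "DDEEDGSAA", "GIGKFLHSAKKFGKAFVGEIMNS"]
    print(top_candidates(candidates, 2))
```

In the real pipeline the scoring function is learned from known antibiotics rather than hand-written, but the select-and-validate loop has the same shape.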

“The application of computational approaches combined with experimental validation is very powerful and exciting,” said Jennifer Geddes-McAlister, PhD, professor of cell biology at the University of Guelph in Ontario, Canada, who was not involved in the study. “The study is robust in its approach to microbiome sampling.” 
 

 

 

The Long Journey from Lab to Clinic

More than half of the peptides the team tested effectively inhibited the growth of harmful bacteria, and prevotellin-2 (derived from the bacterium Prevotella copri) stood out as the most powerful.

“The study validates experimental data from the lab using animal models, which moves discoveries closer to the clinic,” said Dr. Geddes-McAlister. “Further testing with clinical trials is needed, but the potential for clinical application is promising.” 

Unfortunately, that’s not likely to happen anytime soon, said Dr. de la Fuente. “There is not enough economic incentive” for companies to develop new antibiotics. Ten years is his most hopeful guess for when we might see prevotellin-2, or a similar antibiotic, complete clinical trials.

A version of this article first appeared on Medscape.com.

FROM CELL

Severe COVID-19 Tied to Increased Risk for Mental Illness

Article Type
Changed
Fri, 08/23/2024 - 13:09

New research adds to a growing body of evidence suggesting that COVID-19 infection can be hard on mental health. 

The UK study of more than 18 million adults showed an elevated rate of mental illness, including depression and serious mental illness, for up to a year following a bout of COVID-19, particularly in those with severe COVID who had not been vaccinated. 

Importantly, vaccination appeared to mitigate the adverse effects of COVID-19 on mental health, the investigators found. 

“Our results highlight the importance of COVID-19 vaccination in the general population and particularly among those with mental illnesses, who may be at higher risk of both SARS-CoV-2 infection and adverse outcomes following COVID-19,” first author Venexia Walker, PhD, with University of Bristol, United Kingdom, said in a news release. 

The study was published online on August 21 in JAMA Psychiatry.
 

Novel Data

“Before this study, a number of papers had looked at associations of COVID diagnosis with mental ill health, and broadly speaking, they had reported associations of different magnitudes,” study author Jonathan A. C. Sterne, PhD, with University of Bristol, noted in a journal podcast. 

“Some studies were restricted to patients who were hospitalized with COVID-19 and some not and the duration of follow-up varied. And importantly, the nature of COVID-19 changed profoundly as vaccination became available and there was little data on the impact of vaccination on associations of COVID-19 with subsequent mental ill health,” Dr. Sterne said. 

The UK study was conducted in three cohorts — a cohort of about 18.6 million people who were diagnosed with COVID-19 before a vaccine was available, a cohort of about 14 million adults who were vaccinated, and a cohort of about 3.2 million people who were unvaccinated.

The researchers compared rates of various mental illnesses after COVID-19 with rates before or without COVID-19 and by vaccination status.

Across all cohorts, rates of most mental illnesses examined were “markedly elevated” during the first month following a COVID-19 diagnosis compared with rates before or without COVID-19.

For example, the adjusted hazard ratios for depression (the most common illness) and serious mental illness in the month after COVID-19 were 1.93 and 1.49, respectively, in the prevaccination cohort and 1.79 and 1.45, respectively, in the unvaccinated cohort compared with 1.16 and 0.91 in the vaccinated cohort.
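To make the hazard ratio concrete, the toy calculation below uses hypothetical event rates (not the study's underlying data): a hazard ratio is the ratio of the instantaneous event rate in the exposed group to that in the comparator group.

```python
# Hypothetical illustration of a hazard ratio; these rates are invented
# for arithmetic only and are not taken from the study.
baseline_rate = 10.0 / 1000    # depression events per person-month, no COVID-19
post_covid_rate = 19.3 / 1000  # events per person-month in the month after COVID-19

# The hazard ratio is the ratio of the two instantaneous rates.
hazard_ratio = post_covid_rate / baseline_rate
print(round(hazard_ratio, 2))  # prints 1.93
```

So a hazard ratio of 1.93 means depression was occurring at nearly twice the baseline rate in that first month.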

This elevation in the rate of mental illnesses was seen mainly after severe COVID-19 that led to hospitalization, and rates remained elevated for up to a year following severe COVID-19 in unvaccinated adults.

For severe COVID-19 with hospitalization, the adjusted hazard ratio for depression in the month following admission was 16.3 in the prevaccine cohort, 15.6 in the unvaccinated cohort, and 12.9 in the vaccinated cohort.

The adjusted hazard ratios for serious mental illness in the month after COVID hospitalization were 9.71 in the prevaccine cohort, 8.75 with no vaccination, and 6.52 with vaccination. 

“Incidences of other mental illnesses were broadly similar to those of depression and serious mental illness, both overall and for COVID-19 with and without hospitalization,” the authors report in their paper.

Consistent with prior research, subgroup analyses found the association between COVID-19 and mental illness was stronger among older adults and men, with no marked differences by ethnic group.

“We should be concerned about continuing consequences in people who experienced severe COVID-19 early in the pandemic, and they may include a continuing higher incidence of mental ill health, such as depression and serious mental illness,” Dr. Sterne said in the podcast. 

In terms of ongoing booster vaccinations, “people who are advised that they are undervaccinated or recommended for further COVID-19 vaccination should take those invitations seriously, because by preventing severe COVID-19, which is what vaccination does, you can prevent consequences such as mental illness,” Dr. Sterne added. 

The study was supported by the COVID-19 Longitudinal Health and Wellbeing National Core Study, which is funded by the Medical Research Council and National Institute for Health and Care Research. The authors had no relevant conflicts of interest.
 

A version of this article first appeared on Medscape.com.


A Step-by-Step Guide for Diagnosing Cushing Syndrome

Article Type
Changed
Fri, 08/23/2024 - 13:04

“Moon face” is a term that’s become popular on social media, used to describe people with unusually round faces who are purported to have high levels of cortisol. But the term “moon face” isn’t new. It was actually coined in the 1930s by neurosurgeon Harvey Cushing, MD, who identified patients with a constellation of clinical characteristics — a condition that came to bear his name — which included rapidly developing facial adiposity. And indeed, elevated cortisol is a hallmark feature of Cushing syndrome (CS), but there are other reasons for elevated cortisol and other manifestations of CS.

Today, the term “moon face” has been replaced with “round face,” which is considered more encompassing and culturally sensitive, said Maria Fleseriu, MD, professor of medicine and neurological surgery and director of the Pituitary Center at Oregon Health and Science University in Portland, Oregon.

Facial roundness can lead clinicians to be suspicious that their patient is experiencing CS. But because a round face is associated with several other conditions, it’s important to be familiar with its particular presentation in CS, as well as how to diagnose and treat CS.
 

Pathophysiology of CS

Dr. Fleseriu defined CS as “prolonged nonphysiologic increase in cortisol, due either to exogenous use of steroids (oral, topical, or inhaled) or to excess endogenous cortisol production.” She added that it’s important “to always exclude exogenous causes before conducting a further workup to determine the type and cause of cortisol excess.”

Cushing disease is an endogenous form of CS caused by a corticotroph adenoma of the pituitary gland. Cushing disease is rare, with only two to three cases per million annually, Dr. Fleseriu said. Other causes of CS are ectopic (caused by neuroendocrine tumors) or adrenal. CS affects primarily females and typically has an onset between ages 20 and 50 years, depending on the CS type.

Diagnosis of CS is “substantially delayed for most patients, due to metabolic syndrome phenotypic overlap and lack of a single pathognomonic symptom,” according to Dr. Fleseriu.

An accurate diagnosis should be on the basis of signs and symptoms, biochemical screening, other laboratory testing, and diagnostic imaging.
 

Look for Clinical Signs and Symptoms of CS

“CS mostly presents as a combination of two or more features,” Dr. Fleseriu stated. These include increased fat pads (in the face, neck, and trunk), skin changes, signs of protein catabolism, growth retardation and body weight increase in children, and metabolic dysregulations (Table).



“Biochemical screening should be performed in patients with a combination of symptoms, and therefore an increased pretest probability for CS,” Dr. Fleseriu advised.

A CS diagnosis requires not only biochemical confirmation of hypercortisolemia but also determination of the underlying cause of the excess endogenous cortisol production. This is a key step, as the management of CS is specific to its etiology.

Elevated plasma cortisol alone is insufficient for diagnosing CS, as several conditions can be associated with physiologic, nonneoplastic endogenous hypercortisolemia, according to the 2021 updated CS guidelines for which Dr. Fleseriu served as a coauthor. These include depression, alcohol dependence, glucocorticoid resistance, obesity, diabetes, pregnancy, prolonged physical exertion, malnutrition, and cortisol-binding globulin excess.

The diagnosis begins with the following screening tests:

  • Late-night salivary cortisol (LNSC) to assess an abnormal circadian rhythm

According to the 2021 guideline, this is “based on the assumption that patients with CS lose the normal circadian nadir of cortisol secretion.”

  • Overnight 1-mg dexamethasone suppression test (DST) to assess impaired glucocorticoid feedback

The authors noted that in healthy individuals, a supraphysiologic dexamethasone dose inhibits vasopressin and adrenocorticotropic hormone (ACTH) secretion, leading to decreased cortisol concentration. Cortisol concentrations of < 1.8 μg/dL in the morning (after administration of the dexamethasone between 11 p.m. and midnight) are considered “normal,” and a negative result “strongly predicts” the absence of CS. But false-positive and false-negative results can occur. Thus, “it is imperative that first-line testing is selected on the basis of physiologic conditions and drug intake — for example, use of CYP3A4/5 inhibitors or stimulators and oral estrogen — as well as laboratory quality control measures, and special attention to night shift workers,” Dr. Fleseriu emphasized.

  • A 24-hour urinary free cortisol (UFC) test to assess increased bioavailable cortisol

The guideline encourages conducting several 24-hour urine collections to account for intra-patient variability.

Dr. Fleseriu recommended utilizing at least two of the three screening tests, all of which have reasonable sensitivity and specificity.

“Two normal test results usually exclude the presence of CS, except in rare cyclic CS,” she added.
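The screening logic above (run at least two of the three first-line tests; two normal results generally exclude CS, two abnormal results warrant confirmatory workup) can be sketched as a simple triage function. This is an illustrative outline of the decision flow only; the lab cutoffs that determine whether each test is "abnormal" are not encoded here, and real decisions rest with the clinician.

```python
# Illustrative sketch of the "at least two of three" screening logic.
# Inputs are assumed to already reflect lab cutoffs (True = abnormal result).
def screen_for_cs(results: dict) -> str:
    """results maps test name ('LNSC', 'DST', 'UFC') -> True if abnormal.

    Returns a triage suggestion, not a diagnosis."""
    if len(results) < 2:
        return "insufficient testing: perform at least two of LNSC, DST, UFC"
    abnormal = sum(results.values())
    normal = len(results) - abnormal
    if normal >= 2:
        # Two normal results usually exclude CS, except rare cyclic CS.
        return "CS unlikely (consider cyclic CS if suspicion remains high)"
    if abnormal >= 2:
        return "proceed to confirmatory testing and ACTH measurement"
    return "discordant results: repeat or add the third screening test"

print(screen_for_cs({"LNSC": True, "DST": True}))
```

The discordant branch reflects the article's advice to use at least two tests: a single abnormal result is not enough to confirm, and a single normal result is not enough to exclude.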
 

Conduct Additional Laboratory Testing

Additional laboratory abnormalities suggestive of CS include:

  • Increased leukocytes with decreased lymphocytes, eosinophils, monocytes, and basophils
  • Elevated glucose and insulin levels
  • Hypokalemia
  • Increased triglycerides and total cholesterol levels
  • Elevated liver enzymes
  • Changes in activated thromboplastin time and plasma concentrations of pro- and anticoagulant factors
  • Hypercalciuria, hypocalcemia (rare), hypophosphatemia, decreased phosphate maximum resorption, and increased alkaline phosphatase activity

Dr. Fleseriu noted that, in most cases, a final CS diagnosis can be reached after confirmation of biochemical hypercortisolism, which is done after an initial positive screening test.

She added that plasma ACTH levels are “instrumental” in distinguishing ACTH-dependent forms of CS — such as Cushing disease and ectopic CS — from adrenal cases. Bilateral inferior petrosal sinus sampling is necessary in ACTH-dependent CS.
 

Utilize Diagnostic Imaging

There are several diagnostic imaging techniques that localize the origin of the hypercortisolism, thus informing the course of treatment.

  • Pituitary MRI to detect corticotropin-secreting corticotroph adenomas, which are typically small lesions (< 6 mm in diameter)
  • CT evaluation of the neck, thoracic cavity, and abdomen to diagnose ectopic CS, including lung neuroendocrine tumors and bronchial neuroendocrine tumors
  • Cervical and thyroid ultrasonography to identify primary or metastatic medullary thyroid carcinoma, and PET scans, which have greater sensitivity in detecting tumors, compared with CT scans
  • Contrast-enhanced CT scans to detect adrenal adenomas and adrenocortical carcinomas

Management of CS

“The primary aim of treatment is eucortisolemia, and in those with endogenous CS, complete surgical resection of the underlying tumor is the primary method,” Dr. Fleseriu said.

It’s critical to monitor for biochemical remission following surgery, utilizing 24-hour UFC, LNSC, and DST “because clinical manifestations may lag behind biochemical evidence.”

In Cushing disease, almost half of patients will have either persistent or recurrent hypercortisolemia after surgery. In those cases, individualized adjuvant treatments are recommended. These include repeat surgery, bilateral adrenalectomy, radiation, or medical treatments, including pituitary-directed drugs, adrenal steroidogenesis inhibitors, or glucocorticoid receptor-blocking agents. The last two groups are used for other types of CS.

Dr. Fleseriu pointed out that CS is “associated with increased metabolic, cardiovascular, psychiatric, infectious, and musculoskeletal morbidity, which are only partially reversible with successful [CS] treatment.” These comorbidities need to be addressed via individualized therapies. Moreover, long-term mortality is increased in all forms of CS. Thus, patients require lifelong follow-up to detect recurrence at an early stage and to treat comorbidities.

“It is likely that delayed diagnosis might explain the long-term consequences of CS, including increased morbidity and mortality despite remission,” she said.

Familiarity with the presenting signs and symptoms of CS and ordering recommended screening and confirmatory tests will enable appropriate management of the condition, leading to better outcomes.

Dr. Fleseriu reported receiving research grants from Sparrow Pharmaceuticals to Oregon Health and Science University as principal investigator and receiving occasional fees for scientific consulting/advisory boards from Sparrow Pharmaceuticals, Recordati Rare Diseases Inc., and Xeris Biopharma Holdings Inc.
 

A version of this article first appeared on Medscape.com.


Today, the term “moon face” has been replaced with “round face,” which is considered more encompassing and culturally sensitive, said Maria Fleseriu, MD, professor of medicine and neurological surgery and director of the Pituitary Center at Oregon Health and Science University in Portland, Oregon.

Facial roundness can lead clinicians to be suspicious that their patient is experiencing CS. But because a round face is associated with several other conditions, it’s important to be familiar with its particular presentation in CS, as well as how to diagnose and treat CS.
 

Pathophysiology of CS

Dr. Fleseriu defined CS as “prolonged nonphysiologic increase in cortisol, due either to exogenous use of steroids (oral, topical, or inhaled) or to excess endogenous cortisol production.” She added that it’s important “to always exclude exogenous causes before conducting a further workup to determine the type and cause of cortisol excess.”

Cushing disease is an endogenous form of CS caused by a corticotroph adenoma of the pituitary gland. Cushing disease is rare, with only two to three cases per million annually, Dr. Fleseriu said. Other causes of CS are ectopic (caused by neuroendocrine tumors) or adrenal. CS affects primarily females and typically has an onset between ages 20 and 50 years, depending on the CS type.

Diagnosis of CS is “substantially delayed for most patients, due to metabolic syndrome phenotypic overlap and lack of a single pathognomonic symptom,” according to Dr. Fleseriu.

An accurate diagnosis should be based on signs and symptoms, biochemical screening, other laboratory testing, and diagnostic imaging.
 

Look for Clinical Signs and Symptoms of CS

“CS mostly presents as a combination of two or more features,” Dr. Fleseriu stated. These include increased fat pads (in the face, neck, and trunk), skin changes, signs of protein catabolism, growth retardation and body weight increase in children, and metabolic dysregulations (Table).



“Biochemical screening should be performed in patients with a combination of symptoms, and therefore an increased pretest probability for CS,” Dr. Fleseriu advised.

A CS diagnosis requires not only biochemical confirmation of hypercortisolemia but also determination of the underlying cause of the excess endogenous cortisol production. This is a key step, as the management of CS is specific to its etiology.

Elevated plasma cortisol alone is insufficient for diagnosing CS, as several conditions can be associated with physiologic, nonneoplastic endogenous hypercortisolemia, according to the 2021 updated CS guidelines for which Dr. Fleseriu served as a coauthor. These include depression, alcohol dependence, glucocorticoid resistance, obesity, diabetes, pregnancy, prolonged physical exertion, malnutrition, and cortisol-binding globulin excess.

The diagnosis begins with the following screening tests:

  • Late-night salivary cortisol (LNSC) to assess an abnormal circadian rhythm

According to the 2021 guideline, this is “based on the assumption that patients with CS lose the normal circadian nadir of cortisol secretion.”

  • Overnight 1-mg dexamethasone suppression test (DST) to assess impaired glucocorticoid feedback

The authors noted that in healthy individuals, a supraphysiologic dexamethasone dose inhibits vasopressin and adrenocorticotropic hormone (ACTH) secretion, leading to decreased cortisol concentration. A morning cortisol concentration of < 1.8 μg/dL (after administration of the dexamethasone between 11 p.m. and midnight) is considered “normal,” and a negative result “strongly predicts” the absence of CS. But false-positive and false-negative results can occur. Thus, “it is imperative that first-line testing is selected on the basis of physiologic conditions and drug intake — for example, use of CYP3A4/5 inhibitors or inducers and oral estrogen — as well as laboratory quality control measures, and special attention to night shift workers,” Dr. Fleseriu emphasized.

  • A 24-hour urinary free cortisol (UFC) test to assess increased bioavailable cortisol

The guideline encourages conducting several 24-hour urine collections to account for intra-patient variability.

Dr. Fleseriu recommended utilizing at least two of the three screening tests, all of which have reasonable sensitivity and specificity.

“Two normal test results usually exclude the presence of CS, except in rare cyclic CS,” she added.
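
Purely as an illustration of the decision logic described above (not clinical guidance), the screening approach — positive results on the first-line tests prompting confirmation, two normal results usually excluding CS except in rare cyclic CS — can be sketched as a simple rule. The function name and return strings are invented for this sketch:

```python
def screen_for_cs(lnsc_abnormal, dst_abnormal, ufc_abnormal):
    """Illustrative sketch only, not clinical guidance.

    Inputs are results of the three first-line screening tests
    (late-night salivary cortisol, 1-mg dexamethasone suppression,
    24-hour urinary free cortisol), True if abnormal.
    """
    abnormal = sum([lnsc_abnormal, dst_abnormal, ufc_abnormal])
    if abnormal >= 2:
        return "proceed to biochemical confirmation"
    if abnormal == 0:
        return "CS excluded in most cases (consider rare cyclic CS)"
    return "discordant results: repeat testing or add another test"
```

For example, an abnormal LNSC plus an abnormal DST would return "proceed to biochemical confirmation", while all-normal results would point away from CS.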
 

Conduct Additional Laboratory Testing

Additional laboratory abnormalities suggestive of CS include:

  • Increased leukocytes with decreased lymphocytes, eosinophils, monocytes, and basophils
  • Elevated glucose and insulin levels
  • Hypokalemia
  • Increased triglycerides and total cholesterol levels
  • Elevated liver enzymes
  • Changes in activated thromboplastin time and plasma concentrations of pro- and anticoagulant factors
  • Hypercalciuria, hypocalcemia (rare), hypophosphatemia, decreased phosphate maximum resorption, and increased alkaline phosphatase activity

Dr. Fleseriu noted that, in most cases, a final CS diagnosis can be reached after confirmation of biochemical hypercortisolism, which is done after an initial positive screening test.

She added that plasma ACTH levels are “instrumental” in distinguishing ACTH-dependent forms of CS — such as Cushing disease and ectopic CS — from adrenal cases. Bilateral inferior petrosal sinus sampling is necessary in ACTH-dependent CS.
 

Utilize Diagnostic Imaging

There are several diagnostic imaging techniques that localize the origin of the hypercortisolism, thus informing the course of treatment.

  • Pituitary MRI to detect ACTH-secreting pituitary (corticotroph) adenomas, which are typically small lesions (< 6 mm in diameter)
  • CT evaluation of the neck, thoracic cavity, and abdomen to diagnose ectopic CS, including bronchial and other lung neuroendocrine tumors
  • Cervical and thyroid ultrasonography to identify primary or metastatic medullary thyroid carcinoma, and PET scans, which have greater sensitivity in detecting tumors compared with CT scans
  • Contrast-enhanced CT scans to detect adrenal adenomas and adrenocortical carcinomas

Management of CS

“The primary aim of treatment is eucortisolemia, and in those with endogenous CS, complete surgical resection of the underlying tumor is the primary method,” Dr. Fleseriu said.

It’s critical to monitor for biochemical remission following surgery, utilizing 24-hour UFC, LNSC, and DST “because clinical manifestations may lag behind biochemical evidence.”

In Cushing disease, almost half of patients will have either persistent or recurrent hypercortisolemia after surgery. In those cases, individualized adjuvant treatments are recommended. These include repeat surgery, bilateral adrenalectomy, radiation, or medical treatments, including pituitary-directed drugs, adrenal steroidogenesis inhibitors, or glucocorticoid receptor-blocking agents. The last two drug classes are also used for other types of CS.

Dr. Fleseriu pointed out that CS is “associated with increased metabolic, cardiovascular, psychiatric, infectious, and musculoskeletal morbidity, which are only partially reversible with successful [CS] treatment.” These comorbidities need to be addressed via individualized therapies. Moreover, long-term mortality is increased in all forms of CS. Thus, patients require lifelong follow-up to detect recurrence at an early stage and to treat comorbidities.

“It is likely that delayed diagnosis might explain the long-term consequences of CS, including increased morbidity and mortality despite remission,” she said.

Familiarity with the presenting signs and symptoms of CS and ordering recommended screening and confirmatory tests will enable appropriate management of the condition, leading to better outcomes.

Dr. Fleseriu reported receiving research grants from Sparrow Pharmaceuticals to Oregon Health and Science University as principal investigator and receiving occasional fees for scientific consulting/advisory boards from Sparrow Pharmaceuticals, Recordati Rare Diseases Inc., and Xeris Biopharma Holdings Inc.
 

A version of this article first appeared on Medscape.com.


Physicians Lament Over Reliance on Relative Value Units: Survey

Article Type
Changed
Fri, 08/23/2024 - 12:54

Most physicians oppose the way standardized relative value units (RVUs) are used to determine performance and compensation, according to Medscape’s 2024 Physicians and RVUs Report. About 6 in 10 survey respondents were unhappy with how RVUs affected them financially, while 7 in 10 said RVUs were poor measures of productivity.

The report analyzed 2024 survey data from 1005 practicing physicians who earn RVUs.

“I’m already mad that the medical field is controlled by health insurers and what they pay and authorize,” said an anesthesiologist in New York. “Then [that approach] is transferred to medical offices and hospitals, where physicians are paid by RVUs.”

Most physicians surveyed produced between 4000 and 8000 RVUs per year. Roughly one in six were high RVU generators, generating more than 10,000 annually.

In most cases, the metric influences earning potential — 42% of doctors surveyed said RVUs affect their salaries to some degree. One quarter said their salary was based entirely on RVUs. More than three fourths of physicians who received performance bonuses said they must meet RVU targets to do so.
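
The pay mechanics the survey describes (salary partly or wholly RVU based, with bonuses gated on RVU targets) can be made concrete with a deliberately simplified model. Every number below is invented for illustration and does not come from the report:

```python
def annual_compensation(rvus, base=150_000.0, rate_per_rvu=40.0,
                        bonus_target=8_000, bonus=20_000.0):
    """Hypothetical RVU-based pay model: base salary plus productivity
    pay per RVU, with a bonus paid only if the RVU target is met.
    All default figures are invented for illustration."""
    pay = base + rvus * rate_per_rvu
    if rvus >= bonus_target:  # bonus gated on hitting the RVU target
        pay += bonus
    return pay
```

Under these made-up numbers, a physician producing 8000 RVUs would earn 150,000 + 320,000 + 20,000 = 490,000, while one producing 4000 RVUs would earn 310,000, which illustrates the volume pressure respondents describe.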

“The current RVU system encourages unnecessary procedures, hurting patients,” said an orthopedic surgeon in Maine.

Nearly three fourths of practitioners surveyed said they occasionally to frequently felt pressure to take on more patients as a result of this system.

“I know numerous primary care doctors and specialists who have been forced to increase patient volume to meet RVU goals, and none is happy about it,” said Alok Patel, MD, a pediatric hospitalist with Stanford Hospital in Palo Alto, California. “Plus, patients are definitely not happy about being rushed.”

More than half of respondents said they occasionally or frequently felt compelled by their employer to use higher-level coding, which interferes with a physician’s ethical responsibility to the patient, said Arthur L. Caplan, PhD, a bioethicist at NYU Langone Medical Center in New York City.

“Rather than rewarding excellence or good outcomes, you’re kind of rewarding procedures and volume,” said Dr. Caplan. “It’s more than pressure; it’s expected.”

Nearly 6 in 10 physicians said that the method for calculating reimbursements was unfair. Almost half said that they weren’t happy with how their workplace uses RVUs.

A few respondents said that their RVU model, which is often based on what Dr. Patel called an “overly complicated algorithm,” did not account for the time spent on tasks or the fact that some patients miss appointments. RVUs also rely on factors outside the control of a physician, such as location and patient volume, said one doctor.

The model can also lower the level of care patients receive, Dr. Patel said.

“I know primary care doctors who work in RVU-based systems and simply cannot take the necessary time — even if it’s 30-45 minutes — to thoroughly assess a patient, when the model forces them to take on 15-minute encounters.”

Finally, over half of clinicians said alternatives to the RVU system would be more effective, and 77% suggested including qualitative data. One respondent recommended incorporating time spent doing paperwork and communicating with patients, complexity of conditions, and medication management.

A version of this article first appeared on Medscape.com.



After Remission Failure in Early RA, Adding Etanercept No Better Than Adding Leflunomide

Article Type
Changed
Fri, 08/23/2024 - 12:49

 

TOPLINE:

Treatment with etanercept led to faster disease control initially in patients with early rheumatoid arthritis (RA) who had an insufficient early response to methotrexate and bridging glucocorticoid therapy, but more patients achieved disease control with leflunomide at 104 weeks.

METHODOLOGY:

  • Researchers conducted CareRA2020, a randomized controlled trial including 276 patients with early RA who were initially treated with oral methotrexate 15 mg/wk and a step-down prednisone scheme, with early insufficient responders (n = 110) randomized to add etanercept 50 mg/wk or leflunomide 10 mg/d for 24 weeks.
  • Patients were classified as early insufficient responders if they did not achieve a 28-joint Disease Activity Score with C-reactive protein (DAS28-CRP) < 3.2 between weeks 8 and 32 or < 2.6 at week 32, despite an increase in methotrexate dose to 20 mg/wk.
  • The primary outcome was the longitudinal disease activity measured by DAS28-CRP over 104 weeks.
  • The secondary outcomes included disease control at 28 weeks post randomization and the use of biologic or targeted synthetic disease-modifying antirheumatic drugs at week 104.
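
Restated in code form, the early-insufficient-responder rule in the second bullet looks roughly like the predicate below. This is only a sketch of the stated criteria, assuming DAS28-CRP values are available for each visit between weeks 8 and 32:

```python
def early_insufficient_responder(das28_crp_w8_to_w32, das28_crp_w32):
    """Sketch of the CareRA2020 rule as described: a patient responds
    sufficiently only if DAS28-CRP stayed < 3.2 between weeks 8 and 32
    and was < 2.6 at week 32; otherwise they were randomized to add
    etanercept or leflunomide."""
    low_disease_activity = all(score < 3.2 for score in das28_crp_w8_to_w32)
    remission_at_w32 = das28_crp_w32 < 2.6
    return not (low_disease_activity and remission_at_w32)
```

For instance, a patient with scores of 2.9 and 3.0 during weeks 8-32 and 2.4 at week 32 would count as a sufficient responder; a single score of 3.6 in that window would flag insufficient response.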

TAKEAWAY:

  • Early introduction of etanercept in patients with RA did not show long-term superiority over leflunomide in disease control over 2 years (P = .157).
  • At 28 weeks post randomization, the percentage of patients who achieved a DAS28-CRP < 2.6 was higher in the etanercept group than in the leflunomide group (59% vs 44%).
  • After stopping etanercept, disease activity scores worsened, and a lower proportion of patients achieved DAS28-CRP < 2.6 in the etanercept group than in the leflunomide group (55% vs 69%) at week 104.
  • Even after treatment with etanercept or leflunomide, the 110 early insufficient responders never reached the same level of disease control as the 142 patients who responded to methotrexate and bridging glucocorticoids within weeks 8-32.

IN PRACTICE:

“The CareRA2020 trial did not completely solve the unmet need of patients responding insufficiently to conventional initial therapy for early RA, but it provides opportunities to further optimize the treatment approach in this population, for instance, by focusing on the identification of potential subgroups with different disease activity trajectories within the early insufficient responder group,” wrote the authors.

SOURCE:

The study was led by Delphine Bertrand of the Skeletal Biology and Engineering Research Center in the Department of Development and Regeneration at KU Leuven in Belgium, and was published online on August 7, 2024, in RMD Open.

LIMITATIONS:

The open-label design of the study may have introduced bias, as patients and investigators were aware of the treatment. The temporary administration of etanercept may not have reflected its long-term effects. The study was conducted in Belgium, which limited the generalizability of the findings to other populations.

DISCLOSURES:

The study was supported by the Belgian Health Care Knowledge Centre. Some authors reported serving as speakers or receiving grants, consulting fees, honoraria, or meeting or travel support from Novartis, Pfizer, Amgen, and other pharmaceutical companies.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.

A version of this article first appeared on Medscape.com.



The Most Misinterpreted Study in Medicine: Don’t be TRICCed

Article Type
Changed
Tue, 08/27/2024 - 09:31

Ah, blood. That sweet nectar of life that quiets angina, abolishes dyspnea, prevents orthostatic syncope, and quells sinus tachycardia. As a cardiologist, I am an unabashed hemophile. 

But we liberal transfusionists are challenged on every request for consideration of transfusion. Whereas the polite may resort to whispered skepticism, vehement critics respond with scorn as if we’d asked them to burn aromatic herbs or fetch a bucket of leeches. And to what do we owe this pathological angst? The broad and persistent misinterpretation of the pesky TRICC trial (N Engl J Med. 1999;340:409-417). You know, the one that should have been published with a boxed warning: “Misinterpretation of this trial could result in significant harm.” 
 

Point 1: Our Actively Bleeding Patient is Not a TRICC Patient. 

Published in 1999, the TRICC trial enrolled critically ill anemic patients older than 16 years who were stable after fluid resuscitation and were not actively bleeding. They had a hemoglobin level < 9 g/dL and were expected to stay in the intensive care unit (ICU) for more than 24 hours. They were randomly assigned to either a conservative transfusion trigger of < 7 g/dL or a liberal threshold of < 10 g/dL. Mortality at 30 days was lower with the conservative approach (18.7% vs 23.3%), but the difference was not statistically significant (P = .11). The pattern was similar for the secondary endpoints of inpatient mortality (22.2% vs 28.1%; P = .05) and ICU mortality (13.9% vs 16.2%; P = .29). 
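Those 30-day figures can be sanity-checked with a simple pooled two-proportion z-test. This is only a rough sketch: the group sizes (418 restrictive, 420 liberal) come from the published trial, the event counts are back-calculated from the quoted 18.7% and 23.3%, and the trial's own test (likely a continuity-corrected chi-square) yields the slightly larger P = .11.

```python
import math

def two_proportion_p(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided P value for a pooled two-proportion z-test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    # 2 * (1 - Phi(z)) expressed via the complementary error function
    return math.erfc(z / math.sqrt(2))

# 78/418 = 18.7% (restrictive), 98/420 = 23.3% (liberal) -- counts inferred
p = two_proportion_p(78, 418, 98, 420)
print(round(p, 2))  # ~0.10, the same nonsignificant ballpark as P = .11
```

Either way the primary comparison does not cross the conventional .05 threshold, which is exactly the point the paragraph above makes.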

One must admit that these P values are not impressive, and the authors’ own conclusion counseled caution: “A restrictive strategy ... is at least as effective as and possibly superior to a liberal transfusion strategy in critically ill patients, with the possible exception of patients with acute myocardial infarction and unstable angina.” 
 

Point 2: Our Critically Ill Cardiac Patient is Unlikely to be a “TRICC” Patient.

Another criticism of TRICC is that only 13% of those assessed and 26% of those eligible were enrolled, mostly owing to physician refusal. Only 26% of enrolled patients had cardiac disease. This makes the TRICC population highly selected and not representative of typical ICU patients. 

To prove my point that the edict against higher transfusion thresholds can be dangerous, I’ll describe my most recent encounter with TRICC trial misinterpretation. 
 

A Case in Point

The patient, Mrs. Kemp,* is 79 years old and has been on aspirin for years following coronary stent placement. One evening, she began spurting bright red blood from her rectum, interrupted only briefly by large clots the consistency of jellied cranberries. When she arrived at the hospital, she was hemodynamically stable, with a hemoglobin level of 10 g/dL, down from her usual 12 g/dL. That level bolstered the confidence of her provider, who insisted that she be managed conservatively. 

Mrs. Kemp was transferred to the ward, where she continued to bleed briskly. Over the next 2 hours, her hemoglobin level dropped to 9 g/dL, then 8 g/dL. Her daughter, a healthcare worker, requested a transfusion. The answer was, wait for it — the well-scripted, somewhat patronizing oft-quoted line, “The medical literature states that we need to wait for a hemoglobin level of 7 g/dL before we transfuse.” 

Later that evening, Mrs. Kemp’s systolic blood pressure dropped to the upper 80s, despite her usual hypertension. The provider was again comforted by the fact that she was not tachycardic (she had a pacemaker and was on bisoprolol). The next morning, Mrs. Kemp felt the need to defecate and was placed on the bedside commode and left to her privacy. Predictably, she became dizzy and experienced frank syncope. Thankfully, she avoided a hip fracture or worse. A stat hemoglobin returned at 6 g/dL. 

Her daughter said she literally heard the hallelujah chorus because her mother’s hemoglobin was finally below that much revered and often misleading threshold of 7 g/dL. Finally, there was an order for platelets and packed red cells. Five units later, Mrs. Kemp achieved a hemoglobin of 8 g/dL and survived. Two more units and she was soaring at 9 g/dL! 

Lessons for Transfusion Conservatives

There are many lessons here. 

The TRICC study found that hemodynamically stable, asymptomatic patients who are not actively bleeding may well tolerate a hemoglobin level of 7 g/dL. But a patient with bright red blood actively pouring from an orifice and a rapidly declining hemoglobin level isn’t one of those people. Additionally, a patient who faints from hypovolemia is not one of those people. 

Patients with a history of bleeding presenting with new resting sinus tachycardia (in those who have chronotropic competence) should be presumed to be actively bleeding, and the findings of TRICC do not apply to them. Patients who have bled buckets on anticoagulant or antiplatelet therapies and have dropped their hemoglobin will probably continue to ooze and should be subject to a low threshold for transfusion. 

Additionally, anemic people who are hemodynamically stable but can’t walk without new significant shortness of air or new rest angina need blood, and sometimes at hemoglobin levels higher than generally accepted by conservative strategists. Finally, failing to treat or at least monitor patients who are spontaneously bleeding as aggressively as some trauma patients is a failure to provide proper medical care. 

The vast majority of my healthcare clinician colleagues are competent, compassionate individuals who can reasonably discuss the nuances of any medical scenario. One important distinction of a good medical team is the willingness to change course based on a change in patient status or the presentation of what may be new information for the provider. 

But those proud transfusion conservatives who will not budge until their threshold is met need to make certain their patient is truly subject to their supposed edicts. Our blood banks should not be more difficult to access than Fort Knox, and transfusion should be used appropriately and liberally in the hemodynamically unstable, the symptomatic, and active brisk bleeders. 

I beg staunch transfusion conservatives to consider how they might feel if someone stuck a magic spigot in their brachial artery and acutely drained their hemoglobin to that magic threshold of 7 g/dL. When syncope, shortness of air, fatigue, and angina find them, they may generate empathy for those who need transfusion. Might that do the TRICC? 

*Some details have been changed to conceal the identity of the patient, but the essence of the case has been preserved.

Dr. Walton-Shirley, a native Kentuckian who retired from full-time invasive cardiology and now does locums work in Montana, is a champion of physician rights and patient safety. She has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.
