Antipsychotic administration fails to treat delirium in hospitalized adults
Background: Delirium is a common disorder in hospitalized adults and is associated with poor outcomes. Antipsychotics are used clinically to treat delirium, but benefits and harms remain unclear.
Study design: A systematic review evaluating treatment of delirium in 16 randomized, controlled trials (RCTs) of antipsychotics vs. placebo or other antipsychotics, as well as 10 prospective observational studies reporting harm.
Setting: Data obtained from PubMed, Embase, CENTRAL, CINAHL, and PsycINFO from inception to July 2019 without language restrictions.
Synopsis: For 5,607 adult inpatients, treatment of delirium with haloperidol showed no difference in sedation status, duration of delirium, hospital length of stay, or mortality when compared with second-generation antipsychotics or placebo (low and moderate strength of evidence). Regarding second-generation antipsychotics versus haloperidol, no difference was found in delirium severity and cognitive function (low strength of evidence). Direct comparisons between second-generation antipsychotics showed no difference in mortality.
Limitations include heterogeneous use of agents, routes, doses, and measurement tools, which limits the generalizability of the evidence. Multiple RCTs excluded patients with underlying cardiac and neurologic conditions, which likely led to underrepresentation of harms encountered in routine use. Evidence remains insufficient for multiple clinically relevant outcomes, including long-term cognitive function.
Bottom line: Evidence from several studies does not support the use of haloperidol or newer antipsychotics to treat delirium.
Citation: Nikooie R et al. Antipsychotics for delirium treatment in adults: A systematic review. Ann Intern Med. 2019 Oct 1;171(7):485-95.
Dr. Berry is assistant professor of medicine, hospital medicine, at the Rocky Mountain Veterans Affairs Regional Medical Center, Aurora, Colo.
DDSEP® 9 Quick Quiz
Q2. A 26-year-old woman who is 7 weeks pregnant presents with nausea and vomiting. She describes nausea that lasts most of the day, with vomiting. She has tried rest and hydration, ginger supplementation, and a wrist band she purchased over the counter. However, she comes to clinic to request further management.
Correct answer: E
Rationale
This patient has nausea and vomiting of pregnancy (NVP) and has tried conservative management. Doxylamine and vitamin B6 have been found to be safe and effective for NVP and are considered first-line therapy. Further testing with a gastric-emptying study is not necessary because NVP has a high prevalence at weeks 4-6 of gestation and peaks at weeks 9-16. A nuclear test such as gastric emptying is not appropriate during pregnancy, though decreased gastric emptying due to estrogen and progesterone is thought to be related to NVP. Upper endoscopy would be considered if the nausea and vomiting are refractory. Ondansetron can be considered, but questions have been raised about its safety, and it is not considered first line. Meals high in protein have been found to decrease nausea more than carbohydrate-rich meals.
Reference
ACOG Committee on Practice Bulletins-Obstetrics. Obstet Gynecol. 2018 Jan;131(1):e15-e30.
DDSEP® 9 Quick Quiz
Q1. Correct answer: E
Rationale
Transient lower esophageal sphincter relaxation (TLESR) is a physiologic phenomenon that allows venting of swallowed air from the stomach in response to distension of the proximal stomach. Patients with GERD typically reflux gastric content through a compliant esophagogastric junction into the esophagus during a TLESR; the frequency of TLESRs may also be higher in patients with GERD. TLESRs are suppressed during deep sleep and are less frequent when LES relaxation is abnormal (e.g., esophageal outflow obstruction). Baclofen, a GABA-B receptor agonist, can reduce TLESR frequency and can reduce reflux episodes in patients with GERD. Obese patients and those with obstructive sleep apnea can have an increased frequency of TLESRs. The frequency of TLESRs is not related to the degree of gastric acid secretion in the stomach.
References
Kuribayashi S et al. Neurogastroenterol Motil. 2010 Jun;22(6):611-e172.
Hershcovici T et al. Neurogastroenterol Motil. 2011 Sep;23(9):819-30.
COVID-19 may alter gut microbiota
COVID-19 infection altered the gut microbiota of adult patients and caused depletion of several types of bacteria with known immunomodulatory properties, based on data from a cohort study of 100 patients with confirmed COVID-19 infections from two hospitals.
“As the GI tract is the largest immunological organ in the body and its resident microbiota are known to modulate host immune responses, we hypothesized that the gut microbiota is associated with host inflammatory immune responses in COVID-19,” wrote Yun Kit Yeoh, PhD, of the Chinese University of Hong Kong, and colleagues.
In a study published in Gut, the researchers investigated patient microbiota by collecting blood, stool, and patient records between February and May 2020 from 100 confirmed SARS-CoV-2–infected patients in Hong Kong during hospitalization, as well as follow-up stool samples from 27 patients up to 30 days after they cleared the COVID-19 virus; these observations were compared with 78 non–COVID-19 controls.
Overall, 274 stool samples were sequenced, and samples collected from patients during hospitalization for COVID-19 were compared with those from non–COVID-19 controls. The relative abundance of the phylum Bacteroidetes was significantly higher in COVID-19 patients than in controls (23.9% vs. 12.8%; P < .001), as was that of Actinobacteria (26.1% vs. 19.0%; P < .001).
After controlling for antibiotics, the investigators found that “differences between cohorts were primarily linked to enrichment of taxa such as Parabacteroides, Sutterella wadsworthensis, and Bacteroides caccae and depletion of Adlercreutzia equolifaciens, Dorea formicigenerans, and Clostridium leptum in COVID-19 relative to non-COVID-19” (P < .05). In addition, Faecalibacterium prausnitzii and Bifidobacterium bifidum were negatively correlated with COVID-19 severity after investigators controlled for patient age and antibiotic use (P < .05).
The researchers also examined bacteria in COVID-19 patients and controls in the context of cytokines and other inflammatory markers. “We hypothesized that these compositional changes play a role in exacerbating disease by contributing to dysregulation of the immune response,” they said.
In fact, species depleted in COVID-19 patients, including B. adolescentis, E. rectale, and F. prausnitzii, were negatively correlated with inflammatory markers including CXCL10, IL-10, TNF-alpha, and CCL2.
In addition, 42 stool samples from 27 patients showed significantly distinct gut microbiota from controls up to 30 days (median, 6 days) after virus clearance, regardless of antibiotic use (P < .05), the researchers said.
Long-term data needed
The researchers noted several limitations: heterogeneous patient management in the clinical setting may have confounded the microbial signatures associated with COVID-19; the gut microbiota may simply reflect a patient’s overall health without influencing disease severity; and data on the role of antibiotics in severe and critical patients were lacking. In addition, “gut microbiota composition is highly heterogeneous across human populations and changes in compositions reported here may not necessarily be reflected in patients with COVID-19 from other biogeographies,” they wrote.
The “longer follow-up of patients with COVID-19 (e.g., 3 months to 1 year after clearing the virus) is needed to address questions related to the duration of gut microbiota dysbiosis post recovery, link between microbiota dysbiosis and long-term persistent symptoms, and whether the dysbiosis or enrichment/depletion of specific gut microorganisms predisposes recovered individuals to future health problems,” they wrote.
However, the results suggest a likely role for gut microorganisms in host inflammatory responses to COVID-19 infection, and “underscore an urgent need to understand the specific roles of gut microorganisms in human immune function and systemic inflammation,” they concluded.
More than infectious
“A growing body of evidence suggests that severity of illness from COVID-19 is largely determined by the patient’s aberrant immune response to the virus,” Jatin Roper, MD, of Duke University, Durham, N.C., said in an interview. “Therefore, a critical question is: What patient factors determine this immune response? The gut microbiota closely interact with the host immune system and are altered in many immunological diseases,” he said. “Furthermore, the SARS-CoV-2 virus infects enterocytes in the intestine and causes symptomatic gastrointestinal disease in a subset of patients. Therefore, understanding a possible association between gut microbiota and COVID-19 may reveal microbial species involved in disease pathogenesis,” he emphasized.
In the current study, “I was surprised to find that COVID-19 infection is associated with depletion of immunomodulatory gut bacteria,” said Dr. Roper. “An open question is whether these changes are caused by the SARS-CoV-2 virus and then result in altered immune response. Alternatively, the changes in gut microbiota may be a result of the immune response or other changes associated with the disease,” he said.
“COVID-19 is an immunological disease, not just an infectious disease,” explained Dr. Roper. “The gut microbiota may play an important role in the pathogenesis of the disease. Thus, specific gut microbes could one day be analyzed to risk stratify patients, or even modified to treat the disease,” he noted.
Beyond COVID-19
“Given the impact of the gut microbiota on health and disease, as well as the impact of diseases on the microbiota, I am not at all surprised to find that there were significant changes in the microbiota of COVID-19 patients and that these changes are associated with inflammatory cytokines, chemokines, and blood markers of tissue damage,” said Anthony Sung, MD, also of Duke University.
According to Dr. Sung, researchers have already been investigating possible connections between gut microbiota and other conditions such as Alzheimer’s disease, and it’s been hypothesized that these connections are mediated by interactions between the gut microbiota and the immune system.
“While this is an important paper in our understanding of COVID-19, and highlights the microbiome as a potential therapeutic target, we need to conduct clinical trials of microbiota-based interventions before we can fully realize the clinical implications of these findings,” he said.
The study was supported by the Health and Medical Research Fund, the Food and Health Bureau, The Government of the Hong Kong Special Administrative Region, and donations from Hui Hoy & Chow Sin Lan Charity Fund Limited, Pine and Crane Company Limited, Mr. Hui Ming, and The D.H. Chen Foundation. The researchers had no financial conflicts to disclose. Dr. Roper and Dr. Sung had no financial conflicts to disclose.
FROM GUT
Test could help patients with pancreatic cysts avoid unneeded surgery
A test that uses machine learning may improve the management of patients with pancreatic cysts, sparing some of them unnecessary surgery, a cohort study suggests.
The test, called CompCyst, integrates clinical, imaging, and biomarker data. It proved more accurate than the current standard of care for correctly determining whether patients should be discharged from follow-up, immediately operated on, or monitored.
Rachel Karchin, PhD, of the Johns Hopkins Whiting School of Engineering in Baltimore, reported these results at the AACR Virtual Special Conference: Artificial Intelligence, Diagnosis, and Imaging (Abstract IA-13).
“Preoperative diagnosis of pancreatic cysts and managing patients who present with a cyst are a clinical conundrum because pancreatic cancer is so deadly, while the decision to surgically resect a cyst is complicated by the danger of the surgery, which has high morbidity and mortality,” Dr. Karchin explained. “The challenge of the diagnostic test is to place patients into one of three groups: those who should be discharged, who should be operated on, and who should be monitored.”
High sensitivity is important for the operate and monitor groups to ensure identification of all patients needing these approaches, whereas higher specificity is important for the discharge group to avoid falsely classifying premalignant cysts, Dr. Karchin said.
She and her colleagues applied machine learning to this classification challenge, using data from 862 patients who had undergone resection of pancreatic cysts at 16 centers in the United States, Europe, and Asia. All patients had a known cyst histopathology, which served as the gold standard, and a known clinical management strategy (discharge, operate, or monitor).
The investigators used a multivariate organization of combinatorial alterations algorithm that integrates clinical features, imaging characteristics, cyst fluid genetics, and serum biomarkers to create classifiers. This algorithm can be trained to maximize sensitivity, maximize specificity, or balance these metrics, Dr. Karchin noted.
The resulting test, CompCyst, was trained using data from 436 of the patients and then validated in the remaining 426 patients.
In the validation cohort, for classifying patients who should be discharged from care, the test had a sensitivity of 46% and a specificity of 100%, according to results reported at the conference and published previously (Sci Transl Med. 2019 Jul 19. doi: 10.1126/scitranslmed.aav4772).
For immediately operating, CompCyst had a sensitivity of 91% and a specificity of 54%. And for monitoring the patient, the test had a sensitivity of 99% and a specificity of 30%.
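The tradeoff between the three classifiers comes down to simple arithmetic on classification counts. The sketch below is illustrative only – the function and the counts are hypothetical, chosen to reproduce the discharge classifier’s reported 46% sensitivity and 100% specificity – and is not code from the study:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: share of true positives correctly identified.
    Specificity: share of true negatives correctly excluded."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts for the "discharge" classifier: of 100 patients
# who truly could be discharged, 46 are flagged (tp) and 54 missed (fn);
# of 100 with premalignant cysts, none is wrongly discharged (fp = 0).
sens, spec = sensitivity_specificity(tp=46, fn=54, tn=100, fp=0)
print(round(sens, 2), round(spec, 2))  # prints: 0.46 1.0
```

The asymmetry is the point: a discharge call tolerates low sensitivity but demands perfect specificity, because the cost of wrongly discharging a premalignant cyst far exceeds the cost of continued monitoring.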
When CompCyst was compared against the standard of care based on conventional clinical and imaging criteria alone, the former was more accurate. CompCyst correctly identified larger shares of patients who should have been discharged (60% vs. 19%) and who should have been monitored (49% vs. 34%), and the test identified a similar share of patients who should have immediately had an operation (91% vs. 89%).
“The takeaway from this is that standard of care is sending too many patients unnecessarily to surgery,” Dr. Karchin commented. “The CompCyst test, with application of the three classifiers sequentially – discharge, operate, or monitor – could reduce unnecessary surgery by 60% or more based on our calculations.”
“While our study was retrospective, it shows promising results in reducing unnecessary surgeries, compared to current standard of care,” she said, adding that a prospective study is planned next.
“In 10-12 weeks, this CompCyst diagnostic test is going to be available at Johns Hopkins for patients. I’m very excited about that,” Dr. Karchin concluded. “We hope that our study shows the potential of combining clinical, imaging, and genetic features with machine learning to improve clinical judgment about many diseases.”
Dr. Karchin disclosed no conflicts of interest. The study was supported by the Lustgarten Foundation for Pancreatic Cancer Research, the Virginia and D.K. Ludwig Fund for Cancer Research, the Sol Goldman Pancreatic Cancer Research Center, the Michael Rolfe Pancreatic Cancer Research Foundation, the Benjamin Baker Scholarship, and the National Institutes of Health.
FROM AACR: AI, DIAGNOSIS, AND IMAGING 2021
Lessons learned from battlefield can help civilian psychiatrists
COVID has changed our world very rapidly. There are good changes, such as cleaner air and the ability to use telehealth widely. But there are devastating changes. As we are all aware, we have lost more than 400,000 people in America, and that number is climbing.
How can we mitigate some of the psychological effects of the pandemic? It is time to bring lessons learned on the battlefield to civilian psychiatrists and health care systems.
Despite having participated in mass casualty drills, no health system was trained or psychologically prepared for this once-in-a-century event.
The military dictum, “train like you fight; fight like you train” falls short considering the speed of viral replication, the serious flaws and disparities in our health care system revealed by COVID-19, and the public’s disturbingly variable adherence to preventive measures.
Like combat troops, health care workers put the needs of others ahead of their own. They suck up strain and step back from their own needs in favor of the mission.
Whether in combat or pandemic, leaders have valuable opportunities to promote the effectiveness of those on the front lines by caring for them. Those in charge may, themselves, be profoundly affected. While other team members focus on defined roles, leaders are forced to deal with many unknowns. They must often act without adequate information or resources.
Some of us have worked at hospitals treating many COVID patients and have been on “the front lines” for almost a year. We are asked a lot of questions, to which we often answer, “I don’t know” or “there are no good choices.”
All leaders work hard to model strength, but a difficult lesson that the military has had to learn is that leaders may strengthen cohesion by showing their grief, modeling self-care, drawing attention to even small successes in the face of overwhelming loss, and, when necessary, finding words for those losses.
Peer support is particularly important in high-stress situations. Mental health providers are uniquely qualified to share information, pick up on signs of severe stress, and provide support at the point of need.
Key elements of this military approach are:
- Confidence in leadership at all levels – requiring visibility (“battlespace circulation”) of leaders who listen and share timely, accurate information.
- Realistic training – especially for those who, because of staff shortages, assume unfamiliar duties.
- Self-care – including regular meals, adequate sleep, and ongoing contact with family and friends. Here, of course, contact should be virtual as much as possible.
- Belief in the Mission – compassion satisfaction is a buffer against burnout.
- Esprit de corps – cohesive teams suffer significantly fewer combat stress casualties.
It is true that these principles have more often been tested in short-term crises than in the long slog that is COVID-19. This pandemic is more like an ongoing civil war than a distant battlefield because your home and those close to you share the risk.
There is no easy path ahead for America’s civilian health care system. These military principles, tested under fire, offer valuable opportunities in the ongoing battle against COVID-19.
Dr. Ritchie practices psychiatry in Washington. She has no disclosures.
Dr. Kudler is associate consulting professor of psychiatry and behavioral sciences at Duke University in Durham, N.C., and recently retired from his post as chief consultant for mental health at the Department of Veterans Affairs. He has no relevant financial relationships.
Dr. Yehuda is professor of psychiatry and neuroscience and director of the traumatic stress studies division at the Mount Sinai School of Medicine, New York. She also serves as director of mental health at the James J. Peters Veterans Affairs Medical Center, also in New York. Dr. Yehuda has no disclosures.
Dr. Koffman is the senior consultant for Integrative Medicine & Behavioral Health at the National Intrepid Center of Excellence, Bethesda, Md. He has no disclosures.
Pandemic binge-watching: Is excessive screen time undermining mental health?
During the ongoing COVID-19 pandemic, many people are spending endless hours at home looking at computer, phone, and television screens. Our population has turned to Internet use and television watching as a coping mechanism to deal with their isolation, boredom, stress, and fear of the virus. Indeed, some people have become addicted to watching television and binge-watching entire series in a single sitting on subscription streaming services.
A U.K. study showed that, during lockdown, adults spent an average of 40% of their waking hours in front of a screen. After a long binge-watch, viewers often forget what happened in the episodes or even the name of the program they viewed. When someone can’t remember much about what he actually watched, he feels as though he has wasted his own time and might become dysphoric and depressed. This type of viewer feels disconnected and forgets what he watched because he is experiencing passive enjoyment rather than actively relating to the world.
So should television binge-watching give people feelings of guilt?
Fortunately, there are some positive factors about spending excessive time engrossed in these screens during a pandemic; some people use television viewing as a coping mechanism to deal with the reality and the fear of the coronavirus. Some beneficial aspects of television watching include:
- Escaping from the reality and stress of the pandemic in an emotionally safe, isolated cocoon.
- Experiencing safety from contracting COVID-19 by sheltering in place, isolating, and physical distancing from other people in the outside world.
- Experiencing a subdued, private, and mentally relaxing environment.
- Being productive and multitasking while watching television – for example, knitting, sewing, folding clothes, paying bills, or writing a letter.
Despite many beneficial aspects of excessive television watching during the pandemic, we have to ask: Can too much television prove detrimental to our mental or physical well-being?
Associated mental and physical problems
A causal relationship between excessive screen time and sleep disturbances is scientifically unproven, but the two are associated.
Excessive screen time is associated with a sleep deficit, and a proper amount of sleep is necessary for optimal brain function, a healthy immune system, good memory, and overall well-being. Sleep cleans out the short-term memory stage from the information learned that day to make room for new memories. This allows us to store memories every day. An inadequate amount of sleep causes memory problems and cognitive deficits because we are not storing as many memories from days when we are sleep deprived. A good night’s sleep will prevent stress from one day to be carried over to the next day.
Lack of sleep affects people differently, but in some cases, a shortage of sleep can cause feelings of depression and isolation. Television, computer, and phone screens emit damaging LED and blue light, detrimentally affecting our melatonin production and circadian rhythm. Blue light has wavelengths between 380 nm and 500 nm; although blue wavelengths are beneficial during the day – increasing positive mental mood, attention, and reaction times – they are destructive at night. Blue-light exposure suppresses the secretion of melatonin, a hormone that influences circadian rhythms. This disruption throws the body’s biological clock into disarray and makes it more difficult for the mind to shut down at night.
Unfortunately, electronics with LED screens increase the amount of exposure to these blue wavelengths. In addition, the U.S. National Toxicology Program has suggested that a link exists between blue-light exposure at night and diabetes, heart disease, cancer, and obesity (Sci Tot Environ. 2017 Dec 31;[607-8]:1073-84).
Advice for patients and clinicians
Time spent watching television and using the Internet should be done in moderation. Make sure that patients understand that they should not feel guilty about watching television during these periods of isolation.
Encourage patients to be selective in their television viewing and to research available programs on streaming services and TV – and limit their screen time only to programs that truly interest them. Discourage them from watching television endlessly, hour after hour. Also, discourage patients from watching too much news. Instead, tell them to limit news to 1 hour per day, because news they perceive as bad might increase their overall anxiety.
Tell patients to engage in physical exercise every day; walk or run outside if possible. When inside, advise them to get up and walk around at least once per hour. Other advice we would like to offer patients and clinicians alike are:
- Put yourself on a schedule, go to sleep at the same time each night, and try to get 8 hours of sleep in a 24-hour period.
- Put away your devices 1 hour before going to bed or at least use dark mode, and wear blue-block glasses, since they are easier on the eyes and brain. Do not use television to put yourself to sleep. Spending too much time reading news stories is not a good idea, either, because doing so is mentally stimulating and can cause more uncertainty – making it difficult to sleep.
- Protect your eye health by purchasing and installing light bulbs with more internal red coating than blue. These bulbs will produce a warmer tone than the blue, and warmer tones will be less likely to shift circadian rhythm and suppress melatonin, thus reducing blue-light exposure. Blink your eyes often, and use eye solution for dry eyes.
- Sleep in total darkness to reduce your exposure to blue light. Take supplements with lutein and zeaxanthin, which may reduce the oxidative effects of blue light.
Encouraging patients to follow these guidelines – and adhering to them ourselves – should help us emerge from the COVID-19 pandemic mentally and physically healthy.
Dr. Cohen is board certified in psychiatry and has had a private practice in Philadelphia for more than 35 years. His areas of specialty include sports psychiatry, agoraphobia, depression, and substance abuse. In addition, Dr. Cohen is a former professor of psychiatry, family medicine, and otolaryngology at Thomas Jefferson University, Philadelphia. He has no conflicts of interest.
Ms. Cohen holds an MBA from Temple University, Philadelphia, with a focus on health care administration. Previously, Ms. Cohen was an associate administrator at Hahnemann University Hospital and an executive at the Health Services Council, both in Philadelphia. She currently writes biographical summaries of notable 18th- and 19th-century women. Ms. Cohen has no conflicts of interest.
During the ongoing COVID-19 pandemic, many people are spending endless hours at home looking at computer, phone, and television screens. Many have turned to Internet use and television watching as coping mechanisms to deal with isolation, boredom, stress, and fear of the virus. Indeed, some people have become addicted to watching television and binge-watching entire series in a single sitting on subscription streaming services.
A U.K. study showed that, during the lockdown, adults spent an average of 40% of their waking hours in front of a screen. After a long binge-watch, viewers often forget what happened in the episodes or even the name of the program they viewed. Viewers who can't remember much about what they actually watched may feel they have wasted their time and become dysphoric and depressed. They feel disconnected and forget what they watched because they are experiencing passive enjoyment, rather than actively relating to the world.
So should television binge-watching give people feelings of guilt?
Fortunately, there are some positive aspects to the time people spend engrossed in these screens; for some, television viewing is a genuine coping mechanism for the reality and fear of the coronavirus. Beneficial aspects of television watching include:
- Escaping from the reality and stress of the pandemic in an emotionally safe, isolated cocoon.
- Experiencing safety from contracting COVID-19 by sheltering in place, isolating, and physical distancing from other people in the outside world.
- Experiencing a subdued, private, and mentally relaxing environment.
- Being productive and multitasking while watching television – for example, knitting, sewing, folding clothes, paying bills, or writing a letter.
Despite many beneficial aspects of excessive television watching during the pandemic, we have to ask: Can too much television prove detrimental to our mental or physical well-being?
Associated mental and physical problems
A causal link between excessive screen time and sleep disturbances has not been scientifically proven, but the two are associated.
Excessive screen time is associated with a sleep deficit, and adequate sleep is necessary for optimal brain function, a healthy immune system, good memory, and overall well-being. During sleep, the brain consolidates the day's learning into long-term memory, clearing short-term stores to make room for new information. Inadequate sleep therefore causes memory problems and cognitive deficits, because we do not store as many memories from days when we are sleep deprived. A good night's sleep also keeps one day's stress from carrying over to the next.
Lack of sleep affects people differently, but in some cases a shortage of sleep can cause feelings of depression and isolation. Television, computer, and phone screens emit substantial blue light, which can detrimentally affect melatonin production and circadian rhythm. Blue light spans wavelengths of roughly 380-500 nm; exposure during the day can improve mood, attention, and reaction times, but exposure at night is disruptive. Blue light suppresses the secretion of melatonin, a hormone that helps regulate circadian rhythms; the resulting disruption throws the body's biological clock into disarray and makes it more difficult for the mind to shut down at night.
Unfortunately, electronics with LED screens increase exposure to these blue wavelengths. In addition, the U.S. National Toxicology Program has suggested a link between blue-light exposure at night and diabetes, heart disease, cancer, and obesity (Sci Total Environ. 2017 Dec 31;607-608:1073-84).
Advice for patients and clinicians
Time spent watching television and using the Internet should be done in moderation. Make sure that patients understand that they should not feel guilty about watching television during these periods of isolation.
Encourage patients to be selective in their television viewing – to research available programs on streaming services and TV and limit their screen time to programs that truly interest them. Discourage endless, hour-after-hour viewing, and discourage watching too much news; suggest limiting news to 1 hour per day, because news they perceive as bad might increase their overall anxiety.
Tell patients to engage in physical exercise every day; walk or run outside if possible. When inside, advise them to get up and walk around at least once per hour. Other advice we would like to offer patients and clinicians alike includes the following:
- Put yourself on a schedule: go to sleep at the same time each night, and try to get 8 hours of sleep in a 24-hour period.
- Put away your devices 1 hour before going to bed – or at least use dark mode and wear blue-blocking glasses, which are easier on the eyes and brain. Do not use television to put yourself to sleep. Spending too much time reading news stories is not a good idea either, because doing so is mentally stimulating and can create more uncertainty – making it difficult to sleep.
- Protect your eye health by installing light bulbs that produce a warmer (redder) tone rather than a blue one. Warmer tones reduce blue-light exposure and so are less likely to shift circadian rhythm and suppress melatonin. Blink your eyes often, and use lubricating drops for dry eyes.
- Sleep in total darkness to reduce your exposure to blue light. Take supplements with lutein and zeaxanthin, which may reduce the oxidative effects of blue light.
Encouraging patients to follow these guidelines – and adhering to them ourselves – should help us emerge from the COVID-19 pandemic mentally and physically healthy.
Dr. Cohen is board certified in psychiatry and has had a private practice in Philadelphia for more than 35 years. His areas of specialty include sports psychiatry, agoraphobia, depression, and substance abuse. In addition, Dr. Cohen is a former professor of psychiatry, family medicine, and otolaryngology at Thomas Jefferson University, Philadelphia. He has no conflicts of interest.
Ms. Cohen holds an MBA from Temple University, Philadelphia, with a focus on health care administration. Previously, Ms. Cohen was an associate administrator at Hahnemann University Hospital and an executive at the Health Services Council, both in Philadelphia. She currently writes biographical summaries of notable 18th- and 19th-century women. Ms. Cohen has no conflicts of interest.
Topical brepocitinib for atopic dermatitis meets endpoints in phase 2b study
Topical brepocitinib met its primary and key secondary endpoints in patients with mild to moderate atopic dermatitis (AD), and did so with a safety profile essentially indistinguishable from vehicle cream, in a phase 2b randomized trial, Megan N. Landis, MD, reported at the virtual annual congress of the European Academy of Dermatology and Venereology.
The study included 240 adolescents and adults with mild to moderate AD at 70 sites in the United States and nine other countries. Patients’ mean baseline Eczema Area and Severity Index (EASI) score was 7.3, with 9.2% of their body surface area being involved. Participants were equally split between mild and moderate disease. They were randomized to 6 weeks of double-blind treatment in one of eight study arms: once-daily topical brepocitinib at a concentration of 0.1%, 0.3%, 1%, or 3%; twice-daily brepocitinib at 0.3% or 1%; or once- or twice-daily vehicle cream.
The primary endpoint was change in EASI score from baseline to week 6. Brepocitinib 1% and 3% once daily and 1% twice daily outperformed vehicle, with EASI score reductions of 70.1%, 67.9%, and 75%, respectively, compared with a 44.4% decrease among those in the once-daily vehicle control group and a 47.6% reduction among those in the twice-daily vehicle control group, according to Dr. Landis, a dermatologist at the University of Louisville (Ky.).
The key secondary efficacy endpoint was the proportion of patients achieving an Investigator’s Global Assessment (IGA) score of 0 or 1 – clear or almost clear skin – plus at least a 2-point reduction at week 6. This occurred in a dose-dependent fashion in 27.8%-44.4% of patients on once-daily brepocitinib, all significantly better results than the 10.8% rate in once-daily controls. Patients on the TYK2/JAK1 inhibitor at 0.3% twice daily had a 33.3% IGA response rate, versus 13.9% with twice-daily vehicle, also a significant difference.
A 90% reduction in EASI score at week 6, or EASI 90 response, occurred in a dose-dependent fashion in 27.8%-41.7% of patients on once-daily brepocitinib at 0.3%, 1%, and 3%, all significantly better than the 10.8% rate with once-daily vehicle, and in 27% of patients on brepocitinib 1% twice daily, versus 8.3% with twice-daily vehicle.
Improvement in itch was another secondary endpoint. A clinically meaningful week-6 improvement of at least 4 points on the Peak Pruritus Numerical Rating Scale was documented in 45.2% of patients on 1% brepocitinib once daily, 50% on 3% once daily, and 40.7% on 1% brepocitinib twice daily, all significantly better than the roughly 17% itch response rate in controls.
Treatment-emergent adverse events were about one-third more frequent in controls than in brepocitinib-treated patients. These events were overwhelmingly mild and similar in nature in the two groups, with no dose-dependent increase among brepocitinib patients. Moreover, no serious treatment-emergent adverse events occurred during the study, there were no cases of herpes zoster or malignancy, and there were no changes in laboratory parameters or ECG findings.
Pfizer sponsored the phase 2b AD trial of the topical TYK2/JAK1 inhibitor, which is also in phase 2 studies for psoriatic arthritis, psoriasis, lupus, and alopecia areata.
Dr. Landis reported serving as a paid investigator for Pfizer and numerous other pharmaceutical companies.
FROM THE EADV CONGRESS
Five reasons sacubitril/valsartan should not be approved for HFpEF
In an ideal world, people could afford sacubitril/valsartan (Entresto), and clinicians would be allowed to prescribe it using clinical judgment as their guide. The imprimatur of an “[Food and Drug Administration]–labeled indication” would be unnecessary.
This is not our world. Guideline writers, third-party payers, and FDA regulators now play major roles in clinical decisions.
The angiotensin receptor neprilysin inhibitor is approved for use in patients with heart failure with reduced ejection fraction (HFrEF). In December 2020, an FDA advisory committee voted 12-1 in support of a vaguely worded question: Does PARAGON-HF provide sufficient evidence to support any indication for the drug in patients with heart failure with preserved ejection fraction (HFpEF)? The committee did not reach a consensus on what that indication should be.
Before I list five reasons why I hope the FDA does not approve the drug for any indication in patients with HFpEF, let’s review the seminal trial.
PARAGON-HF
PARAGON-HF randomly assigned slightly more than 4,800 patients with symptomatic HFpEF (left ventricular ejection fraction [LVEF] ≥45%) to sacubitril/valsartan or valsartan alone. The primary endpoint was the composite of total hospitalizations for heart failure (HHF) and death from cardiovascular (CV) causes.
Sacubitril/valsartan reduced the rate of the primary endpoint by 13% (rate ratio, 0.87; 95% confidence interval, 0.75-1.01; P = .06). There were 894 primary endpoint events in the sacubitril/valsartan arm, compared with 1,009 events in the valsartan arm.
The lower rate of events in the sacubitril/valsartan arm was driven by fewer hospitalizations for heart failure. CV death was essentially the same in both arms (204 deaths in the sacubitril/valsartan group versus 212 deaths in the valsartan group).
A note on the patients: The investigators screened more than 10,000 patients and enrolled fewer than half of them. The mean age was 73 years; 52% of patients were women, but only 2% were Black. The mean LVEF was 57%; 95% of patients had hypertension and were receiving diuretics at baseline.
Now to the five reasons not to approve the drug for this indication.
1. Uncertainty of benefit in HFpEF
A P value for the primary endpoint greater than the threshold of .05 suggests some degree of uncertainty. A nice way of describing this uncertainty is with a Bayesian analysis. Whereas a P value tells you the chance of seeing these results if the drug has no benefit, the Bayesian approach tells you the chance of drug benefit given the trial results.
By email, James Brophy, MD, a senior scientist in the Centre for Outcomes Research and Evaluation at McGill University, Montreal, showed me a Bayesian calculation of PARAGON-HF. He estimated a 38% chance that sacubitril/valsartan had a clinically meaningful 15% reduction in the primary endpoint, a 3% chance that it worsens outcomes, and a 58% chance that it is essentially no better than valsartan.
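Probabilities very close to Dr. Brophy's can be reproduced from the published rate ratio and confidence interval alone, treating the posterior for the log rate ratio as approximately normal under a flat prior. The sketch below illustrates the approach; it is not his actual analysis, which may have used a different prior:

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# PARAGON-HF primary endpoint: rate ratio 0.87, 95% CI 0.75-1.01
log_rr = math.log(0.87)
se = (math.log(1.01) - math.log(0.75)) / (2 * 1.96)  # CI width on the log scale

# With a flat prior, the posterior for the log rate ratio is
# approximately normal(log_rr, se), so tail areas give the probabilities.
p_meaningful = normal_cdf((math.log(0.85) - log_rr) / se)  # RR < 0.85: >=15% reduction
p_harm = 1.0 - normal_cdf((0.0 - log_rr) / se)             # RR > 1.00: worse outcomes
p_equivalent = 1.0 - p_meaningful - p_harm                 # essentially no difference

print(f"P(>=15% reduction): {p_meaningful:.0%}")  # ~38%
print(f"P(harm):            {p_harm:.0%}")        # ~3%
print(f"P(~no difference):  {p_equivalent:.0%}")  # ~59%
```

Under these assumptions the calculation lands within a point of the quoted 38%/3%/58% split, which shows how directly the Bayesian reading follows from the trial's own summary statistics.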
The take-home is that, in PARAGON-HF, a best-case scenario involving select high-risk patients with run-in periods and trial-level follow-up, there is substantial uncertainty as to whether the drug is any better than a generic standard.
2. Modest effect size in PARAGON-HF
Let’s assume the benefit seen in PARAGON-HF is not caused by chance. Was the effect clinically significant?
For context, consider the large effect size that sacubitril/valsartan had versus enalapril for patients with HFrEF.
In PARADIGM-HF, sacubitril/valsartan led to a 20% reduction in the composite primary endpoint. Importantly, this included equal reductions in both HHF and CV death. All-cause death was also significantly reduced in the active arm.
Because patients with HFpEF have a similarly poor prognosis as those with HFrEF, a truly beneficial drug should reduce not only HHF but also CV death and overall death. The lack of effect on these “harder” endpoints in PARAGON-HF points to a far more modest effect size for sacubitril/valsartan in HFpEF.
What’s more, even the signal of reduced HHF in PARAGON-HF is tenuous. The PARAGON-HF authors chose total HHF, whereas previous trials in patients with HFpEF used first HHF as their primary endpoint. Had PARAGON-HF followed the methods of prior trials, first HHF would not have reached statistical significance (hazard ratio, 0.90; 95% CI, 0.79-1.04).
3. Subgroups not compelling
Proponents highlight the possibility that sacubitril/valsartan exerted a heterogenous effect in two subgroups.
In women, sacubitril/valsartan resulted in a 27% reduction in the primary endpoint (HR, 0.73; 95% CI, 0.59-0.90), whereas men showed no significant difference (HR, 1.03; 95% CI, 0.85-1.25). And the drug seemed to have little benefit over valsartan in patients with a median LVEF greater than 57%.
The problem with subgroups is that, if you look at enough of them, some can be positive on the basis of chance alone. For instance, patients enrolled in western Europe had an outsized benefit from sacubitril/valsartan, compared with patients from other areas.
FDA reviewers noted: “It is possible that the heterogeneity of treatment effect observed in the subgroups by gender and LVEF in PARAGON-HF is a chance finding.”
By email, clinical trial expert Sanjay Kaul, MD, from Cedars-Sinai Medical Center in Los Angeles, expressed serious concern with the subgroup analyses in PARAGON-HF because the sex interaction was confined to HHF alone. There was no interaction for other outcomes, such as CV death, all-cause mortality, renal endpoints, blood pressure, or lowering of N-terminal of the prohormone brain natriuretic peptide.
Similarly, the interaction with ejection fraction was confined to total HHF; it was not seen with New York Heart Association class improvement, all-cause mortality, quality of life, renal endpoints, or time to first event.
Dr. Kaul also emphasized something cardiologists know well, “that ejection fraction is not a static variable and is expected to change during the course of the trial.” This point makes it hard to believe that a partially subjective measurement, such as LVEF, could be a precise modifier of benefit.
4. Approval would stop research
If the FDA approves sacubitril/valsartan for patients with HFpEF, there is a near-zero chance we will learn whether there are subsets of patients who benefit more or less from the drug.
In an ideal world, people could afford sacubitril/valsartan (Entresto), and clinicians would be allowed to prescribe it using clinical judgment as their guide. The imprimatur of an “[Food and Drug Administration]–labeled indication” would be unnecessary.
This is not our world. Guideline writers, third-party payers, and FDA regulators now play major roles in clinical decisions.
The angiotensin receptor neprilysin inhibitor is approved for use in patients with heart failure with reduced ejection fraction (HFrEF). In December 2020, an FDA advisory committee voted 12-1 in support of a vaguely worded question: Does PARAGON-HF provide sufficient evidence to support any indication for the drug in patients with heart failure with preserved ejection fraction (HFpEF)? The committee did not reach a consensus on what that indication should be.
Before I list five reasons why I hope the FDA does not approve the drug for any indication in patients with HFpEF, let’s review the seminal trial.
PARAGON-HF
PARAGON-HF randomly assigned slightly more than 4,800 patients with symptomatic HFpEF (left ventricular ejection fraction [LVEF] ≥45%) to sacubitril/valsartan or valsartan alone. The primary endpoint was a composite of total hospitalizations for heart failure (HHF) and death from cardiovascular (CV) causes.
Sacubitril/valsartan reduced the rate of the primary endpoint by 13% (rate ratio, 0.87; 95% confidence interval, 0.75-1.01; P = .06). There were 894 primary endpoint events in the sacubitril/valsartan arm, compared with 1,009 events in the valsartan arm.
The lower rate of events in the sacubitril/valsartan arm was driven by fewer hospitalizations for heart failure. CV death was essentially the same in both arms (204 deaths in the sacubitril/valsartan group versus 212 deaths in the valsartan group).
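As a rough sanity check, the crude ratio of the reported event counts approximates the published rate ratio; the trial's 0.87 comes from a recurrent-events model that accounts for follow-up time, so the two need not match exactly:

```python
# Primary endpoint event counts reported in PARAGON-HF (arms of roughly equal size).
events_sac_val = 894    # sacubitril/valsartan arm
events_valsartan = 1009  # valsartan arm

crude_ratio = events_sac_val / events_valsartan
print(round(crude_ratio, 2))  # 0.89, close to the modeled rate ratio of 0.87
```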
A note on the patients: the investigators screened more than 10,000 patients and enrolled less than half of them. The mean age was 73 years; 52% of patients were women, but only 2% were Black. The mean LVEF was 57%; 95% of patients had hypertension and were receiving diuretics at baseline.
Now to the five reasons not to approve the drug for this indication.
1. Uncertainty of benefit in HFpEF
A P value for the primary endpoint greater than the threshold of .05 suggests some degree of uncertainty. A nice way of describing this uncertainty is with a Bayesian analysis. Whereas a P value tells you the chance of seeing these results if the drug has no benefit, the Bayesian approach tells you the chance of drug benefit given the trial results.
By email, James Brophy, MD, a senior scientist in the Centre for Outcomes Research and Evaluation at McGill University, Montreal, showed me a Bayesian calculation of PARAGON-HF. He estimated a 38% chance that sacubitril/valsartan had a clinically meaningful 15% reduction in the primary endpoint, a 3% chance that it worsens outcomes, and a 58% chance that it is essentially no better than valsartan.
The take-home is that, in PARAGON-HF, a best-case scenario involving select high-risk patients with run-in periods and trial-level follow-up, there is substantial uncertainty as to whether the drug is any better than a generic standard.
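Dr. Brophy's figures can be approximately reproduced with a simple normal approximation on the log rate ratio under a flat prior. This is a sketch of the idea, not his actual (and likely more careful) calculation; the 0.85 threshold encodes the "clinically meaningful 15% reduction":

```python
from math import log
from statistics import NormalDist

# Published PARAGON-HF primary result: rate ratio 0.87, 95% CI 0.75-1.01.
rr, lo, hi = 0.87, 0.75, 1.01

# Normal approximation on the log scale; SE recovered from the CI width.
mu = log(rr)
se = (log(hi) - log(lo)) / (2 * 1.96)

post = NormalDist(mu, se)  # posterior under a flat (non-informative) prior

p_meaningful = post.cdf(log(0.85))  # chance of a >=15% reduction
p_harm = 1 - post.cdf(log(1.0))     # chance the drug is worse than valsartan
p_null = 1 - p_meaningful - p_harm  # chance it is essentially no better

print(f"{p_meaningful:.0%}")  # 38%
print(f"{p_harm:.0%}")        # 3%
print(f"{p_null:.0%}")        # ~59%; prior and rounding choices explain the
                              # small gap from the quoted 58%
```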
2. Modest effect size in PARAGON-HF
Let’s assume the benefit seen in PARAGON-HF is not caused by chance. Was the effect clinically significant?
For context, consider the large effect size that sacubitril/valsartan had versus enalapril for patients with HFrEF.
In PARADIGM-HF, sacubitril/valsartan led to a 20% reduction in the composite primary endpoint. Importantly, this included equal reductions in both HHF and CV death. All-cause death was also significantly reduced in the active arm.
Because patients with HFpEF have a similarly poor prognosis as those with HFrEF, a truly beneficial drug should reduce not only HHF but also CV death and overall death. The lack of effect on these “harder” endpoints in PARAGON-HF points to a far more modest effect size for sacubitril/valsartan in HFpEF.
What’s more, even the signal of reduced HHF in PARAGON-HF is tenuous. The PARAGON-HF authors chose total HHF, whereas previous trials in patients with HFpEF used first HHF as their primary endpoint. Had PARAGON-HF followed the methods of prior trials, first HHF would not have reached statistical significance (hazard ratio, 0.90; 95% CI, 0.79-1.04).
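The distinction matters because a recurrent-event (total HHF) analysis lets a few patients with repeated admissions carry extra weight. A toy illustration with hypothetical event histories, not trial data:

```python
# Each inner list is one hypothetical patient's HF hospitalizations over follow-up.
histories = [[1], [], [1, 1, 1], [1], []]

total_hhf = sum(len(h) for h in histories)  # counts every admission
first_hhf = sum(1 for h in histories if h)  # counts each patient at most once

print(total_hhf, first_hhf)  # 5 3 -- one frequently readmitted patient drives the gap
```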
3. Subgroups not compelling
Proponents highlight the possibility that sacubitril/valsartan exerted a heterogeneous effect in two subgroups.
In women, sacubitril/valsartan resulted in a 27% reduction in the primary endpoint (HR, 0.73; 95% CI, 0.59-0.90), whereas men showed no significant difference (HR, 1.03; 95% CI, 0.85-1.25). And the drug seemed to have little benefit over valsartan in patients with an LVEF above the study median of 57%.
The problem with subgroups is that, if you look at enough of them, some can be positive on the basis of chance alone. For instance, patients enrolled in western Europe had an outsized benefit from sacubitril/valsartan, compared with patients from other areas.
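That multiplicity problem is easy to quantify: if subgroup comparisons behaved like independent tests at the usual .05 threshold and the drug truly had no heterogeneous effect, the chance of at least one spurious "positive" subgroup climbs quickly. The test counts below are illustrative, not PARAGON-HF's actual number of subgroup analyses:

```python
alpha = 0.05  # conventional significance threshold
for k in (5, 10, 30):  # hypothetical numbers of independent subgroup tests
    p_false_positive = 1 - (1 - alpha) ** k
    print(k, f"{p_false_positive:.0%}")  # 23%, 40%, 79%
```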
FDA reviewers noted: “It is possible that the heterogeneity of treatment effect observed in the subgroups by gender and LVEF in PARAGON-HF is a chance finding.”
By email, clinical trial expert Sanjay Kaul, MD, from Cedars-Sinai Medical Center in Los Angeles, expressed serious concern with the subgroup analyses in PARAGON-HF because the sex interaction was confined to HHF alone. There was no interaction for other outcomes, such as CV death, all-cause mortality, renal endpoints, blood pressure, or lowering of N-terminal pro–B-type natriuretic peptide (NT-proBNP).
Similarly, the interaction with ejection fraction was confined to total HHF; it was not seen with New York Heart Association class improvement, all-cause mortality, quality of life, renal endpoints, or time to first event.
Dr. Kaul also emphasized something cardiologists know well, “that ejection fraction is not a static variable and is expected to change during the course of the trial.” This point makes it hard to believe that a partially subjective measurement, such as LVEF, could be a precise modifier of benefit.
4. Approval would stop research
If the FDA approves sacubitril/valsartan for patients with HFpEF, there is a near-zero chance we will learn whether there are subsets of patients who benefit more or less from the drug.
It will be the defibrillator problem all over again. Namely, while the average effect of a defibrillator is to reduce mortality in patients with HFrEF, in approximately 9 of 10 patients the implanted device is never used. Efforts to find subgroups that are most likely to need (or not need) an implantable defibrillator have been impossible because industry has no incentive to fund trials that may narrow the number of patients who qualify for their product.
It will be the same with sacubitril/valsartan. This is not nefarious; it is merely a limitation of industry funding of trials.
5. Opportunity costs
The category of HFpEF is vast.
FDA approval – even for a subset of these patients – would have huge cost implications. I understand cost issues are considered outside the purview of the FDA, but health care spending isn’t infinite. Money spent covering this costly drug is money not available for other things.
Despite this nation’s wealth, we struggle to provide even basic care to large numbers of people. Approval of an expensive drug with no or modest benefit will only exacerbate these stark disparities.
Conclusion
Given our current system of health care delivery, my pragmatic answer is for the FDA to say no to sacubitril/valsartan for HFpEF.
If you believe the drug has outsized benefits in women or those with mild impairment of systolic function, the way to answer these questions is not with subgroup analyses from a trial that did not reach statistical significance in its primary endpoint, but with more randomized trials. Isn’t that what “exploratory” subgroups are for?
Holding off on an indication for HFpEF will force proponents to define a subset of patients who garner a clear and substantial benefit from sacubitril/valsartan.
Dr. Mandrola practices cardiac electrophysiology in Louisville, Ky., and is a writer and podcaster for Medscape. He espouses a conservative approach to medical practice. He participates in clinical research and writes often about the state of medical evidence. MDedge is part of the Medscape Professional Network.
A version of this article first appeared on Medscape.com.
Is the EDSS an adequate outcome measure in secondary progressive MS trials?
Clinical trials enrolling patients with progressive multiple sclerosis (MS) commonly use the Expanded Disability Status Scale (EDSS), an instrument that looks at impairment across several different functional domains, as a primary outcome measure. But results from a new analysis suggest the EDSS may be a noisier, and therefore less reliable, measure of disability progression than the timed 25-foot walk or the 9-hole peg test.
For their research, published in the Jan. 5 issue of Neurology, Marcus W. Koch, MD, PhD, of the department of neurosciences at the Hotchkiss Brain Institute at the University of Calgary (Alta.), and colleagues looked at data from the placebo arms of two randomized trials that collectively enrolled nearly 700 patients with secondary progressive MS (SPMS). The trials were similar in terms of baseline patient characteristics and level of disability.
Comparing three outcome measures
The investigators compared disability progression and improvement across each of the three instruments (the EDSS, the timed 25-foot walk, and the 9-hole peg test) and their combinations. Because improvement is understood to occur only rarely in untreated secondary progressive MS, most improvement picked up in the placebo arm of a trial is assumed to be noise from random variation or measurement error.
Dr. Koch and colleagues found that the EDSS showed higher rates of improvement than the other tests. The EDSS also showed the smallest differences between progression and improvement among the three instruments, with improvement rate over time increasing in parallel with disability progression rates. With the other two tests, improvement rates remained low – at 10% or less – while disability was seen steadily increasing over time.
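A hypothetical simulation illustrates the mechanism the investigators describe: if true disability only worsens, a coarse graded scale with more measurement noise will still register frequent "improvement," while a less noisy continuous measure will not. The noise magnitudes below are invented for illustration and are not taken from the study:

```python
import random

random.seed(42)

def graded(x):
    """EDSS-like score: noisy reading rounded to the nearest 0.5-point step."""
    return round((x + random.gauss(0, 0.3)) * 2) / 2

def timed(x):
    """Continuous timed-test-like measure with much less noise."""
    return x + random.gauss(0, 0.02)

n = 10_000
# Every simulated patient truly worsens from 6.0 to 6.1 between visits.
improved_graded = sum(graded(6.1) < graded(6.0) for _ in range(n))
improved_timed = sum(timed(6.1) < timed(6.0) for _ in range(n))

print(improved_graded / n)  # a sizeable fraction "improve" on the graded scale
print(improved_timed / n)   # essentially none on the low-noise measure
```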
The results, the investigators wrote in their analysis, suggest that the timed 25-foot walk and 9-hole peg test are the more reliable outcome measures. The reason “may simply lie in the fact that both the timed 25-foot walk and 9-hole peg test are objective and quantitative interval-scaled measures while the EDSS is a graded categorical measure.” As primary outcome measures in clinical trials, “the lower noise of the timed 25-foot walk and 9-hole peg test may make them preferable over the EDSS,” Dr. Koch and colleagues concluded.

The investigators noted that a 2019 analysis of different MS disability scales across more than 13,000 patients in 14 trials did not find such stark differences – but that the patients in the pooled trials had less disability at baseline (median EDSS score of 2.5, compared with 6.0 for the two trials in Dr. Koch and colleagues’ study). This suggests, the investigators wrote, “that the timed 25-foot walk and 9-hole peg test may be more useful outcomes in patients with a progressive disease course and with greater baseline disability.”
‘Considerable implications’ for the design of future clinical trials
In an accompanying editorial, Tomas Kalincik, MD, PhD, of the University of Melbourne, along with colleagues in Italy and Britain, praised Dr. Koch and colleagues’ study as having “considerable implications for the design of future clinical trials because detecting a treatment effect on an outcome that is subject to large measurement error is difficult.” Most trials in progressive MS use change in EDSS score as their primary or key secondary outcomes. “However, as the authors elegantly show, other, more reliable clinical outcomes are needed. As we are revisiting our biological hypotheses for treatment of progressive MS, perhaps the time has come that we should also revisit the instruments that we use to examine their efficacy.”
The editorialists allowed for the possibility that something besides noise or measurement error could be responsible for the disparities seen across the instruments. “An alternative interpretation of the presented results could be that recovery of neurologic function is more common in SPMS than what we had previously thought and that EDSS is more sensitive to its detection than the other two measures,” they wrote.
Dr. Koch and colleagues’ study received no outside funding. Dr. Koch disclosed consulting fees and other financial support from several drug manufacturers, and three coauthors also disclosed financial relationships with pharmaceutical companies. All three editorial writers disclosed similar relationships.
FROM NEUROLOGY