Oophorectomies continue to dominate torsion treatment
Prompt surgical management is essential to salvage ovarian function in cases of ovarian torsion, and recent studies have shown that conservative management with detorsion does not increase postoperative complications compared with oophorectomy, wrote Hannah Ryles, MD, of the University of Pennsylvania, Philadelphia, and colleagues.
The American College of Obstetricians and Gynecologists issued practice guidelines in November 2016 that recommended ovarian conservation rather than oophorectomy to manage adnexal torsion in women wishing to preserve fertility. However, the impact of this guideline on clinical practice and surgical patterns remains unclear, the researchers said.
In a study published in Obstetrics and Gynecology, the researchers reviewed data from 402 patients who underwent surgeries before the updated ACOG guidelines (2008-2016) and 1,389 who underwent surgeries after the guidelines (2017-2020). Surgery data came from the American College of Surgeons National Surgical Quality Improvement Program (NSQIP) database. The study population included women aged 18-50 years who underwent adnexal torsion surgery and were identified as having either oophorectomy or ovarian conservation surgery.
A total of 1,791 surgeries performed for adnexal torsion were included in the study; 542 (30.3%) involved ovarian conservation and 1,249 (69.7%) involved oophorectomy.
The proportion of oophorectomies was similar during the periods before and after the guidelines (71.9% vs. 69.1%; P = .16). However, the proportion of oophorectomies decreased significantly across the entire study period, by approximately 1.6% each year.
Factors significantly associated with oophorectomy compared with ovarian conservation included older age (35 years vs. 28 years), higher body mass index (29.2 kg/m² vs. 27.5 kg/m²), anemia (12.2% vs. 7.2%), hypertension (10.4% vs. 3.1%), and higher American Society of Anesthesiologists classification.
“There remains no defined acceptable rate of oophorectomy; this decision involves multiple factors, such as fertility and other patient desires after a risk and benefit discussion, menopausal status, concern for malignancy, and safety and feasibility of conservative procedures,” the researchers wrote in their discussion. However, in emergency situations, it may be difficult to determine a patient’s preferences, and a lack of desire for future fertility may be presumed, which may contribute to the relatively high oophorectomy rates over time, they said.
The findings were limited by several factors including the retrospective design and lack of data on surgical history, histopathology, and intraoperative appearance of the ovary, as well as lack of clinical data including the time from presentation to diagnosis or surgery, the researchers noted. “Although we were also unable to determine obstetric history and fertility desires, our median age of 32 years reflects a young cohort that was limited to women of reproductive age,” they added.
However, the results reflect studies suggesting that clinical practice often lags behind updated guidelines, and the findings were strengthened by the use of the NSQIP database and reflect a need for greater efforts to promote ovarian conservation in accordance with the current guidelines, the researchers concluded.
Consider unilateral oophorectomy
The current study highlights the discrepancy between the ACOG guidelines and clinical practice, with “disappointingly low” rates of ovarian preservation in the adult population, wrote Riley J. Young, MD, and Kimberly A. Kho, MD, both of the University of Texas Southwestern Medical Center, Dallas, in an accompanying editorial. The reasons for the discrepancy include clinical concerns for conserving a torsed ovary and the difficulty of assessing fertility desires in an emergency situation, they said.
However, consideration of unilateral oophorectomy as an option should be part of clinical decision-making, according to the editorialists. Previous studies suggest that retention of a single ovary may still allow for a successful pregnancy, and the effects of unilateral oophorectomy have been studied in infertility and assisted reproductive technology settings.
Women with a single ovary have fewer eggs and require higher amounts of gonadotropins, but pregnancy is possible, the editorialists said. However, the long-term effects of unilateral oophorectomy are uncertain, and potential detrimental outcomes include increased mortality and cognitive impairment; therefore “we aim for premenopausal ovaries simply to be conserved, whether fertility is the stated goal or not,” they noted. This may include consideration of unilateral oophorectomy. “Each ovary conserved at midnight moves us closer to a more acceptable ovarian conservation rate,” they concluded.
The study received no outside funding. The researchers had no financial conflicts to disclose. Dr. Kho disclosed funding to her institution from Hologic for an investigator-initiated study; Dr. Young had no financial conflicts to disclose.
FROM OBSTETRICS & GYNECOLOGY
Noisy incubators could stunt infant hearing
Incubators save the lives of many babies, but new data suggest that the ambient noise associated with the incubator experience could put babies’ hearing and language development skills at risk.
Previous studies have shown that the neonatal intensive care unit is a noisy environment, but specific data on levels of sound inside and outside incubators are limited, wrote Christoph Reuter, MA, a musicology professor at the University of Vienna, and colleagues.
“By the age of 3 years, deficits in language acquisition are detectable in nearly 50% of very preterm infants,” and high levels of NICU noise have been cited as possible contributors to this increased risk, the researchers say.
In a study published in Frontiers in Pediatrics, the researchers aimed to compare real-life NICU noise with previously reported levels to describe the sound characteristics and to identify resonance characteristics inside an incubator.
The study was conducted at the Pediatric Simulation Center at the Medical University of Vienna. The researchers placed a simulation mannequin with an ear microphone inside an incubator. They also placed microphones outside the incubator to collect measures of outside noise and activity involved in NICU care.
Data regarding sound were collected for 11 environmental noises and 12 incubator handlings using weighted and unweighted decibel levels. Specific environmental noises included starting the incubator engine; environmental noise with incubator off; environmental noise with incubator on; normal conversation; light conversation; laughter; telephone sounds; the infusion pump alarm; the monitor alarm (anomaly); the monitor alarm (emergency); and blood pressure measurement.
The 12 incubator handling noises included those associated with the water flap, water pouring into the incubator, incubator doors opening properly, incubator doors closing properly, incubator doors closing improperly, hatch closing, hatch opening, the incubator drawer, neighbor incubator doors closing (at a distance of 1.82 m), taking a stethoscope from the incubator wall, putting a stethoscope on the incubator, and a suctioning tube. Noise from six levels of respiratory support was also measured.
The researchers reported that the incubator tended to dampen most sounds but also that some sounds resonated inside the incubator, which raised the interior noise level by as much as 28 decibels.
Most of the measures, in both A-weighted decibels (dBA) and unweighted sound pressure level decibels (dBSPL), were above the 45-decibel level for neonatal sound exposure recommended by the American Academy of Pediatrics. The A-weighted measurements are limited in that they are designed to measure low levels of sound and therefore might underestimate the proportions of high and low frequencies at stronger levels, the researchers acknowledge.
Overall, most measures were clustered in the 55-75 decibel range, although some sound levels for incubator handling, while below levels previously reported in the literature, reached approximately 100 decibels.
The noise inside the incubator was not perceived as loud by those working with the incubator, the researchers note.
As for resonance inside the incubator, the researchers measured a low-frequency main resonance of 97 Hz, but they write that this resonance can be hard to capture in weighted measurements. However, the resonance means that “noises from the outside sound more tonal inside the incubator, booming and muffled as well as less rough or noisy,” and sounds inside the incubator are similarly affected, the researchers say.
“Most of the noise situations described in this manuscript far exceed not only the recommendation of the AAP but also international guidelines provided by the World Health Organization and the U.S. Environmental Protection Agency,” which recommend maximum levels of 35 dBA and 45 dBA, respectively, for daytime and 30 dBA and 35 dBA for night, the researchers indicate.
Potential long-term implications are that babies who spend time in the NICU are at risk for hearing impairment, which could lead to delays in language acquisition, they say.
The findings were limited by several factors, including the variance among the incubators, which prevents generalizability, the researchers note. Other limitations include the use of a simulation room rather than everyday conditions, in which the environmental sounds would likely be even louder.
However, the results provide insights into the specifics of incubator and NICU noise and suggest that sound be a consideration in the development and promotion of incubators to help protect the hearing of the infants inside them, the researchers conclude.
A generalist’s take
“This is an interesting study looking at the level and character of the sound experienced by preterm infants inside an incubator and how it may compare to sounds experienced within the mother’s womb,” said Tim Joos, MD, a Seattle-based clinician with a combination internal medicine/pediatrics practice, in an interview.
In society at large, “there has been more focus lately on the general environment and its effect on health, and this study is a unique take on this concept,” he said. “Although in general the incubators work to dampen external sounds, low-frequency sounds may actually resonate more inside the incubators, and taps on the outside or inside of the incubator itself are amplified within the incubator,” he noted. “It is sad but not surprising that the decibel levels experienced by the infants in the incubators exceed the levels recommended by the AAP.”
As for additional research, “it would be interesting to see the results of trials looking at various short- or long-term outcomes experienced by infants exposed to a lower-level noise compared to the current levels,” Dr. Joos told this news organization.
A neonatologist’s perspective
“As the field of neonatology advances, we are caring for an ever-growing number of extremely preterm infants,” said Caitlin M. Drumm, MD, of Walter Reed National Military Medical Center, Bethesda, Md., in an interview.
“These infants will spend the first few months of their lives within an incubator in the neonatal intensive care unit, so it is important to understand the potential long-term implications of environmental effects on these vulnerable patients,” she said.
“As in prior studies, it was not surprising that essentially every environmental, handling, or respiratory intervention led to noise levels higher than the limit recommended by the American Academy of Pediatrics,” Dr. Drumm said. “What was surprising was just how high above the 45-dB recommended noise limit many environmental stimuli are. For example, the authors cite respiratory flow rates of 8 L/min or higher as risky for hearing health at 84.72 dBSPL,” she said.
The key message for clinicians is to be aware of noise levels in the NICU, Dr. Drumm said. “Environmental stimuli as simple as putting a stethoscope on the incubator lead to noise levels well above the limit recommended by the American Academy of Pediatrics. The entire NICU care team has a role to play in minimizing environmental sound hazards for our most critically ill patients.”
Looking ahead, “future research should focus on providing more information correlating neonatal environmental sound exposure to long-term hearing and neurodevelopmental outcomes,” she said.
The study received no outside funding. The researchers report no relevant financial relationships. Dr. Joos serves on the editorial advisory board of Pediatric News. Dr. Drumm has disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Low-dose olanzapine improves appetite in chemotherapy patients
Anorexia is a problem in approximately 50% of newly diagnosed cancer patients and can compromise survival, wrote study author Lakshmi Sandhya, MD, of Jawaharlal Institute of Postgraduate Medical Education and Research (JIPMER), Puducherry, India, and colleagues. In particular, patients with lung and gastrointestinal tract cancers are prone to anorexia during chemotherapy, they said. Olanzapine is a demonstrated appetite stimulant and has been used in cancer patients as a short-term antiemetic, but its use for long-term appetite stimulation has not been well studied, they said.
In the study, published in the Journal of Clinical Oncology, the researchers randomized 124 adults aged 18 years and older to 2.5 mg of olanzapine or placebo daily for 12 weeks. The participants had untreated locally advanced or metastatic gastric, hepatopancreaticobiliary (HPB), or lung cancers.
The median age of the participants was 55 years. The primary outcome was weight gain greater than 5% and improved appetite based on a visual analog scale (VAS) and questionnaires. Changes in nutritional status, quality of life (QOL), and chemotherapy toxicity were secondary endpoints.
After 12 weeks, complete data were available for 58 patients in the olanzapine group and 54 in the placebo group. Of these, 60% of the olanzapine group and 9% of the placebo group met the primary endpoint of a weight gain greater than 5%. The proportion of patients with improved appetite based on VAS scores and questionnaire scores was significantly higher in olanzapine patients vs. placebo patients (43% vs. 13% and 22% vs. 4%, respectively).
In addition, 52% of the olanzapine group vs. 18% of the placebo group achieved more than 75% intake of recommended daily calories.
Most of the reported toxicities were nonhematologic, and rates were similar between the groups (85% for olanzapine vs. 88% for placebo). The proportion of patients with toxicities of grade 3 or higher was lower in the olanzapine group than in the placebo group (12% vs. 37%; P = .002). Patients in the olanzapine group also reported significantly improved quality of life from baseline compared with the placebo patients.
The findings were limited by several factors including the heterogeneous cancers and treatment regimens, the lack of data on weight beyond 12 weeks, the relatively small study population, and the subjective nature of anorexia measurements, the researchers noted.
However, the results suggest that low-dose olanzapine is an effective and well-tolerated add-on intervention for the subset of patients at risk for anorexia at the start of chemotherapy, they said.
“Future studies could look at various cancers in a multicentric setting and long-term endpoints such as patient survival,” they concluded.
The study drug and placebo were funded by an intramural grant from Jawaharlal Institute of Postgraduate Medical Education and Research (JIPMER). The researchers had no financial conflicts to disclose.
FROM THE JOURNAL OF CLINICAL ONCOLOGY
Children ate more fruits and vegetables during longer meals: Study
Adding 10 minutes to family mealtimes increased children’s consumption of fruits and vegetables by approximately one portion, based on data from 50 parent-child dyads.
Family meals are known to affect children’s food choices and preferences and can be an effective setting for improving children’s nutrition, wrote Mattea Dallacker, PhD, of the University of Mannheim, Germany, and colleagues.
However, the effect of extending meal duration on increasing fruit and vegetable intake in particular has not been examined, they said.
In a study published in JAMA Network Open, the researchers provided two free evening meals to each of 50 parent-child dyads under two different conditions. The control condition was the family’s self-defined regular mealtime duration (an average meal lasted 20.83 minutes), while the intervention condition was a mealtime 10 minutes (50%) longer. The ages of the parents ranged from 22 to 55 years, with a mean of 43 years; 72% of the parent participants were mothers. The children’s ages ranged from 6 to 11 years, with a mean of 8 years, and boys and girls were represented in approximately equal numbers.
The study was conducted in a family meal laboratory setting in Berlin, and groups were randomized to the longer or shorter meal setting first. The primary outcome was the total number of pieces of fruit and vegetables eaten by the child as part of each of the two meals.
Both meals were the “typical German evening meal of sliced bread, cold cuts of cheese and meat, and bite-sized pieces of fruits and vegetables,” followed by a dessert course of chocolate pudding or fruit yogurt and cookies, the researchers wrote. Beverages were water and one sugar-sweetened beverage; the specific foods and beverages were based on the child’s preferences, reported in an online preassessment, and the foods were consistent for the longer and shorter meals. All participants were asked not to eat for 2 hours prior to arriving for their meals at the laboratory.
During longer meals, children ate an average of seven additional bite-sized pieces of fruits and vegetables, which translates to approximately a full portion (defined as 100 g, such as a medium apple), the researchers wrote. The difference was significant compared with the shorter meals for fruits (P = .01) and vegetables (P < .001).
Each piece of fruit or vegetable weighed approximately 10 g (6-10 g for grapes and tangerine segments; 10-14 g for cherry tomatoes; and 9-11 g for pieces of apple, banana, carrot, or cucumber). Other foods served with the meals included cheese, meats, butter, and sweet spreads.
Children also ate more slowly (defined as fewer bites per minute) during the longer meals, and they reported significantly greater satiety after the longer meals (P < .001 for both). The consumption of bread and cold cuts was similar for the two meal settings.
“Higher intake of fruits and vegetables during longer meals cannot be explained by longer exposure to food alone; otherwise, an increased intake of bread and cold cuts would have occurred,” the researchers wrote in their discussion. “One possible explanation is that the fruits and vegetables were cut into bite-sized pieces, making them convenient to eat.”
Further analysis showed that during the longer meals, more fruits and vegetables were consumed overall, but more vegetables were eaten from the start of the meal, while the additional fruit was eaten during the additional time at the end.
The findings were limited by several factors, primarily the use of a laboratory setting, which does not generalize to natural eating environments, the researchers noted. Other potential limitations included the effect of video cameras on desirable behaviors and the limited ethnic and socioeconomic diversity of the study population, they said. The results were strengthened by the within-dyad study design, which allowed for control of factors such as video observation, but more research is needed with more diverse groups and across longer time frames, the researchers said.
However, the results suggest that adding 10 minutes to a family mealtime can yield significant improvements in children’s diets, they said. They suggested strategies including playing music chosen by the child/children and setting rules that everyone must remain at the table for a certain length of time, with fruits and vegetables available on the table.
“If the effects of this simple, inexpensive, and low-threshold intervention prove stable over time, it could contribute to addressing a major public health problem,” the researchers concluded.
Findings intriguing, more data needed
The current study is important because fruit and vegetable intake in the majority of children falls below the recommended daily allowance, Karalyn Kinsella, MD, a pediatrician in private practice in Cheshire, Conn., said in an interview.
The key take-home message for clinicians is the continued need to stress the importance of family meals, said Dr. Kinsella. “Many children continue to be overbooked with activities, and it may be rare for many families to sit down together for a meal for any length of time.”
Don’t discount the potential effect of a longer school lunch on children’s fruit and vegetable consumption as well, she added. “Advocating for longer lunch time is important, as many kids report not being able to finish their lunch at school.”
The current study was limited by being conducted in a lab setting, which may have influenced children’s desire for different foods, “also they had fewer distractions, and were being offered favorite foods,” said Dr. Kinsella.
Looking ahead, “it would be interesting to see if this result carried over to nonpreferred fruits and veggies and made any difference for picky eaters,” she said.
The study received no outside funding. The open-access publication of the study (but not the study itself) was supported by the Max Planck Institute for Human Development Library Open Access Fund. The researchers had no financial conflicts to disclose. Dr. Kinsella had no financial conflicts to disclose and serves on the editorial advisory board of Pediatric News.
Adding 10 minutes to family mealtimes increased children’s consumption of fruits and vegetables by approximately one portion, based on data from 50 parent-child dyads.
Family meals are known to affect children’s food choices and preferences and can be an effective setting for improving children’s nutrition, wrote Mattea Dallacker, PhD, of the University of Mannheim, Germany, and colleagues.
However, the effect of extending meal duration on increasing fruit and vegetable intake in particular has not been examined, they said.
In a study published in JAMA Network Open, the researchers provided two free evening meals to 50 parent-child dyads under each of two different conditions. The control condition was defined by the families as a regular family mealtime duration (an average meal was 20.83 minutes), while the intervention was an average meal time 10 minutes (50%) longer. The age of the parents ranged from 22 to 55 years, with a mean of 43 years; 72% of the parent participants were mothers. The children’s ages ranged from 6 to 11 years, with a mean of 8 years, with approximately equal numbers of boys and girls.
The study was conducted in a family meal laboratory setting in Berlin, and groups were randomized to the longer or shorter meal setting first. The primary outcome was the total number of pieces of fruit and vegetables eaten by the child as part of each of the two meals.
Both meals were the “typical German evening meal of sliced bread, cold cuts of cheese and meat, and bite-sized pieces of fruits and vegetables,” followed by a dessert course of chocolate pudding or fruit yogurt and cookies, the researchers wrote. Beverages were water and one sugar-sweetened beverage; the specific foods and beverages were based on the child’s preferences, reported in an online preassessment, and the foods were consistent for the longer and shorter meals. All participants were asked not to eat for 2 hours prior to arriving for their meals at the laboratory.
During longer meals, children ate an average of seven additional bite-sized pieces of fruits and vegetables, which translates to approximately a full portion (defined as 100 g, such as a medium apple), the researchers wrote. The difference was significant compared with the shorter meals for fruits (P = .01) and vegetables (P < .001).
A piece of fruit was approximately 10 grams (6-10 g for grapes and tangerine segments; 10-14 g for cherry tomatoes; and 9-11 g for apple, banana, carrot, or cucumber). Other foods served with the meals included cheese, meats, butter, and sweet spreads.
Children also ate more slowly (defined as fewer bites per minute) during the longer meals, and they reported significantly greater satiety after the longer meals (P < .001 for both). The consumption of bread and cold cuts was similar for the two meal settings.
“Higher intake of fruits and vegetables during longer meals cannot be explained by longer exposure to food alone; otherwise, an increased intake of bread and cold cuts would have occurred,” the researchers wrote in their discussion. “One possible explanation is that the fruits and vegetables were cut into bite-sized pieces, making them convenient to eat.”
Further analysis showed that during the longer meals, more fruits and vegetables were consumed overall, but more vegetables were eaten from the start of the meal, while the additional fruit was eaten during the additional time at the end.
The findings were limited by several factors, primarily the laboratory setting, which does not generalize to natural eating environments, the researchers noted. Other potential limitations included the effect of video cameras on desirable behaviors and the limited ethnic and socioeconomic diversity of the study population, they said. The results were strengthened by the within-dyad design, which controlled for factors such as video observation, but more research is needed with more diverse groups and across longer time frames, the researchers said.
However, the results suggest that adding 10 minutes to a family mealtime can yield significant improvements in children's diets, they said. They suggested strategies such as playing music chosen by the children and setting a rule that everyone remain at the table for a certain length of time, with fruits and vegetables available on the table.
“If the effects of this simple, inexpensive, and low-threshold intervention prove stable over time, it could contribute to addressing a major public health problem,” the researchers concluded.
Findings intriguing, more data needed
The current study is important because fruit and vegetable intake in the majority of children falls below the recommended daily allowance, Karalyn Kinsella, MD, a pediatrician in private practice in Cheshire, Conn., said in an interview.
The key take-home message for clinicians is the continued need to stress the importance of family meals, said Dr. Kinsella. “Many children continue to be overbooked with activities, and it may be rare for many families to sit down together for a meal for any length of time.”
Don’t discount the potential effect of a longer school lunch on children’s fruit and vegetable consumption as well, she added. “Advocating for longer lunch time is important, as many kids report not being able to finish their lunch at school.”
The current study was limited by its laboratory setting, which may have influenced children's desire for different foods; "also they had fewer distractions, and were being offered favorite foods," said Dr. Kinsella.
Looking ahead, “it would be interesting to see if this result carried over to nonpreferred fruits and veggies and made any difference for picky eaters,” she said.
The study received no outside funding. The open-access publication of the study (but not the study itself) was supported by the Max Planck Institute for Human Development Library Open Access Fund. The researchers had no financial conflicts to disclose. Dr. Kinsella had no financial conflicts to disclose and serves on the editorial advisory board of Pediatric News.
FROM JAMA NETWORK OPEN
Cesarean deliveries drop in women at low risk
Although clinically indicated cesarean deliveries may improve outcomes for mothers and infants, “when not clinically indicated, cesarean delivery is a major surgical intervention that increases risk for adverse outcomes,” wrote Anna M. Frappaolo of Columbia University College of Physicians and Surgeons, New York, and colleagues.
The Healthy People 2030 campaign includes the reduction of cesarean deliveries, but trends in these procedures, especially with regard to diagnoses of labor arrest, have not been well studied, the researchers said.
In an analysis published in JAMA Network Open, the researchers reviewed delivery hospitalizations using data from the National Inpatient Sample from 2000 to 2019.
Births deemed low risk for cesarean delivery were identified by using criteria of the Society for Maternal-Fetal Medicine and additional criteria, and joinpoint regression analysis was used to estimate changes.
The researchers examined overall trends in cesarean deliveries as well as trends for three specific diagnoses: nonreassuring fetal status, labor arrest, and obstructed labor.
The final analysis included 40,517,867 deliveries; of these, 4,885,716 (12.1%) were cesarean deliveries.
Overall, cesarean deliveries in patients deemed at low risk increased from 9.7% in 2000 to 13.9% in 2009, then plateaued and decreased from 13.0% in 2012 to 11.1% in 2019. The average annual percentage change (AAPC) for cesarean delivery was 6.4% for the years from 2000 to 2005, 1.2% from 2005 to 2009, and −2.2% from 2009 to 2019.
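Joinpoint regression fits log-linear segments to the yearly rates; as a simplified illustration (endpoints only, which is not the authors' full method), the annual percent change implied by two of the reported rates can be backed out directly.

```python
def implied_apc(rate_start: float, rate_end: float, years: int) -> float:
    """Annual percent change implied by two endpoint rates,
    assuming a constant log-linear trend between them."""
    return ((rate_end / rate_start) ** (1 / years) - 1) * 100

# Reported low-risk cesarean rates: 13.0% in 2012 falling to 11.1% in 2019
print(f"{implied_apc(13.0, 11.1, 7):+.1f}% per year")
# -> -2.2% per year, in line with the reported AAPC of -2.2% for 2009-2019
```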
Cesarean delivery for nonreassuring fetal status increased over the entire study period, from 3.4% in 2000 to 5.1% in 2019. By contrast, overall cesarean delivery for labor arrest increased from 3.6% in 2000 to a high of 4.8% in 2009, then decreased to 2.7% in 2019. Cesarean deliveries with a diagnosis of obstructed labor decreased from 0.9% in 2008 to 0.3% in 2019.
More specifically, cesarean deliveries for labor arrest in the active phase, latent phase, and second stage of labor increased from 1.5% to 2.1%, 1.1% to 1.5%, and 0.9% to 1.3%, respectively, from 2000 to 2009, and decreased from 2.1% to 1.7% for the active phase, from 1.5% to 1.2% for the latent phase, and from 1.2% to 0.9% for the second stage between 2010 and 2019.
Patients with increased odds of cesarean delivery were older (aged 35-39 years vs. 25-29 years, adjusted odds ratio 1.27), delivered in a hospital in the South vs. the Northeast of the United States (aOR 1.11), and were more likely to be non-Hispanic Black vs. non-Hispanic White (OR 1.23).
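For readers less familiar with the metric, an odds ratio compares the odds of an outcome between two groups. The sketch below uses hypothetical counts purely for illustration; the article's estimates are adjusted odds ratios from a multivariable model, not raw 2x2 calculations.

```python
# Minimal sketch of an unadjusted odds ratio from a 2x2 table.
# Counts are hypothetical; the aORs above come from a multivariable
# model fit to National Inpatient Sample data.
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """a/b = cases/noncases in the exposed group; c/d in the unexposed."""
    return (a / b) / (c / d)

# Hypothetical: cesarean deliveries among older vs. younger patients
print(f"{odds_ratio(140, 860, 110, 890):.2f}")
# -> 1.32, the same order of magnitude as the reported aOR of 1.27
```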
Notably, changes in nomenclature and interpretation of intrapartum electronic fetal heart monitoring occurred during the study period, with recommendations for the adoption of a three-tiered system for fetal heart rate patterns in 2008. “It is possible that current evidence and nomenclature related to intrapartum FHR interpretation may result in identification of a larger number of fetuses deemed at indeterminate risk for abnormal acid-base status,” the researchers wrote in their discussion.
The study findings were limited by several factors including the use of administrative discharge data rather than clinical records, the exclusion of patients with chronic conditions associated with cesarean delivery, changes in billing codes during the study period, and the inability to account for the effect of health factors, maternal age, and use of assisted reproductive technology, the researchers noted.
However, the results were strengthened by the large sample size and 20-year study period, as well as the stratification of labor arrest by stage, and suggest uptake of newer recommendations, they said. “Future reductions in cesarean deliveries among patients at low risk for cesarean delivery may be dependent on improved assessment of intrapartum fetal status,” they concluded.
Consider populations and outcomes in cesarean risk assessment
The decreasing rates of cesarean deliveries in the current study can be seen as positive, but more research is needed to examine maternal and neonatal outcomes, and to consider other conditions that affect risk for cesarean delivery, Paolo Ivo Cavoretto, MD, and Massimo Candiani, MD, of IRCCS San Raffaele Scientific Institute, and Antonio Farina, MD, of the University of Bologna, Italy, wrote in an accompanying editorial.
Notably, the study authors identified a population aged 15-39 years as low risk, and within this range the risk for cesarean delivery increased with age. “Maternal age remains a major risk factor associated with the risk of cesarean delivery, both from results of this study and those of previous analyses assessing its independence from other related risk factors,” the editorialists said.
The study findings also reflect the changes in standards for labor duration during the study period, they noted. The longer duration of labor may reduce cesarean delivery rates, but it is not without maternal and fetal-neonatal risks, they wrote.
“To be sure that the described trend of cesarean delivery rate reduction can be considered positive, there would be the theoretical need to analyze other maternal-fetal-neonatal outcomes (e.g., rates of operative deliveries, neonatal acidemia, intensive care unit use, maternal hemorrhage, pelvic floor trauma and dysfunction, and psychological distress),” the editorialists concluded.
More research needed to explore clinical decisions
“Reducing the cesarean delivery rate is a top priority, but evidence is lacking on an optimal rate that improves maternal and neonatal outcomes,” Iris Krishna, MD, a maternal-fetal medicine specialist at Emory University, Atlanta, said in an interview.
“Hospital quality and safety committees have been working to decrease cesarean deliveries amongst low-risk women, and identifying contemporary trends gives us insight on whether some of these efforts have translated to a lower cesarean delivery rate,” she said.
Dr. Krishna said she was not surprised by the higher cesarean section rate in the South. “The decision for cesarean delivery is multifaceted, and although this study was not able to assess clinical indications for cesarean delivery or maternal and fetal outcomes, we cannot ignore that social determinants of health contribute greatly to overall health outcomes,” she said. The trends in the current study further underscore the geographic disparities in access to health care present in the South, she added.
“This study notes that cesarean delivery for nonreassuring fetal status increased; however, nonreassuring fetal status as an indication for cesarean delivery can be subjective,” Dr. Krishna said. “Hospital quality and safety committees should consider reviewing the clinical scenarios that led to this decision to identify opportunities for improvement and further education,” she said.
“Defining contemporary trends in cesarean delivery for low-risk patients has merit, but the study findings should be interpreted with caution,” said Dr. Krishna, who is a member of the Ob.Gyn. News advisory board. More research is needed to define an optimal cesarean section rate that promotes positive maternal and fetal outcomes, and to determine whether identifying an optimal rate should be based on patient risk profiles, she said.
The study received no outside funding. Lead author Ms. Frappaolo had no financial conflicts to disclose; nor did the editorial authors or Dr. Krishna.
FROM JAMA NETWORK OPEN
High-dose prophylactic anticoagulation benefits patients with COVID-19 pneumonia
High-dose prophylactic anticoagulation reduced de novo thrombosis in patients with hypoxemic COVID-19 pneumonia, compared with standard-dose prophylaxis, based on data from 334 adults.
Patients with hypoxemic COVID-19 pneumonia are at increased risk for both thrombosis and anticoagulation-related bleeding; therefore, data are needed to identify the lowest effective anticoagulant dose, wrote Vincent Labbé, MD, of Sorbonne University, Paris, and colleagues.
Previous studies of different anticoagulation strategies for noncritically ill and critically ill patients with COVID-19 pneumonia have shown contrasting results, but some institutions recommend a high-dose regimen in the wake of data showing macrovascular thrombosis in patients with COVID-19 who were treated with standard anticoagulation, the authors wrote.
However, no previously published studies have compared the effectiveness of the three anticoagulation strategies: high-dose prophylactic anticoagulation (HD-PA), standard dose prophylactic anticoagulation (SD-PA), and therapeutic anticoagulation (TA), they said.
In the open-label Anticoagulation COVID-19 (ANTICOVID) trial, published in JAMA Internal Medicine, the researchers identified consecutively hospitalized adults aged 18 years and older being treated for hypoxemic COVID-19 pneumonia in 23 centers in France between April 2021 and December 2021.
The patients were randomly assigned to SD-PA (116 patients), HD-PA (111 patients), and TA (112 patients) using low-molecular-weight heparin for 14 days, or until either hospital discharge or weaning from supplemental oxygen for 48 consecutive hours, whichever outcome occurred first. The HD-PA patients received two times the SD-PA dose. The mean age of the patients was 58.3 years, and approximately two-thirds were men; race and ethnicity data were not collected. Participants had no macrovascular thrombosis at the start of the study.
The primary outcomes were all-cause mortality and time to clinical improvement (defined as the time from randomization to a 2-point improvement on a 7-category respiratory function scale).
The secondary outcome was a combination of safety and efficacy at day 28 that included a composite of thrombosis (ischemic stroke, noncerebrovascular arterial thrombosis, deep venous thrombosis, pulmonary artery thrombosis, and central venous catheter–related deep venous thrombosis), major bleeding, or all-cause death.
For the primary outcome, results were similar among the groups; HD-PA had no significant benefit over SD-PA or TA. All-cause death rates for SD-PA, HD-PA, and TA patients were 14%, 12%, and 13%, respectively. The time to clinical improvement for the three groups was approximately 8 days, 9 days, and 8 days, respectively. Results for the primary outcome were consistent across all prespecified subgroups.
However, HD-PA was associated with a significant fourfold reduced risk of de novo thrombosis compared with SD-PA (5.5% vs. 20.2%) with no observed increase in major bleeding. TA was not associated with any significant improvement in primary or secondary outcomes compared with HD-PA or SD-PA.
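The "fourfold" figure follows directly from the reported event rates; a one-line check:

```python
# Quick check of the "fourfold" claim from the reported event rates.
sd_pa_rate, hd_pa_rate = 20.2, 5.5   # de novo thrombosis rates, percent
print(f"risk reduction factor: {sd_pa_rate / hd_pa_rate:.1f}x")
# -> 3.7x, i.e., roughly fourfold
```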
The current study's findings of no improvement in survival or disease resolution with a higher anticoagulant dose reflect data from previous studies, the researchers wrote in their discussion. “Our study results together with those of previous RCTs support the premise that the role of microvascular thrombosis in worsening organ dysfunction may be narrower than estimated,” they said.
The findings were limited by several factors including the open-label design and the relatively small sample size, the lack of data on microvascular (vs. macrovascular) thrombosis at baseline, and the predominance of the Delta variant of COVID-19 among the study participants, which may have contributed to a lower mortality rate, the researchers noted.
However, given the significant reduction in de novo thrombosis, the results support the routine use of HD-PA in patients with severe hypoxemic COVID-19 pneumonia, they concluded.
Results inform current clinical practice
Over the course of the COVID-19 pandemic, “Patients hospitalized with COVID-19 manifested the highest risk for thromboembolic complications, especially patients in the intensive care setting,” and early reports suggested that standard prophylactic doses of anticoagulant therapy might be insufficient to prevent thrombotic events, Richard C. Becker, MD, of the University of Cincinnati, and Thomas L. Ortel, MD, of Duke University, Durham, N.C., wrote in an accompanying editorial.
“Although there have been several studies that have investigated the role of anticoagulant therapy in hospitalized patients with COVID-19, this is the first study that specifically compared a standard, prophylactic dose of low-molecular-weight heparin to a ‘high-dose’ prophylactic regimen and to a full therapeutic dose regimen,” Dr. Ortel said in an interview.
“Given the concerns about an increased thrombotic risk with prophylactic dose anticoagulation, and the potential bleeding risk associated with a full therapeutic dose of anticoagulation, this approach enabled the investigators to explore the efficacy and safety of an intermediate dose between these two extremes,” he said.
In the current study, high-dose prophylactic anticoagulation reduced de novo thrombosis without increasing major bleeding, “a finding that was not observed in other studies investigating anticoagulant therapy in hospitalized patients with severe COVID-19,” Dr. Ortel told this news organization. “Much initial concern about progression of disease in patients hospitalized with severe COVID-19 focused on the role of microvascular thrombosis, which appears to be less important in this process, or, alternatively, less responsive to anticoagulant therapy.”
The clinical takeaway from the study, Dr. Ortel said, is the decreased risk for venous thromboembolism with a high-dose prophylactic anticoagulation strategy compared with a standard-dose prophylactic regimen for patients hospitalized with hypoxemic COVID-19 pneumonia, “leading to an improved net clinical outcome.”
Looking ahead, “Additional research is needed to determine whether a higher dose of prophylactic anticoagulation would be beneficial for patients hospitalized with COVID-19 pneumonia who are not in an intensive care unit setting,” Dr. Ortel said. Studies are needed to determine whether therapeutic interventions are equally beneficial in patients with different coronavirus variants, since most patients in the current study were infected with the Delta variant, he added.
The study was supported by LEO Pharma. Dr. Labbé disclosed grants from LEO Pharma during the study and fees from AOP Health unrelated to the current study.
Dr. Becker disclosed personal fees from Novartis Data Safety Monitoring Board, Ionis Data Safety Monitoring Board, and Basking Biosciences Scientific Advisory Board unrelated to the current study. Dr. Ortel disclosed grants from the National Institutes of Health, Instrumentation Laboratory, Stago, and Siemens; contract fees from the Centers for Disease Control and Prevention; and honoraria from UpToDate unrelated to the current study.
A version of this article originally appeared on Medscape.com.
Psychiatric comorbidities predict complex polypharmacy in bipolar disorder
Patients with bipolar disorder (BD) often receive prescriptions for multiple medications to manage a range of medical and psychiatric symptoms, but the definition of polypharmacy in these patients is inconsistent, and characteristics associated with complex polypharmacy have not been well studied, wrote Andrea Aguglia, MD, of the University of Genoa, Italy, and colleagues.
Previous studies have shown an increased risk for comorbid medical and psychiatric illnesses in BD patients, the researchers noted, and changes in prescribing trends have prompted greater use of combination therapies such as mood stabilizers with or without antipsychotics.
In a study published in Psychiatry Research, the investigators reviewed data from 556 adults with BD. Participants were aged 18 and older with a primary diagnosis of BD type I or II based on the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, criteria. The mean age of the participants was 49.17 years, 43.7% were male, and 34.2% were employed. A total of 327 patients (58.8%) had a medical comorbidity, and 193 (34.7%) used an illicit substance.
A total of 225 patients (40.5%) met the criteria for complex polypharmacy, defined as taking at least four medications.
BD patients with complex polypharmacy were significantly more likely than those without complex polypharmacy to be single (50.7% vs. 37.8%; P = .025) and significantly less likely to be employed (25.3% vs. 40.2%; P < .001).
On the clinical side, complex polypharmacy in BD patients was significantly associated with a higher prevalence of both medical and psychiatric comorbidities (65.3% vs. 54.4%, P = .010; and 50.7% vs. 34.1%, P < .001, respectively). The association between medical comorbidities and complex polypharmacy in BD was unexpected, the researchers said, “as psychotropic medications should be used with cautiousness in patients suffering from medical conditions.”
BD patients with complex polypharmacy also had a significantly earlier age of onset, longer duration of illness, and increased number of hospitalizations than those without complex polypharmacy.
Rates of use of at least one substance, including alcohol, cannabinoids, and cocaine/amphetamines, were significantly higher among BD patients with complex polypharmacy, compared with those without, but no differences in heroin use were noted between the groups.
In a logistic regression analysis, single status, older age, number of hospitalizations, and the presence of psychiatric comorbidities were significantly associated with complex polypharmacy.
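As an illustration of the kind of model the authors describe, here is a minimal logistic regression sketch on synthetic data. The predictor names follow the article, but the data, coefficients, and code are illustrative assumptions, not the authors' dataset or analysis.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative sketch of the kind of logistic regression described above:
# outcome = complex polypharmacy (>= 4 medications); predictors as named
# in the article. All data below are synthetic.
rng = np.random.default_rng(0)
n = 556  # matches the study's cohort size, for flavor only
X = np.column_stack([
    rng.integers(0, 2, n),     # single status (1 = single)
    rng.normal(49, 12, n),     # age, years
    rng.poisson(3, n),         # number of hospitalizations
    rng.integers(0, 2, n),     # psychiatric comorbidity (1 = present)
])
# Synthetic outcome loosely tied to the predictors, for illustration
log_odds = -3 + 0.4 * X[:, 0] + 0.02 * X[:, 1] + 0.2 * X[:, 2] + 0.6 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(np.exp(model.params[1:]))  # odds ratios for the four predictors
```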
The study findings were limited by several factors including the focus on an inpatient population, inability to consider clinical factors such as type of mood episode and bipolar cycle, and the cross-sectional design that prevented conclusions of causality, the researchers noted.
However, the study is the first known to focus on both sociodemographic and clinical factors associated with polypharmacy in BD, and the results suggest that implementing complementary psychosocial strategies might help reduce medication use in these patients, they concluded. Data from further longitudinal studies may help guide long-term management of BD, “especially when pharmacological discontinuation is needed,” they said.
The study received no outside funding. The researchers had no financial conflicts to disclose.
Patients with bipolar disorder (BD) often receive prescriptions for multiple medications to manage a range of medical and psychiatric symptoms, but the definition of polypharmacy in these patients is inconsistent, and characteristics associated with complex polypharmacy have not been well studied, wrote Andrea Aguglia, MD, of the University of Genoa, Italy, and colleagues.
Previous studies have shown an increased risk for comorbid medical and psychiatric illnesses in BD patients, the researchers noted, and changes in prescribing trends have prompted greater use of combination therapies such as mood stabilizers with or without antipsychotics.
In a study published in Psychiatry Research, the investigators reviewed data from 556 adults with BD. Participants were aged 18 and older with a primary diagnosis of BD type I or II based on the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, criteria. The mean age of the participants was 49.17 years, 43.7% were male, and 34.2% were employed. A total of 327 patients (58.8%) had a medical comorbidity, and 193 (34.7%) used an illicit substance.
A total of 225 patients (40.5%) met the criteria for complex polypharmacy, defined as taking at least 4 medications.
BD patients with complex polypharmacy were significantly more likely than those without complex polypharmacy to be single (50.7% vs. 37.8%, P = .025) and significantly less likely to be employed (25.3% vs. 40.2%, P < .001).
On the clinical side, complex polypharmacy in BD patients was significantly associated with a higher prevalence of both medical and psychiatric comorbidities (65.3% vs. 54.4%, P = .010; and 50.7% vs. 34.1%, P < .001, respectively). The association between medical comorbidities and complex polypharmacy in BD was unexpected, the researchers said, “as psychotropic medications should be used with cautiousness in patients suffering from medical conditions.”
BD patients with complex polypharmacy also had a significantly earlier age of onset, longer duration of illness, and increased number of hospitalizations than those without complex polypharmacy.
Rates of use of at least one substance, including alcohol, cannabinoids, and cocaine/amphetamines, were significantly higher among BD patients with complex polypharmacy than among those without, but no differences in heroin use were noted between the groups.
In a logistic regression analysis, single status, older age, number of hospitalizations, and the presence of psychiatric comorbidities were significantly associated with complex polypharmacy.
The study findings were limited by several factors, including the focus on an inpatient population, the inability to account for clinical factors such as type of mood episode and bipolar cycle, and the cross-sectional design, which precluded conclusions about causality, the researchers noted.
However, the study is the first known to focus on both sociodemographic and clinical factors associated with polypharmacy in BD, and the results suggest that implementing complementary psychosocial strategies might help reduce medication use in these patients, they concluded. Data from further longitudinal studies may help guide long-term management of BD, “especially when pharmacological discontinuation is needed,” they said.
The study received no outside funding. The researchers had no financial conflicts to disclose.
FROM PSYCHIATRY RESEARCH
One or two high-step days may reduce mortality risks
Taking 8,000 steps or more for just 1 or 2 days a week was linked to a significant reduction in all-cause and cardiovascular mortality, according to a study of about 3,000 adults.
Previous research has shown lower mortality rates among individuals who walk consistently, especially those who log at least 8,000 steps daily, but the long-term health benefit of reaching a high step count on just 1 or 2 days a week has not been examined, wrote Kosuke Inoue, MD, of Kyoto University, Japan, and colleagues.
In a study published in JAMA Network Open, the researchers reviewed 10-year follow-up data for 3,101 adults aged 20 years and older who were part of the 2005 and 2006 National Health and Nutrition Examination Survey (NHANES).
The participants were asked to wear accelerometers to track their steps for 7 consecutive days. The researchers assessed the dose-response relationship between the number of days in that week on which participants took 8,000 steps or more (about 4 miles) and the primary outcome of all-cause mortality risk after 10 years. Cardiovascular mortality risk after 10 years was a secondary outcome.
The mean age of the participants was 50.5 years and 51% were women. The breakdown by ethnicity was 51% White, 21% Black, 24% Hispanic, and 4% other races/ethnicities. A total of 632 individuals took 8,000 steps or more 0 days a week, 532 took at least 8,000 steps 1-2 days per week, and 1,937 took at least 8,000 steps 3-7 days a week.
During the 10-year follow-up period, overall all-cause mortality was 14.2% and cardiovascular mortality was 5.3% across all step groups.
In an adjusted analysis, individuals who took at least 8,000 steps 1-2 days a week had an all-cause mortality risk 14.9 percentage points lower than those who never reached 8,000 daily steps, a difference similar to the 16.5-percentage-point reduction for those who took at least 8,000 steps 3-7 days a week.
Similarly, compared with the group with no days of at least 8,000 steps, cardiovascular mortality risk was 8.1 percentage points lower for those who took 8,000 steps 1-2 days per week and 8.4 percentage points lower for those who took at least 8,000 steps 3-7 days per week. The reduction in mortality risk plateaued at 3-4 days per week.
These patterns of reduced all-cause mortality risk persisted in analyses stratified by age (younger than 65 years vs. 65 years and older) and sex. Similar patterns also emerged when the researchers used different daily step thresholds, such as a minimum of 10,000 steps instead of 8,000. The adjusted all-cause mortality rates for groups who took at least 10,000 steps 1-2 days a week, 3-7 days a week, and 0 days a week were 8.1%, 7.3%, and 16.7%, respectively, with corresponding cardiovascular mortality rates of 2.4%, 2.3%, and 7.0%.
“Given the simplicity and ease of counting daily steps, our findings indicate that the recommended number of steps taken on as few as 1 to 2 days per week may be a feasible option for individuals who are striving to achieve some health benefits through adhering to a recommended daily step count but are unable to accomplish this on a daily basis,” the researchers wrote in their discussion.
The findings were limited by several factors, including the use of daily step measures for 1 week only at baseline, with no data on how changes in physical activity might affect mortality risk, the researchers noted. Other limitations included possible accelerometer error and misclassification of activity, possible selection bias, and a lack of data on cause-specific mortality outside of cardiovascular death, they said.
However, the results were strengthened by the use of accelerometers as objective measures of activity and by the availability of 10-year follow-up data for nearly 100% of the participants, they said.
“Although our findings might suffer from residual confounding that should be addressed in future research, they suggest that people may receive substantial health benefits even if a sufficient number of steps are taken on only a couple days of the week,” they concluded.
Proceed with caution
The current study findings should be interpreted cautiously in light of the potential unmeasured confounding factors and selection bias that often occur in studies of physical activity, James Sawalla Guseh, MD, of Massachusetts General Hospital, and Jose F. Figueroa, MD, of Harvard T.H. Chan School of Public Health, Boston, wrote in an accompanying editorial.
The results support previous studies showing some longevity benefits with “weekend warrior” patterns of intense physical activity for only a couple of days; however, “the body of evidence for sporadic activity is not as robust as the evidence for sustained and regular aerobic activity,” the authors emphasized.
The editorial authors also highlighted the limitations of the current study, including the observational design and the significant differences in demographics and comorbidities between the group taking 8,000 steps 1-2 days per week and the 0-day group, as well as the reliance on a single week of step data to infer mortality over 10 years.
Although the data are consistent with previous observations that increased exercise volume reduces mortality, more research is needed, as the current study findings may not reflect other dimensions of health, including neurological health, they said.
Despite the need for cautious interpretation of the results, the current study “supports the emerging and popular idea that step counting, which does not require consideration of exercise duration or intensity, can offer guidance toward robust and favorable health outcomes,” and may inform step-based activity goals to improve public health, the editorialists wrote.
The study was supported by the Japan Agency for Medical Research and Development, the Japan Society for the Promotion of Science, the Japan Endocrine Society, and the Meiji Yasuda Life Foundation of Health and Welfare. Dr. Inoue also was supported by the Program for the Development of Next-Generation Leading Scientists With Global Insight sponsored by the Ministry of Education, Culture, Sports, Science and Technology, Japan. The other researchers had no relevant financial conflicts to disclose. The editorial authors had no financial conflicts to disclose.
FROM JAMA NETWORK OPEN
COVID-19 potentially induced adult-onset IgA vasculitis
Plasma exchange successfully improved symptoms of immunoglobulin A vasculitis in an adult female patient who developed the condition after infection with COVID-19, according to a case report published in Cureus.
Immunoglobulin A (IgA) vasculitis can affect all ages, but is relatively rare in adults, and the etiology remains unclear, wrote Hassan Alwafi, MD, of Umm Al-Qura University, Makkah, Saudi Arabia, and colleagues.
COVID-19 has been associated with a range of pulmonary and extrapulmonary complications, but its potential role in triggering IgA vasculitis is not well understood, the authors wrote.
The authors described a case of a 41-year-old otherwise healthy Saudi Arabian woman who presented with an ascending rash on both lower extremities, along with arthralgia. Blood tests showed elevated blood urea nitrogen, creatinine, and inflammatory markers, and a negative immune panel. The patient had been infected with COVID-19 approximately 2 weeks before the onset of symptoms; she had been treated with supportive care and required no antiviral therapy or dexamethasone.
In addition, the patient’s urinalysis showed proteinuria and hematuria. After a kidney biopsy revealed additional abnormalities, the patient was started on intravenous methylprednisolone pulse therapy.
A few days after the initiation of therapy, the patient experienced nosebleeds and coughing up blood. After a chest x-ray showed bilateral pleural effusion, the patient was transferred to the ICU. The patient was started on intravenous piperacillin-tazobactam, and received two doses of intravenous immunoglobulin and plasma exchange after consultation with a nephrologist. Ultimately, the initial rash and other clinical symptoms improved, and the patient was discharged with a tapering schedule of oral prednisolone.
In this case, COVID-19 may have played a role in the development of IgA vasculitis, the authors said.
The authors also listed 21 previously reported cases of IgA vasculitis following COVID-19 infection, comprising 14 children and 7 adults. Of these, three cases had combined kidney and lung involvement; the two pediatric patients died of respiratory failure, while the adult patient was successfully treated with steroid monotherapy.
“As COVID-19 is a novel disease and its pathogenic mechanism of causing IgA vasculitis is not well understood, every patient who is infected with or recently recovered from COVID-19 and presents with a skin rash or arthralgia should have baseline blood and urine tests done and should be treated promptly to avoid the emergence of irreversible consequences,” the authors wrote in their discussion.
Although case reports cannot prove a cause-and-effect link, the data from the cases in the current review suggest that COVID-19 infection may be an indirect trigger for IgA vasculitis, including cases associated with pulmonary renal syndrome, they said. However, more research is needed, especially on the efficacy of treatments in adults, they concluded.
The study received no outside funding. The researchers had no financial conflicts to disclose.
FROM CUREUS
Dupilumab moves forward as possible COPD treatment
Dupilumab significantly reduced exacerbations compared with placebo in a phase 3 trial of more than 900 adults with uncontrolled chronic obstructive pulmonary disease (COPD).
In the study, known as the BOREAS trial, dupilumab met its primary and secondary endpoints, with a significant reduction in exacerbations, compared with placebo, among adults with COPD that remained uncontrolled despite maximal standard-of-care inhaled therapy (triple therapy), according to a press release from manufacturers Regeneron and Sanofi.
Dupilumab, which inhibits the signaling of the interleukin-4 (IL-4) and interleukin-13 (IL-13) pathways, is currently approved in multiple countries for certain patients with conditions including atopic dermatitis, asthma, chronic rhinosinusitis with nasal polyps, eosinophilic esophagitis, or prurigo nodularis in different age groups. The drug is not an immunosuppressant, and would be the first biologic approved for COPD, according to the manufacturers.
In the BOREAS trial, 939 adults aged 40-80 years with COPD who were current or former smokers were randomized to dupilumab (n = 468) or placebo (n = 471); both groups continued to receive maximal standard of care.
Over 52 weeks, patients in the dupilumab group experienced a 30% reduction in moderate to severe COPD exacerbations compared with placebo (P = .0005).
In addition, patients treated with dupilumab met the key secondary endpoints of significant improvement in lung function from baseline to 12 weeks compared with placebo (160 mL vs. 77 mL, P < .0001); this difference persisted at 52 weeks (P = .0003).
Dupilumab also met endpoints for improvement in patient-reported health-related quality of life based on the St. George’s Respiratory Questionnaire (SGRQ) and reduction in the severity of respiratory symptoms of COPD based on the Evaluation Respiratory Symptoms: COPD (E-RS: COPD) Scale, according to the companies’ statement.
The results represent a previously unreported magnitude of improvement for COPD patients treated with a biologic, principal investigator George D. Yancopoulos, MD, said in the statement. “These results also validate the role type 2 inflammation plays in driving COPD in these patients, advancing the scientific community’s understanding of the underlying biology of this disease,” he added.
The safety results in the BOREAS trial were generally consistent with the known safety profile of dupilumab (Dupixent) in its approved indications, and overall adverse event rates were similar between the dupilumab and placebo groups (77% and 76%, respectively), according to the manufacturers.
The adverse events that were more common in dupilumab patients compared with placebo patients were headache (8.1% vs. 6.8%), diarrhea (5.3% vs. 3.6%), and back pain (5.1% vs. 3.4%).
Rates of adverse events leading to death were similar between the groups (1.5% with dupilumab and 1.7% with placebo).
Complete safety and efficacy results from the BOREAS trial are scheduled to be presented in a future scientific forum, and a second phase 3 trial of dupilumab for COPD, known as NOTUS, is ongoing, with data expected in 2024, according to the manufacturers.
The BOREAS trial was sponsored by Sanofi and Regeneron Pharmaceuticals.