Probiotics do not reduce late-onset sepsis or mortality in very preterm infants
A combination of three probiotics appears to safely reduce the risk of necrotizing enterocolitis in very preterm infants, but did not reduce overall late-onset sepsis or all-cause mortality, according to a recent study.
Probiotics – potentially beneficial live microorganisms – have previously been shown to reduce necrotizing enterocolitis and all-cause mortality. They are thought to promote healthier gut flora: preterm infants are more vulnerable to colonization by pathogenic bacteria from the neonatal intensive care unit environment and lack the flora biodiversity of term infants. Reductions in late-onset sepsis from probiotic administration, however, have not been shown.
This study did find a significant reduction in late-onset sepsis, but only among infants born at 28 weeks’ gestation or later, and in a subgroup with too few cases (52 infants) to be reliably powered, reported Dr. Susan E. Jacobs of the Royal Women’s Hospital in Melbourne and her colleagues (Pediatrics 2013 Nov. 18 [doi:10.1542/peds.2013-1339]).
The authors also calculated that administering probiotics to 43 very preterm infants could prevent necrotizing enterocolitis in 1 infant. Yet all-cause mortality, mortality from necrotizing enterocolitis, and other secondary outcomes did not differ significantly between the intervention and control groups.
The researchers randomized 1,099 very preterm infants, born before 32 completed weeks’ gestation and weighing less than 1,500 g, to receive either 1.5 g daily of a probiotic combination or placebo. The probiotic powder contained 1 × 10⁹ total organisms per 1.5 g: Bifidobacterium infantis (300 × 10⁶), Streptococcus thermophilus (350 × 10⁶), and B. lactis (350 × 10⁶); the placebo was maltodextrin.
The infants, born in one of eight Australian or two New Zealand hospitals between October 2007 and November 2011, were excluded if they had major congenital or chromosomal abnormalities, if their mother was taking nondietary probiotic supplements, or if death within 72 hours appeared likely. At least 95% of infants in both groups received breast milk.
Among 548 infants receiving probiotics and 551 receiving placebo, 13.1% of the probiotic group and 16.2% of the control group were diagnosed with at least one episode of definite late-onset sepsis, confirmed with an isolated or cultured pathogen and occurring later than 48 hours after birth or at least 72 hours after no longer receiving antibiotics. This result revealed no significant difference between the groups (relative risk, 0.81; P = .16).
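The overall relative risk can be checked against the published group sizes; a minimal sketch, with event counts back-calculated from the rounded percentages (so they may differ by an infant or two from the paper's exact figures):

```python
# Back-calculate approximate event counts from the published percentages,
# then recompute the relative risk of definite late-onset sepsis.
n_probiotic, n_control = 548, 551
events_probiotic = round(0.131 * n_probiotic)  # ~72 infants
events_control = round(0.162 * n_control)      # ~89 infants

risk_probiotic = events_probiotic / n_probiotic
risk_control = events_control / n_control
relative_risk = risk_probiotic / risk_control

print(f"RR = {relative_risk:.2f}")  # ~0.81, matching the reported estimate
```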
Yet among preterm infants born at 28 weeks’ gestation or later, late-onset sepsis was reduced by about half in the probiotic subgroup: 18 infants (5.5%) receiving probiotics developed late-onset sepsis, compared with 34 control infants (10.8%) (RR, 0.51; P = .01). "These results should be interpreted cautiously, as they may be chance findings," the authors wrote.
The probiotic group did see a small absolute risk reduction (2.4%) in necrotizing enterocolitis of Bell stage 2 or greater, with 2% incidence compared with 4.4% in the control group (RR, 0.46; P = .03). The number needed to treat was 43, but with a wide 95% confidence interval (23-333) because of the low event rates.
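The number needed to treat follows directly from the absolute risk reduction (NNT = 1/ARR); a quick check using the rounded percentages gives roughly 42, while the trial's exact event counts yield the published 43:

```python
# Number needed to treat: NNT = 1 / absolute risk reduction.
# Rounded percentages give ~42; the trial's exact event counts give 43.
import math

risk_control = 0.044    # NEC (Bell stage >= 2) incidence, placebo group
risk_probiotic = 0.020  # incidence, probiotic group

arr = risk_control - risk_probiotic  # absolute risk reduction, ~0.024
nnt = math.ceil(1 / arr)             # round up to whole infants

print(f"ARR = {arr:.3f}, NNT = {nnt}")
```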
However, no significant difference in mortality from necrotizing enterocolitis (Bell stage 2 or more) or from all-cause mortality during the primary hospitalization or the study period was found between the two groups. The researchers also found no significant differences in a multitude of other secondary outcomes, including clinical (not definite) late-onset sepsis. The probiotics did not appear to have any adverse effects, including definite late-onset sepsis from the administered strains.
The authors emphasized caution throughout their study and the need for larger studies to detect significant clinical differences in outcomes from probiotics. "Our findings emphasize the importance of performing well-powered trials of probiotic administration in very preterm infants in different settings," they wrote.
The authors also noted that probiotics’ effectiveness for any outcome depends on dose, conditions, strains and combinations of strains. "No one has determined which is the most effective probiotic, combination of probiotics, when they should be started, what dosage should be used, or the duration of administration," the investigators said.
The study was funded by the National Health and Medical Research Council of Australia, the Royal Women’s Hospital Foundation, and the Angior Family Foundation, the latter two in Melbourne. The probiotic combination "ABC Dophilus Powder for Infants" was provided at cost by Solgar, USA, which did not provide any additional funding. The authors reported no other disclosures.
FROM PEDIATRICS
Major finding: A three-probiotic combination administered to very preterm infants did not significantly reduce late-onset sepsis (13.1% vs. 16.2% in controls; relative risk, 0.81; P = .16) or all-cause mortality (4.9% vs. 5.1% in controls; RR, 0.97; P = .91) but did reduce the absolute risk of necrotizing enterocolitis of Bell stage 2 or greater by 2.4% (2% vs. 4.4% in controls; RR, 0.46; P = .03).
Data source: The findings are based on a prospective, multicenter, double-blinded, placebo-controlled, randomized trial involving 1,099 very preterm (less than 32 weeks’ gestation, less than 1,500 g) infants from Australia and New Zealand between October 2007 and November 2011.
Disclosures: The study was funded by the National Health and Medical Research Council of Australia, the Royal Women’s Hospital Foundation, and the Angior Family Foundation, the latter two in Melbourne. The probiotic combination "ABC Dophilus Powder for Infants" was provided at cost by Solgar, USA, which did not provide any additional funding. The authors reported no other disclosures.
Coffee consumption affects cancer risk differently for liver vs. pancreatic cancers
Drinking tea or caffeinated or decaf coffee is unlikely to influence a person’s risk for pancreatic cancer, but consuming coffee of any kind may reduce the risk of the most common liver cancer by as much as 50% (depending on amount consumed), according to two recent studies in Clinical Gastroenterology and Hepatology.
In the pancreatic cancer study, Dr. Nirmala Bhoo-Pathy of University Medical Center Utrecht, the Netherlands, and her colleagues reported, "Our results strengthen the conclusion made by the World Cancer Research Fund and the American Institute for Cancer Research that there is little evidence to support a causal relation between coffee and risk of pancreatic cancer" (Clin. Gastroenterol. Hepatol. 2013 [doi:10.1016/j.cgh.2013.05.029]).
Meanwhile, a 16-study meta-analysis of coffee intake and risk for hepatocellular carcinoma, which accounts for more than 90% of worldwide liver cancers, revealed a 40% decreased risk (relative risk, 0.60; 95% confidence interval: 0.50-0.71) for any coffee consumption vs. no consumption. Yet Dr. Francesca Bravi of Università degli Studi di Milano and her colleagues reported that their findings could not establish a causal relationship between coffee drinking and hepatocellular carcinoma (Clin. Gastroenterol. Hepatol. 2013 [doi:10.1016/j.cgh.2013.04.039]).
Even such a causal relationship may have limited clinical significance, however, considering that more than 90% of primary liver cancers worldwide can theoretically be prevented through hepatitis B vaccination, control of hepatitis C transmission, and reduction of alcohol consumption, Dr. Bravi’s team wrote.
In the first study, Dr. Bhoo-Pathy and her colleagues examined 865 first incident pancreatic cancers reported in a cohort of 477,312 men and women from 10 European countries tracked prospectively over a mean 11.6 years of follow-up. The participants in the EPIC (European Prospective Investigation into Cancer and Nutrition) cohort completed a dietary questionnaire at baseline in 1992, which was then calibrated with a 24-hour dietary recall by the final follow-up in 2000.
The 23 participating centers were in Denmark, France, Germany, Greece, Italy, the Netherlands, Norway, Spain, Sweden, and the United Kingdom, and median coffee intake across these ranged from 92 mL/day in Italy to 900 mL/day in Denmark. Among the participants with all information on coffee type intake (n = 269,593), half drank only caffeinated coffee (50%), 4% drank only decaf, a third (34%) drank both, and 12% drank no coffee. Two-thirds (66%) of the total cohort drank tea of any kind (caffeinated, green, or herbal).
Neither total intake of coffee (hazard ratio, 1.03; 95% CI: 0.83-1.27 for high vs. low intake) nor consumption of decaffeinated coffee (HR, 1.12; 95% CI: 0.76-1.63) – reported as cups drunk per day, week, or month and then converted to daily milliliters – showed a significant change in pancreatic cancer risk. Tea consumption of any kind similarly had no impact on risk (HR, 1.22; 95% CI: 0.95-1.56). These risks did not change after accounting for a range of confounders nor when analysis was confined to the 608 (70.3%) cancers that were microscopically confirmed.
Confounders included sex, clinic/center, age at diagnosis, height, weight, physical activity, smoking status, diabetes history, education level, and energy intake, including red meat, processed meat, alcohol, soft drink, tea (for coffee analysis), coffee (for tea analysis), and fruit and vegetable intake.
A comparison of moderately low vs. low caffeinated coffee intake initially revealed a modest increased risk for moderately low consumption (HR, 1.33; 95% CI: 1.02-1.74) that lost statistical significance when only microscopically confirmed pancreatic cancer cases were analyzed. Additionally, no dose-response effect was noted in any of the findings for pancreatic cancer risk.
Yet a dose-response effect was seen in Dr. Bravi’s study investigating coffee consumption and hepatocellular carcinoma risk. Her team’s update of a 2007 meta-analysis included an additional four cohort and two case-control studies, for a total of eight cohort and eight case-control studies from 14 English-language articles included in PubMed/MEDLINE between 1966 and September 2012.
When broken down by study type, the 40% overall risk reduction for any coffee consumption found among 3,153 hepatocellular carcinoma cases split into a 44% reduction in the case-control studies (RR, 0.56; 95% CI: 0.42-0.75) and a 36% reduction in the cohort studies (RR, 0.64; 95% CI: 0.52-0.78).
The dose-response relationship was seen in separate comparisons of low and high coffee consumption with no coffee consumption, using three cups a day as the cutoff in nine papers and one cup a day in five papers. Low coffee consumption reduced hepatocellular carcinoma risk by 28% (RR, 0.72; 95% CI: 0.61-0.84), while high consumption reduced it by 56% (RR, 0.44).
Each additional cup of coffee per day was associated with a 20% risk reduction (RR, 0.80; 95% CI: 0.77-0.84). This split into a 23% risk reduction in the case-control studies (RR, 0.77; 95% CI: 0.71-0.83) and a 17% risk reduction in the cohort studies (RR, 0.83; 95% CI: 0.78-0.88). A temporal analysis of risk reduction for any coffee consumption showed an increase from a 20% risk reduction in 2000 (RR, 0.80; 95% CI: 0.50-1.29) to 41% in 2007 (RR, 0.59; 95% CI: 0.48-0.72), which has remained stable at about 40% over the past several years.
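A constant per-cup relative risk implies that the reduction compounds multiplicatively with each additional cup. A hypothetical illustration of that log-linear assumption (not the authors' model specification):

```python
# Illustration only: if each daily cup multiplies risk by the per-cup RR
# of 0.80, a log-linear model implies RR = 0.80 ** cups vs. no coffee.
per_cup_rr = 0.80

for cups in (1, 2, 3):
    print(f"{cups} cup(s)/day -> RR = {per_cup_rr ** cups:.2f}")
# Three cups/day gives RR ~0.51, in the vicinity of the 0.44 reported
# for high consumption.
```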
Accounting for the most significant risk factors for liver cancer had little effect on the risk ratios. These factors included hepatitis B and C infections, cirrhosis and other liver diseases, socioeconomic status, alcohol consumption, and smoking.
Dr. Bravi’s team suggested that the risk reduction effect could be a real, causal effect arising from antioxidants and other minerals in coffee that may inhibit liver carcinogenesis or from the inverse association between coffee and cirrhosis or coffee and diabetes, both conditions known risk factors for liver cancer. Or, the effect could result, at least in part, from reduced consumption of coffee among patients with cirrhosis or other liver disease.
"Thus, a reduction of coffee consumption in unhealthy subjects cannot be ruled out, although the inverse relation between coffee and liver cancer also was present in subjects with no history of hepatitis/liver disease," the researchers wrote. Yet they also noted the potentially limited utility of coffee for risk reduction, given the greater impact on liver cancer risk of hepatitis B vaccination, prevention of hepatitis C, and reduced alcohol consumption.
The pancreatic cancer study was funded by the European Commission and the International Agency for Research on Cancer, with a long list of additional societies, foundations, and educational institutions supporting the individual national cohorts. The hepatocellular carcinoma study was funded by a grant from the Associazione Italiana per la Ricerca sul Cancro. The authors in both studies reported no disclosures.
Drinking tea or caffeinated or decaf coffee is unlikely to influence a person’s risk for pancreatic cancer, but consuming coffee of any kind may reduce the risk of the most common liver cancer by as much as 50% (depending on amount consumed), according to two recent studies in Clinical Gastroenterology and Hepatology.
In the pancreatic cancer study, Dr. Nirmala Bhoo-Pathy of University Medical Center Utrecht, the Netherlands, and her colleagues reported, "Our results strengthen the conclusion made by the World Cancer Research Fund and the American Institute of Cancer Research that there is little evidence to support a causal relation between coffee and risk of pancreatic cancer (Clin. Gastroenterol. Hepatol. 2013 [doi:10.1016/j.cgh.2013.05.029]).
Meanwhile, a 16-study meta-analysis of coffee intake and risk for hepatocellular carcinoma, which accounts for more than 90% of worldwide liver cancers, revealed a 40% decreased risk (relative risk, 0.60; 95% confidence interval: 0.50-0.71) for any coffee consumption vs. no consumption. Yet Dr. Francesca Bravi of Università degli Studi di Milano and her colleagues reported that their findings could not establish a causal relationship between coffee drinking and hepatocellular carcinoma (Clin. Gastroenterol. Hepatol. 2013 [doi:10.1016/j.cgh.2013.04.039]).
Even such a causal relationship may have limited clinical significance, however, considering that more than 90% of primary liver cancers worldwide can theoretically be prevented through hepatitis B vaccination, control of hepatitis C transmission, and reduction of alcohol consumption, Dr. Bravi’s team wrote.
In the first study, Dr. Bhoo-Pathy’s investigation involved inspection of 865 first incidences of pancreatic cancers reported in a cohort of 477,312 men and women from 10 European countries tracked prospectively over a mean 11.6 years of follow-up. The participants in the EPIC (European Prospective Investigation Into Nutrition and Cancer) cohort completed a dietary questionnaire at baseline in 1992, then calibrated with a 24-hour dietary recall by the final follow-up in 2000.
The 23 participating centers were in Denmark, France, Germany, Greece, Italy, the Netherlands, Norway, Spain, Sweden, and the United Kingdom, and median coffee intake across these ranged from 92 mL/day in Italy to 900 mL/day in Denmark. Among the participants with all information on coffee type intake (n = 269,593), half drank only caffeinated coffee (50%), 4% drank only decaf, a third (34%) drank both, and 12% drank no coffee. Two-thirds (66%) of the total cohort drank tea of any kind (caffeinated, green, or herbal).
Neither total intake of coffee (hazard ratio, 1.03; 95% CI: 0.83-1.27 for high vs. low intake) nor consumption of decaffeinated coffee (HR, 1.12; 95% CI: 0.76-1.63) – reported as cups drunk per day, week, or month and then converted to daily milliliters – showed a significant change in pancreatic cancer risk. Tea consumption of any kind similarly had no impact on risk (HR, 1.22; 95% CI: 0.95-1.56). These risks did not change after accounting for a range of confounders nor when analysis was confined to the 608 (70.3%) cancers that were microscopically confirmed.
Confounders included sex, clinic/center, age at diagnosis, height, weight, physical activity, smoking status, diabetes history, education level, and energy intake, including red meat, processed meat, alcohol, soft drink, tea (for coffee analysis), coffee (for tea analysis), and fruit and vegetable intake.
A comparison of moderately low and low caffeinated coffee intake initially revealed a modest increased risk for moderately low consumption (HR, 1.33; 95% CI: 1.02-1.74) that dropped below statistical significance when only microscopically confirmed pancreatic cancer cases were analyzed. Additionally, no dose-response effect was noted among any of the findings for pancreatic cancer risk.
Yet a dose-response effect was seen in Dr. Bravi’s study investigating coffee consumption and hepatocellular carcinoma risk. Her team’s update of a 2007 meta-analysis included an additional four cohort and two case-control studies, for a total of eight cohort and eight case-control studies from 14 English-language articles included in PubMed/MEDLINE between 1966 and September 2012.
When broken down by study type, the 40% overall risk reduction for any coffee consumption found among 3,153 hepatocellular carcinoma cases split into a 44% reduction in the case-control studies (RR, 0.56; 95% CI: 0.42-0.75) and a 36% reduction in the cohort studies (RR, 0.64; 95% CI: 0.52-0.78).
The dose-response relationship was seen in separate comparisons of low and high coffee consumption with no coffee consumption, using three cups a day as the cutoff in nine papers and one cup a day in five papers. Low coffee consumption reduced hepatocellular carcinoma risk by 28% (RR, 0.72; 95% CI: 0.61-0.84) while high consumption reduced it by 56% (RR, 0.44; 95% CI: 0.77-0.84).
Each additional cup of coffee per day resulted in a 20% risk reduction (RR, 0.80; 95% CI: 0.77-0.84). This split into a 23% risk reduction in the case-control studies (RR, 0.77; 95% CI: 0.71-0.83) and a 17% risk reduction in the cohort studies (RR, 0.83; 95% CI: 0.78-0.88). A temporal analysis of risk reduction for any coffee consumption showed an increase from 20% risk reduction in 2000 (RR, 0.8; 95% CI: 0.50-1.29) to 41% in 2007 (RR, 0.59; 95% CI: 0.48-0.72), which has remained stable at about 40% the past several years.
Accounting for the most [significant] risk factors for liver cancer had little effect on the risk ratios. These factors included hepatitis B and C infections, cirrhosis, and other liver diseases, socioeconomic status, alcohol consumption, and smoking.
Dr. Bravi’s team suggested that the risk reduction effect could be a real, causal effect arising from antioxidants and other minerals in coffee that may inhibit liver carcinogenesis or from the inverse association between coffee and cirrhosis or coffee and diabetes, both conditions known risk factors for liver cancer. Or, the effect could result, at least in part, from reduced consumption of coffee among patients with cirrhosis or other liver disease.
"Thus, a reduction of coffee consumption in unhealthy subjects cannot be ruled out, although the inverse relation between coffee and liver cancer also was present in subjects with no history of hepatitis/liver disease," the researchers wrote. Yet, they also noted the potentially limited utility of coffee risk reduction given the greater impact on reducing liver cancer risk from hepatitis B vaccination, prevention of hepatitis C, and reduction of alcoholic drinking.
The pancreatic cancer study was funded by the European Commission and the International Agency for Research on Cancer, with a long list of additional societies, foundations, and educational institutions supporting the individual national cohorts. The hepatocellular carcinoma study was funded by a grant from the Associazione Italiana per la Ricerca sul Cancro. The authors in both studies reported no disclosures.
Drinking tea or caffeinated or decaf coffee is unlikely to influence a person’s risk for pancreatic cancer, but consuming coffee of any kind may reduce the risk of the most common liver cancer by as much as 50% (depending on amount consumed), according to two recent studies in Clinical Gastroenterology and Hepatology.
In the pancreatic cancer study, Dr. Nirmala Bhoo-Pathy of University Medical Center Utrecht, the Netherlands, and her colleagues reported, "Our results strengthen the conclusion made by the World Cancer Research Fund and the American Institute of Cancer Research that there is little evidence to support a causal relation between coffee and risk of pancreatic cancer (Clin. Gastroenterol. Hepatol. 2013 [doi:10.1016/j.cgh.2013.05.029]).
Meanwhile, a 16-study meta-analysis of coffee intake and risk for hepatocellular carcinoma, which accounts for more than 90% of worldwide liver cancers, revealed a 40% decreased risk (relative risk, 0.60; 95% confidence interval: 0.50-0.71) for any coffee consumption vs. no consumption. Yet Dr. Francesca Bravi of Università degli Studi di Milano and her colleagues reported that their findings could not establish a causal relationship between coffee drinking and hepatocellular carcinoma (Clin. Gastroenterol. Hepatol. 2013 [doi:10.1016/j.cgh.2013.04.039]).
Even such a causal relationship may have limited clinical significance, however, considering that more than 90% of primary liver cancers worldwide can theoretically be prevented through hepatitis B vaccination, control of hepatitis C transmission, and reduction of alcohol consumption, Dr. Bravi’s team wrote.
In the first study, Dr. Bhoo-Pathy’s investigation involved inspection of 865 first incidences of pancreatic cancers reported in a cohort of 477,312 men and women from 10 European countries tracked prospectively over a mean 11.6 years of follow-up. The participants in the EPIC (European Prospective Investigation Into Nutrition and Cancer) cohort completed a dietary questionnaire at baseline in 1992, then calibrated with a 24-hour dietary recall by the final follow-up in 2000.
The 23 participating centers were in Denmark, France, Germany, Greece, Italy, the Netherlands, Norway, Spain, Sweden, and the United Kingdom, and median coffee intake across these ranged from 92 mL/day in Italy to 900 mL/day in Denmark. Among the participants with all information on coffee type intake (n = 269,593), half drank only caffeinated coffee (50%), 4% drank only decaf, a third (34%) drank both, and 12% drank no coffee. Two-thirds (66%) of the total cohort drank tea of any kind (caffeinated, green, or herbal).
Neither total coffee intake (hazard ratio, 1.03; 95% CI: 0.83-1.27 for high vs. low intake) nor consumption of decaffeinated coffee (HR, 1.12; 95% CI: 0.76-1.63) – reported as cups drunk per day, week, or month and then converted to daily milliliters – was significantly associated with pancreatic cancer risk. Tea consumption of any kind similarly had no effect on risk (HR, 1.22; 95% CI: 0.95-1.56). These estimates did not change after accounting for a range of confounders, nor when the analysis was confined to the 608 (70.3%) cancers that were microscopically confirmed.
Confounders included sex, clinic/center, age at diagnosis, height, weight, physical activity, smoking status, diabetes history, education level, and energy intake, including red meat, processed meat, alcohol, soft drink, tea (for coffee analysis), coffee (for tea analysis), and fruit and vegetable intake.
A comparison of moderately low and low caffeinated coffee intake initially revealed a modest increased risk for moderately low consumption (HR, 1.33; 95% CI: 1.02-1.74) that lost statistical significance when only microscopically confirmed pancreatic cancer cases were analyzed. Additionally, no dose-response effect was noted in any of the findings for pancreatic cancer risk.
Yet a dose-response effect was seen in Dr. Bravi’s study investigating coffee consumption and hepatocellular carcinoma risk. Her team’s update of a 2007 meta-analysis included an additional four cohort and two case-control studies, for a total of eight cohort and eight case-control studies from 14 English-language articles included in PubMed/MEDLINE between 1966 and September 2012.
When broken down by study type, the 40% overall risk reduction for any coffee consumption found among 3,153 hepatocellular carcinoma cases split into a 44% reduction in the case-control studies (RR, 0.56; 95% CI: 0.42-0.75) and a 36% reduction in the cohort studies (RR, 0.64; 95% CI: 0.52-0.78).
The dose-response relationship was seen in separate comparisons of low and high coffee consumption with no coffee consumption, using three cups a day as the cutoff in nine papers and one cup a day in five papers. Low coffee consumption reduced hepatocellular carcinoma risk by 28% (RR, 0.72; 95% CI: 0.61-0.84), while high consumption reduced it by 56% (RR, 0.44).
Each additional cup of coffee per day resulted in a 20% risk reduction (RR, 0.80; 95% CI: 0.77-0.84). This split into a 23% risk reduction in the case-control studies (RR, 0.77; 95% CI: 0.71-0.83) and a 17% risk reduction in the cohort studies (RR, 0.83; 95% CI: 0.78-0.88). A temporal analysis of risk reduction for any coffee consumption showed an increase from a 20% risk reduction in 2000 (RR, 0.8; 95% CI: 0.50-1.29) to 41% in 2007 (RR, 0.59; 95% CI: 0.48-0.72), which has remained stable at about 40% over the past several years.
Accounting for the most important risk factors for liver cancer had little effect on the risk ratios. These factors included hepatitis B and C infections, cirrhosis and other liver diseases, socioeconomic status, alcohol consumption, and smoking.
Dr. Bravi’s team suggested that the risk reduction effect could be a real, causal effect arising from antioxidants and other minerals in coffee that may inhibit liver carcinogenesis or from the inverse association between coffee and cirrhosis or coffee and diabetes, both conditions known risk factors for liver cancer. Or, the effect could result, at least in part, from reduced consumption of coffee among patients with cirrhosis or other liver disease.
"Thus, a reduction of coffee consumption in unhealthy subjects cannot be ruled out, although the inverse relation between coffee and liver cancer also was present in subjects with no history of hepatitis/liver disease," the researchers wrote. Yet, they also noted the potentially limited utility of coffee risk reduction given the greater impact on reducing liver cancer risk from hepatitis B vaccination, prevention of hepatitis C, and reduction of alcoholic drinking.
The pancreatic cancer study was funded by the European Commission and the International Agency for Research on Cancer, with a long list of additional societies, foundations, and educational institutions supporting the individual national cohorts. The hepatocellular carcinoma study was funded by a grant from the Associazione Italiana per la Ricerca sul Cancro. The authors in both studies reported no disclosures.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Major finding: Neither total coffee intake (whether decaffeinated or caffeinated, analyzed separately) nor tea intake appears to influence the risk of pancreatic cancer, but coffee intake of any kind reduces the risk of hepatocellular carcinoma by 40% (RR, 0.60; 95% CI 0.50-0.71), with a dose-response effect even after accounting for participants’ sex, alcohol drinking, and history of hepatitis or liver disease.
Data source: The findings of the pancreatic cancer study are based on prospective analysis of 477,312 initially cancer-free male and female participants from 10 European countries participating in the European Prospective Investigation Into Cancer and Nutrition cohort between 1992 and 2000. The liver cancer meta-analysis is based on 14 articles comprising eight case-control studies and eight cohort studies, published in PubMed/MEDLINE between 1966 and September 2012.
Disclosures: The pancreatic cancer study was funded by the European Commission and the International Agency for Research on Cancer, with a long list of additional societies, foundations, and educational institutions supporting the individual national cohorts. The hepatocellular carcinoma study was funded by a grant from the Associazione Italiana per la Ricerca sul Cancro. The authors in both studies reported no disclosures.
Abdominal fat raises risk for esophageal disease and cancer
Excess abdominal fat increases the risk for both Barrett’s esophagus and erosive esophagitis even after body mass index is accounted for, according to a recent meta-analysis. Extra fat around the middle also increases the risk for esophageal adenocarcinoma.
"Central adiposity has a strong and consistent association with development of esophageal inflammation, metaplasia, and neoplasia, independent of BMI [body mass index]," reported Dr. Siddharth Singh and his colleagues in the November issue of Clinical Gastroenterology and Hepatology (doi: 10.1016/j.cgh.2013.05.009). "In addition, central adiposity may be more highly associated with a reflux-independent effect on the development of Barrett’s esophagus and perhaps explains the predominance of esophageal adenocarcinoma in this population," said Dr. Singh of the Mayo Clinic in Rochester, Minn., and his coauthors.
The researchers conducted a systematic review and meta-analysis of all studies published through March 2013 in PubMed, Embase, or Web of Science that investigated associations between central adiposity and the risk of erosive esophagitis, Barrett’s esophagus, or esophageal adenocarcinoma. Included studies used computed tomography, waist-hip ratio, or waist circumference to assess central adiposity or visceral adipose tissue area or volume.
The researchers identified 40 studies, including 19 on erosive esophagitis, 17 on Barrett’s esophagus, and 6 on esophageal adenocarcinoma (including studies of overlapping conditions). Of the 37 independent populations covered in these studies, 18 involved Asian populations and the rest involved Western populations.
Compared with study participants in the lowest body-type category, participants with the highest central adiposity had 1.87 times the odds of erosive esophagitis, based on an analysis of 18 heterogeneous studies (adjusted odds ratio, 1.87; 95% CI, 1.51-2.31). When the researchers analyzed only the eight studies that controlled for BMI, the risk remained (aOR, 1.93; 95% CI, 1.38-2.71). Although the researchers lacked data to assess the influence of gastroesophageal reflux disease (GERD) symptoms, they did find a dose-response relationship between higher central adiposity and higher erosive esophagitis risk.
An analysis of 15 studies similarly showed a greater risk for Barrett’s esophagus with greater central adiposity – even after accounting for BMI – and a dose-response relationship. Compared with participants in the lowest category of central adiposity, those in the highest group had about double the odds of Barrett’s esophagus (aOR, 1.98; 95% CI, 1.52-2.57). When the researchers evaluated Barrett’s esophagus risk in the five studies that allowed for BMI adjustment, the risk remained high (aOR, 1.88; 95% CI, 1.20-2.95).
In the 11 Barrett’s esophagus studies that controlled for GERD or used control-group participants with GERD, abdominal fat still doubled the odds for Barrett’s esophagus (aOR, 2.04; 95% CI, 1.44-2.90). Meanwhile, overall obesity had no impact on Barrett’s esophagus risk (aOR, 1.15; 95% CI, 0.89-1.47).
Even when the investigators analyzed only the seven studies that compared GERD patients without Barrett’s esophagus to Barrett’s esophagus patients, central adiposity remained associated with an increased risk (aOR, 2.51; 95% CI, 1.48-4.25). Meanwhile, BMI showed no effect on risk in these studies (aOR, 1.23; 95% CI, 0.90-1.66). "These results suggest that central adiposity, rather than overall obesity, may have a GERD symptom-independent effect on development of esophageal metaplasia," the researchers wrote.
The six studies on esophageal adenocarcinoma revealed an increased risk for the cancer with increased abdominal adiposity (aOR, 2.51; 95% CI, 1.56-4.04), though too little data existed to evaluate a dose-response relationship or to calculate risk independent of BMI or GERD symptoms.
For all these analyses, data on the following confounders were also included when available: "age, sex, race, BMI, smoking status, alcohol consumption, GERD symptoms, use of proton pump inhibitors or histamine receptor antagonists, presence of hiatal hernia, family history of esophageal adenocarcinoma, caffeine intake, Helicobacter pylori infection, use of putative chemopreventive agents (aspirin, nonsteroidal anti-inflammatory drugs, statins), and for studies reporting EAC [esophageal adenocarcinoma] as outcome, presence, length, and histology of Barrett’s esophagus."
The authors suggested several possible reasons for the findings, starting with the higher risk for reflux that exists with more abdominal fat. They also noted that abdominal fat may cause systemic or inflammatory effects that could lead to Barrett’s esophagus and cancer, whether independently or in conjunction with other factors.
Past research has already shown an increased risk for colon and pancreatic cancer resulting from visceral fat’s "adipocytokine-mediated carcinogenic effect," the researchers wrote. They also noted the link between abdominal fat and insulin resistance and pointed out that recent research has found evidence for the "role of the insulin–insulin growth factor-1 axis in promoting esophageal neoplasia."
The study was funded by the National Institute of Diabetes and Digestive and Kidney Diseases and the American College of Gastroenterology. The authors had no disclosures.
Over the past several decades, obesity has reached epidemic proportions in the United States. Obesity is associated with an increased risk of several gastrointestinal malignancies, including esophageal adenocarcinoma. Body mass index (BMI), calculated as a function of height and weight, is the measure traditionally used to estimate obesity in studies of disease association. While increased BMI is generally associated with a modest increased risk of esophageal adenocarcinoma, associations with Barrett's esophagus have been inconsistent. However, it may be more important to focus on central adiposity, as visceral fat produces many proinflammatory cytokines (or adipokines) that in turn may have cancer-promoting effects.
In fact, recent studies that have used measures of central adiposity such as waist-to-hip ratio (WHR) have reported more-consistent associations with an increased risk of esophageal neoplasia. Singh et al. performed an excellent meta-analysis of these studies and found a nearly twofold increased risk of esophagitis, Barrett's esophagus, and esophageal adenocarcinoma. Furthermore, this association persisted even after adjusting for BMI, suggesting that the association between obesity and esophageal neoplasia is largely mediated by central adiposity.
Based on these results, future studies of obesity and Barrett's esophagus and esophageal adenocarcinoma should focus on central adiposity, as estimated by WHR, CT volumetric analysis, or some other means. Additionally, research should be aimed at understanding how visceral fat contributes to the development of esophageal adenocarcinoma and whether we can implement measures specifically targeted at reducing visceral fat to lower EAC risk.
Dr. Julian Abrams is the Florence Irving Assistant Professor of Medicine in the division of digestive and liver diseases, Columbia University Medical Center, New York. He has no conflicts of interest to report.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Major finding: Compared with normal abdominal adiposity, above-normal central adiposity increased the risk of erosive esophagitis after adjustment for body mass index (adjusted odds ratio, 1.93; 95% CI, 1.38-2.71); increased the risk for Barrett’s esophagus after adjustment for BMI (aOR, 1.88; 95% CI, 1.20-2.95) or after adjustment for gastroesophageal reflux (aOR, 2.04; 95% CI, 1.44-2.90); and increased the risk for esophageal adenocarcinoma (aOR, 2.51; 95% CI, 1.54-4.06) without adjustment for BMI or GERD.
Data source: The findings are based on a systematic review and meta-analysis of 40 articles pulled from PubMed, Embase, and Web of Science databases through March 2013, including (with overlap) 19 studies on erosive esophagitis, 17 on Barrett’s esophagus, and 6 on esophageal adenocarcinoma.
Disclosures: The study was funded by the National Institute of Diabetes and Digestive and Kidney Diseases and the American College of Gastroenterology. The authors had no disclosures.
Parental addiction in childhood increases risk for depression in adulthood
Adults were more likely to have depression if they had a parent with a drug or alcohol addiction that caused problems in the family, a study showed.
The increased risk for depression remained even after investigators accounted for other factors that might have increased the risk for depression.
"These findings underscore the intergenerational consequences of drug and alcohol addiction and reinforce the need to develop interventions that support healthy childhood development to prevent ongoing patterns of addiction and depression," Esme Fuller-Thomson, Ph.D., and her colleagues at the University of Toronto wrote in Psychiatry Research (2013 [doi: 10.1016/j.psychres.2013.02.024]). They noted that in the United States, 7.3 million children under age 18 live with a parent who misuses alcohol and 2.1 million children live with a parent who abuses illicit drugs, based on previous research.
The investigators analyzed the responses of 6,268 adults from Saskatchewan in the 2005 Canadian Community Health Survey. A total of 15.2% of the responders had been exposed to a parent with addiction, based on an affirmative answer to whether either of their parents drank alcohol or used drugs so often that it caused problems in their family.
Meanwhile, 5.6% of the sample were classified as depressed based on the Composite International Diagnostic Interview - Short Form (using DSM-III-R criteria). Those with at least a 90% probability for a major depressive episode lasting at least 2 weeks within the past year were considered depressed.
Before any confounders beyond age, sex, and race (white or minority) were considered, adults exposed to a parent with addiction had more than double the odds of having depression, compared with peers with nonaddicted parents (OR = 2.27; 95% CI, 1.74-2.96). Then the researchers added four categories of confounders to their model: adverse childhood experiences, responders’ socioeconomic status as adults, life stressors in adulthood, and adult health behaviors.
After investigators accounted for these factors, adults who had a parent with an alcohol or drug addiction remained at higher risk for depression, with 1.69 times the odds of peers whose parents did not have these addictions (OR = 1.69; 95% CI, 1.25-2.28). Although women were more likely than men to have depression across the full sample, gender had no significant influence on the link between parental addiction and adult depression.
The adult health behavior confounders included alcohol consumption, smoking status, body mass index (BMI), and physical activity. The responders’ adult socioeconomic status was assessed by their highest level of education and their household income, because depression is associated with poverty and lower education levels.
Adverse childhood experiences included parental divorce, parental unemployment (for at least a period of time), and physical abuse. Adult responders’ other life stressors included in the analysis were self-reported daily stress level, having a chronic condition, and marital status (married or single/separated/divorced/widowed).
Adding each of these categories did not change the initial odds much, except the addition of adverse childhood experiences, which dropped the adults’ odds (without the other categories) to 1.67 times greater depression risk among those with an addicted parent compared with those with a nonaddicted parent (OR = 1.67; 95% CI, 1.25-2.24). The reduction in odds after the addition of adverse childhood experiences to the overall model just missed statistical significance, the authors reported, leading this study’s findings to vary from the findings of past studies, "which found that co-occurring adverse childhood experiences, such as abuse, accounted for the entire association between parental addiction and depression."
"Moreover, the number of adversities experienced in childhood was independently associated with the increased odds of depression," Dr. Fuller-Thomson and colleagues reported. They also found increased daily stress, having at least one chronic illness and being single, to independently predict depression.
Study limitations included a lack of information on parental depression, the sex of the addicted parent, prenatal exposure to drugs or alcohol, and early environmental or genetic factors that previously have all been shown to increase the risk of adult depression. However, the link between parental addiction and adult depression even after the other factors in this study were accounted for point to a possible "susceptibility to a set of genes common to both depression and addiction," the authors suggested.
The only funding source noted was internal support from the first author’s Sandra Rotman Endowed Chair in Social Work. No other disclosures were reported.
Adults were more likely to have depression if they had a parent with a drug or alcohol addiction that caused problems in the family, a study showed.
The increased risk for depression remained even after investigators accounted for other factors that might have increased the risk for depression.
"These findings underscore the intergenerational consequences of drug and alcohol addiction and reinforce the need to develop interventions that support healthy childhood development to prevent ongoing patterns of addiction and prevention," Esme Fuller-Thomson, Ph.D., and her colleagues at the University of Toronto wrote in Psychiatry Research (2013 [doi: 10.1016/j.psychres.2013.02.024]). They noted that 7.3 million children under age 18 live with a parent who misuses alcohol and 2.1 million children live with a parent who abuses illicit drugs in the United States, based on previous research.
The investigators analyzed the responses of 6,268 adults from Saskatchewan in the 2005 Canadian Community Health Survey. A total of 15.2% of the responders had been exposed to a parent with addiction, based on an affirmative answer to whether either of their parents drank alcohol or used drugs so often that it caused problems in their family.
Meanwhile, 5.6% of the sample were classified as depressed based on the Composite International Diagnostic Interview - Short Form (using DSM-III-R criteria). Those with at least a 90% probability for a major depressive episode lasting at least 2 weeks within the past year were considered depressed.
Before any confounders beyond age, sex, and race (white or minority) were considered, adults exposed to a parent with addiction had more than double the odds of having depression, compared with peers with nonaddicted parents (OR = 2.27; 95% CI, 1.74-2.96). Then the researchers added four categories of confounders to their model: adverse childhood experiences, responders’ socioeconomic status as adults, life stressors in adulthood, and adult health behaviors.
After investigators accounted for these factors, adults who had a parent with an alcohol or drug addiction remained at higher risk for depression, with 69% greater odds than peers whose parents did not have these addictions (OR = 1.69; 95% CI, 1.25-2.28). Although women were more likely than men to have depression across the full sample, gender did not significantly affect the link between parental addiction and adult depression.
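An unadjusted odds ratio such as the 2.27 reported above can be reproduced from a 2x2 table with the standard Wald method. The cell counts below are hypothetical, chosen only to be consistent with the survey's published marginals (6,268 adults, 15.2% exposed, 5.6% depressed); the study's actual counts are not given in this report:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Wald 95% CI from a 2x2 table:
    a/b = cases/non-cases among the exposed,
    c/d = cases/non-cases among the unexposed."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical split: 953 exposed (97 depressed), 5,315 unexposed
# (254 depressed) -- totals match the survey, the split does not
# come from the paper.
or_, lo, hi = odds_ratio_ci(97, 856, 254, 5061)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # -> OR 2.26 (95% CI 1.77-2.88)
```

The adjusted OR of 1.69 comes from a regression model rather than a 2x2 table, so it cannot be recovered this way.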
The adult health behavior confounders included alcohol consumption, smoking status, body mass index (BMI), and physical activity. Adult socioeconomic status comprised the responders’ highest level of education and household income, because depression is associated with poverty and lower education.
Adverse childhood experiences included parental divorce, parental unemployment for an extended period, and physical abuse. The other adult life stressors in the analysis were self-reported daily stress level, having a chronic condition, and marital status (married or single/separated/divorced/widowed).
Adding each of these categories individually changed the initial odds little, with the exception of adverse childhood experiences, which alone reduced the odds to 1.67 times greater depression risk among those with an addicted parent, compared with those with a nonaddicted parent (OR = 1.67; 95% CI, 1.25-2.24). The reduction in odds after adverse childhood experiences were added to the overall model just missed statistical significance, the authors reported. This sets the study apart from past studies, "which found that co-occurring adverse childhood experiences, such as abuse, accounted for the entire association between parental addiction and depression."
"Moreover, the number of adversities experienced in childhood was independently associated with the increased odds of depression," Dr. Fuller-Thomson and colleagues reported. They also found that increased daily stress, having at least one chronic illness, and being single independently predicted depression.
Study limitations included a lack of information on parental depression, the sex of the addicted parent, prenatal exposure to drugs or alcohol, and early environmental or genetic factors, all of which previously have been shown to increase the risk of adult depression. However, the persistence of the link between parental addiction and adult depression after the other factors in this study were accounted for points to a possible "susceptibility to a set of genes common to both depression and addiction," the authors suggested.
The only funding source noted was internal support from the first author’s Sandra Rotman Endowed Chair in Social Work. No other disclosures were reported.
FROM PSYCHIATRY RESEARCH
Major finding: Adults whose parents had an alcohol or drug addiction had 69% greater odds of having depression (OR = 1.69; 95% CI, 1.25-2.28) compared with peers with nonaddicted parents, regardless of the adults’ gender and after accounting for the influences of adverse childhood experiences, adult health behaviors, adult socioeconomic status, and other life stressors.
Data source: The data are based on an analysis of 6,268 Saskatchewan adults’ self-reported responses in the 2005 Canadian Community Health Survey.
Disclosures: The only funding source noted was internal support from the first author’s Sandra Rotman Endowed Chair in Social Work. No other disclosures were reported.
Mothers’ postpartum concerns predict failure to meet breastfeeding goals
First-time mothers’ widely reported breastfeeding problems or concerns predicted the likelihood that they would stop breastfeeding or give their infants formula within the first 2 months after giving birth, a study showed.
The 92% of mothers with any concerns at 3 days post partum were nine times more likely to stop breastfeeding before 2 months post partum, primarily because of infant feeding difficulties or concerns about milk quantity.
"Breastfeeding problems were a nearly universal experience in this cohort of first-time mothers," reported Erin A. Wagner of Cincinnati Children’s Hospital Medical Center and her colleagues in Pediatrics (2013 Sept. 23 [doi:10.1542/peds.2013-0724]). The concerns were "highly prevalent, persistent, and associated with not meeting breastfeeding goals," the investigators said.
Meanwhile, the lack of association between early breastfeeding cessation and prenatal breastfeeding concerns (as opposed to postpartum concerns) implies that the women’s failures to meet breastfeeding goals "do not appear to be simply the ‘self-fulfillment’ of anticipated problems," the researchers added.
The investigators conducted 2,946 interviews, starting with 532 primiparous expectant mothers, about half of whom (49%) were younger than 25. Just over a quarter (27%) were older than 30. Approximately half had private insurance and half had public insurance.
Ms. Wagner and her colleagues then followed up post partum with 447 of the participants, beginning within 24 hours of delivery and conducting additional interviews on days 3, 7, 14, 30, and 60 post partum. After losses to follow-up, 418 mothers with infant feeding information at 2 months post partum made up the final sample.
From that final sample, 47% of the 354 mothers who intended to feed their babies only breast milk for at least 2 months ended up feeding any formula to their child between 30 and 60 days post partum. Among the 406 mothers who intended to breastfeed for at least 2 months, 21% stopped breastfeeding by 60 days post partum.
To classify the women’s breastfeeding concerns, the researchers sorted the women’s 4,179 open-ended answers involving concerns into nine main categories with a total of 49 subcategories. The most prevalent concern on delivery day was infant feeding difficulties, reported by 44% of the mothers. Three days post partum, 54% of mothers reported infant feeding difficulties, 42% reported breastfeeding pain, and 42% reported concerns about milk quantity. Infant feeding difficulties included latch problems, sleepy infants, nipple confusion or infant feeding refusal, fussy or frustrated infants, poor infant feeding, and problems with the length or frequency of the baby’s breastfeeding sessions.
After taking into account the women’s education and their breastfeeding intentions in prenatal interviews, the researchers identified the two breastfeeding concerns that contributed most to early cessation of breastfeeding. An estimated 32% of those who stopped would have continued if not for infant feeding difficulties reported 7 days post partum (population attributable risk [PAR] = 32%).
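A PAR of this kind is conventionally described by Levin's formula, which combines exposure prevalence with relative risk. A minimal sketch, using the reported 54% prevalence of infant feeding difficulties for illustration and a hypothetical RR back-solved to show how a ~32% PAR can arise (the study's own PAR came from its regression model, not this simple formula):

```python
def population_attributable_risk(p_exposed, rr):
    """Levin's formula: the fraction of cases that would be averted
    if the exposure were removed, given the exposure prevalence
    p_exposed and the relative risk rr of the outcome."""
    excess = p_exposed * (rr - 1)
    return excess / (1 + excess)

# 54% of mothers reported infant feeding difficulties; an RR of
# ~1.87 (hypothetical, not reported) yields a PAR near 32%.
par = population_attributable_risk(0.54, 1.87)
print(round(par, 2))  # -> 0.32
```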
"Most notably, the predominant subcategories at day 7 contributing to stopping breastfeeding under the infant feeding difficulty main category were ‘fussy or frustrated at the breast,’ ‘infant refusing to breastfeed/nipple confusion,’ and ‘problems with latch,’ " the researchers wrote. The second highest PAR was 23% for reporting concerns about milk quantity 2 weeks post partum. Accounting for the women’s age, ethnicity, health insurance status, or prenatal perceptions about breastfeeding ability did not significantly alter these findings.
Although 79% of the mothers reported at least one breastfeeding concern during prenatal interviews, the peak of concerns occurred on the third postpartum day, with 92% of women reporting at least one concern. The peak for reporting pain while breastfeeding occurred 1 week post partum with 47% of mothers.
Concerns about breastfeeding during the first week post partum appeared to contribute the most toward women’s using formula or stopping breastfeeding by 2 months post partum. Mothers reporting any breastfeeding concerns at 3 days after giving birth were three times more likely to feed their child formula between 1 and 2 months post partum (ARR = 3.3; 95% CI, 1.7-15.0). They were nine times more likely to stop breastfeeding within 60 days post partum (ARR = 9.2; 95% CI, 3.0 to infinity).
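An unadjusted version of that risk ratio can be computed directly from a 2x2 split, and doing so suggests why the upper confidence bound is reported as "infinity": the reference group is tiny. The split below is hypothetical (418 mothers minus the 34 without day-3 concerns gives 384 with concerns; the 85 stoppers are an assumption for illustration, not a figure from the paper):

```python
def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Unadjusted risk ratio: (a/n1) / (c/n0)."""
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# Hypothetical: 85 of the 384 mothers with a day-3 concern stopped
# breastfeeding, vs. 1 of the 34 concern-free mothers. With a single
# event in the reference group, the estimate is very imprecise and
# its upper confidence bound is effectively unbounded.
rr = relative_risk(85, 384, 1, 34)
print(round(rr, 1))  # -> 7.5
```

The study's 9.2 is an adjusted estimate from a model, so it need not match an unadjusted calculation like this one.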
The researchers identified 34 outliers who reported no breastfeeding concerns 3 days post partum; all but one of these women continued breastfeeding past 60 days post partum. These women were more likely to be younger than 30, to be Hispanic, to have prenatal confidence in their ability to breastfeed, to have an unmedicated vaginal delivery, and to report strong breastfeeding support.
The researchers noted that the high levels of concerns reported in the first week postpartum might have been most strongly associated with adverse outcomes because the "day 3 and day 7 interviews captured a time when there is often a gap between hospital and community lactation support resources." The results "reinforce the recommendation of the American Academy of Pediatrics that all breastfed newborns receive an evaluation by a provider knowledgeable in lactation management within 2 to 3 days post discharge," the investigators said.
One limitation cited by the investigators is that the study’s generalizability may depend on how closely breastfeeding norms and support in other communities resemble those of the study population.
The study was funded by the National Institutes of Health and the Perinatal Institute of Cincinnati Children’s Hospital Medical Center. Laurie A. Nommsen-Rivers, Ph.D., received a stipend for a lecture at the 2012 National WIC Association meeting. The other three authors reported no relevant financial disclosures.
FROM PEDIATRICS
Major finding: The 92% of women who reported breastfeeding concerns at 3 days post partum were nine times more likely to stop breastfeeding within 60 days (adjusted relative risk, 9.2; 95% CI, 3.0 to infinity), with 54% reporting infant feeding problems, 42% reporting breastfeeding pain, and 42% reporting milk quantity concerns.
Data source: An analysis of 2,946 interviews with 532 primiparas through University of California Davis Medical Center, with prenatal interviews and follow-up interviews at 0, 3, 7, 14, 30, and 60 days post partum.
Disclosures: The study was funded by the National Institutes of Health and the Perinatal Institute of Cincinnati Children’s Hospital Medical Center. Laurie A. Nommsen-Rivers, Ph.D., one of the researchers, received a stipend for a lecture at the 2012 National WIC Association meeting. The other three authors reported no relevant financial disclosures.
Healthy weight-related behaviors in teens increased, but so did BMI
Adolescents were watching less TV, eating fewer sweets, and drinking fewer sweetened soft drinks in 2010 than they were in 2001, yet their body mass index percentiles went up, not down, according to a recent study. The study also found U.S. teens were eating more fruits and vegetables, having breakfast more often, and getting more physical activity in 2010 than they were in 2001.
"It may be that current public health efforts are succeeding," Ronald J. Iannotti, Ph.D., and Jing Wang, Ph.D., of the Eunice Kennedy Shriver National Institute of Child Health and Human Development reported in Pediatrics (2013 Sept. 16 [doi:10.1542/peds.2013-1488]). "Yet it appears that the magnitude of these changes in health behaviors were not sufficient to reverse the trends in weight status."
Dr. Iannotti and Dr. Wang analyzed results from the Health Behavior in School-Aged Children surveys administered to three nationally representative samples of U.S. students in grades 6 through 10. The 83% response rate in 2001-2002 yielded a sample of 14,818 adolescents; the 87% response rate in 2005-2006 yielded 9,227 participants; and the 89% response rate in 2009-2010 yielded 10,993 participants. The researchers oversampled black and Hispanic students "to obtain better estimates for these groups."
From 2001 to 2010, physical activity among adolescents overall increased even though it remained fewer than 5 days/week for all three samples. The number of days teens reported getting at least 60 minutes of physical activity increased from 4.33 days/week in 2001-2002 to 4.53 days in 2009-2010. For each sample, boys reported more physical activity than did girls, and Hispanics reported less physical activity than did whites.
As physical activity increased, there was a drop in TV watching (P less than .001), from 3.06 hours/day in 2001-2002 (weight-averaged for weekdays and weekends) to 2.65 hours daily in 2005-2006 and 2.38 hours daily in 2009-2010. Hispanics, blacks, and "other" ethnicities reported more hours of TV watching than white teens did.
Computer use and video-game playing were assessed only in 2005-2006 and 2009-2010, and neither showed any significant change overall, although an increase in video-game playing was seen among girls only. Participants averaged less than 2 hours/day of video-game playing, which was higher in boys, younger teens, and nonwhite teens. Computer use – higher in girls, older teens, and also nonwhite teens – averaged less than 2 hours/day (P less than .001 for all results).
Assessment of fruit and vegetable intake was done on a scale of 1 (never) to 7 (more than once a day), with 6 denoting a serving at least once a day. Intake for both increased over the three samples (P less than .001), with a mean 4.29 for fruits and 4.31 for vegetables in 2001-2002 increasing to 4.91 for fruits and 4.61 for vegetables in 2009-2010. The increase in vegetables was driven by boys from 2001-2002 to 2005-2006 and in girls for 2005-2006 to 2009-2010.
Meanwhile, sweets and sweetened soft drinks, measured on the same 1-7 scale, decreased over time (P less than .001) with the greatest drop for soft drinks occurring between 2001-2002 and 2005-2006. In 2001-2002, a mean 4.7 was reported for sweets and mean 4.85 was reported for sweetened soft drinks. These decreased to 4.48 for sweets and 4.36 for sweetened soft drinks in 2005-2006 and 4.1 for sweets and 4.18 for sweetened soft drinks in 2009-2010.
Another improvement seen across the samples was an increase in adolescents’ reported weekday breakfast eating (P less than .001). The teens reported eating breakfast an average of 2.98 weekdays per week in 2001-2002, which increased to 3.12 in 2005-2006 and 3.25 in 2009-2010. No significant change was seen in breakfasts eaten on weekends (1.59 days/weekend in 2001-2002 to 1.62 days/weekend in 2009-2010). Those eating breakfast less frequently tended to be females, older adolescents, and blacks and Hispanics (P less than .001).
Yet, despite the decrease in obesogenic behaviors and the increase in healthy behaviors, average body mass index (BMI) percentiles in the teens increased over time, driven by the increase from 2001-2002 to 2005-2006 among both boys and girls. While 70.1% of the sample had a normal weight in 2001-2002, this dropped to 66.6% in 2005-2006 and remained similar (66.5%) in 2009-2010. Meanwhile, the percentage of overweight (14.9%) and obese (10.3%) teens in 2001-2002 increased to 17% and 12.7%, respectively, in 2005-2006 and similarly stabilized at 16.6% and 12.7%, respectively, in 2009-2010.
In line with other research, black boys had 1.36 times greater odds and black girls had 2.19 times greater odds of being obese, compared with white boys and girls. Hispanics were also more likely than whites to be obese, with an odds ratio of 1.79 for Hispanic boys and an OR of 1.60 for Hispanic girls.
Despite the improvements in healthy behaviors, the authors said more room for improvement exists, given that most adolescents are not engaging in physical activity at least 60 minutes every day or eating at least five servings of fruits and vegetables daily. Similarly, most teens continue to exceed the recommendation of watching TV for no more than 2 hours a day.
"Establishment of obesogenic behaviors during adolescence is important because physical activity and diet track from adolescence to adulthood," the authors wrote. "Furthermore, there is evidence that most U.S. youth engage in multiple obesogenic behaviors, putting them at greater risk for physical and psychological health problems and indicating they could benefit from intervention targeting physical activity, sedentary behavior and diet."
Yet the authors noted that the leveling off of BMI between 2005-2006 and 2009-2010 may indicate a stabilization that could potentially begin a downward trend with continued improvements in physical activity, sedentary behavior, and dietary behaviors.
One area of concern, the authors reported, related to the age differences in behaviors. "Compared with younger adolescents, older adolescents reported less physical activity, more computer use, less frequent consumption of fruits and vegetables, more frequent consumption of sweets and sweetened soft drinks, and less frequent consumption of breakfast on weekdays," they wrote. "Thus, it appears that obesogenic behaviors increase with age, and this increase corresponds with an increase in obesity."
The study was supported by the Eunice Kennedy Shriver National Institute of Child Health and Human Development and by the Maternal and Child Health Bureau of the Health Resources and Services Administration. The authors reported no financial disclosures.
FROM PEDIATRICS
Major Finding: Despite an increase in physical activity, fruit and vegetable consumption, and daily breakfasts in addition to a decrease in TV viewing and consumption of sweets and sweetened soft drinks from 2001 to 2010 (P less than .001 for all changes), adolescents also experienced an increase in body mass index percentiles over this time.
Data Source: The data is based on analysis of Health Behavior in School-Aged Children surveys from three nationally representative samples of U.S. students in grades 6-10, with 14,818 adolescents in 2001-2002, 9,227 adolescents in 2005-2006, and 10,993 adolescents in 2009-2010. There was a deliberate oversampling of black and Hispanic students.
Disclosures: The study was supported by the Eunice Kennedy Shriver National Institute of Child Health and Human Development and by the Maternal and Child Health Bureau of the Health Resources and Services Administration. The authors reported no financial disclosures.
Puffy fingers improve prediction of very early systemic sclerosis
The presence of puffy fingers in patients with Raynaud’s phenomenon who are antinuclear antibody positive was further validated as an important sign of possible early systemic sclerosis in a recent study.
The findings support the value of using the European League Against Rheumatism Scleroderma Trials and Research Group’s (EUSTAR’s) new criteria of ANA positivity, Raynaud’s phenomenon, and puffy fingers as three red flags that raise suspicion for very early systemic sclerosis, reported Dr. Tünde Minier of the University of Pécs in Hungary and her colleagues (Ann. Rheum. Dis. 2013 Aug. 12 [doi: 10.1136/annrheumdis-2013-203716]). They calculated that the positive predictive value of ANA positivity for developing systemic sclerosis in patients with Raynaud’s phenomenon increased from 33.9% to 88.5% when combined with the presence of puffy fingers.
The researchers examined 469 patients with Raynaud’s phenomenon who were enrolled in the multicenter, prospective, observational Very Early Diagnosis of Systemic Sclerosis (VEDOSS) study at 33 EUSTAR centers throughout and outside Europe. About a third of the patients were ANA negative (32.2%), and 67.8% were ANA positive. Among the ANA-positive participants, 53.6% had a systemic sclerosis pattern on nailfold capillaroscopy, compared with only 13.4% of the ANA-negative patients (P less than .001).
Just over half of the ANA-negative patients lacked a systemic sclerosis pattern on nailfold capillaroscopy, systemic sclerosis–related specific clinical symptoms, or an erythrocyte sedimentation rate of 25 mm/hr or greater, and so were diagnosed with primary Raynaud’s.
The examinations of the patients also included assessments for digital ulcers, digital pitting scars, telangiectases, calcinosis, tendon friction rubs, esophageal symptoms, and symptoms consistent with median nerve compression syndrome.
The most common clinical features in the ANA-positive patients were previous or current puffy fingers, found in 38.5% of the ANA-positive patients versus 23.3% of the ANA-negative patients, and esophageal symptoms, identified in 35.2% of the ANA-positive patients and 18.4% of the ANA-negative patients.
"Almost 90% of ANA-positive Raynaud’s phenomenon patients with previous or current finger edematous skin changes (puffy fingers) already had a nailfold capillaroscopy systemic sclerosis pattern and/or systemic sclerosis–specific autoantibodies," Dr. Minier’s team wrote. Specifically, 73.3% of ANA-positive patients with Raynaud’s and puffy fingers had a nailfold capillaroscopy systemic sclerosis pattern, compared with only 41.2% of ANA-positive patients without puffy fingers. Overall, 88.5% of ANA-positive patients with Raynaud’s and puffy fingers met the criteria for very early systemic sclerosis.
The ANA-positive patients with puffy fingers were also more likely to have other symptoms than were the ANA-positive patients without puffy fingers. Sclerodactyly was identified in 17.8% of the ANA-positive patients with puffy fingers, compared with 6.2% of the ANA-positive patients without current or previous puffy fingers (P = .002). Telangiectases also appeared on 17.3% of the ANA-positive patients with puffy fingers, compared with 9.2% of those without puffy fingers (P = .033). Similarly, 42.1% of ANA-positive patients with puffy fingers had esophageal symptoms, compared with 30.9% of those without puffy fingers (P = .043).
The researchers also noted that even puffy fingers in ANA-negative patients may require more careful follow-up, because 17% of the ANA-negative patients with current or previous puffy fingers had a nailfold capillaroscopy systemic sclerosis pattern and 20% of the ANA-negative patients with puffy fingers had sclerodactyly with other systemic sclerosis symptoms.
The authors noted that one limitation of their study is the inability to generalize their findings to the broader population of Raynaud’s phenomenon patients seen by general physicians, since a higher percentage of the patients enrolled in this cohort met very early systemic sclerosis classification.
"Patients identified with the very early diagnostic criteria may also have scleroderma with limited cutaneous involvement or even undifferentiated connective tissue disease," the authors wrote, a limitation that longer-term follow-up data may clarify through a comparison of the ACR 1980 classification criteria and the new ACR-EULAR criteria.
The study did not use external funding. The authors had no disclosures.
* This story was updated 8/20/2013.
It is appreciated among rheumatologists that although Raynaud’s phenomenon is common in the general population, it occurs more commonly in patients with connective tissue disorders, and in particular in those with scleroderma. Recently, preliminary criteria have been proposed by the European League Against Rheumatism (EULAR) Scleroderma Trials and Research Group to identify very early systemic sclerosis. Three red-flag features were identified as those that would raise suspicion for early scleroderma: Raynaud’s phenomenon, puffy fingers, and a positive ANA. In Dr. Minier’s study, puffy fingers emerged as an important feature suggesting a higher likelihood of systemic sclerosis.
It has already been appreciated that patients with Raynaud’s phenomenon who have a positive ANA have a higher likelihood of developing a connective tissue disorder. In this cohort study, the presence of puffy fingers in the past or at the time of evaluation markedly increased the positive predictive value that scleroderma would be present or develop. Perhaps even more important was the observation that among patients with Raynaud’s phenomenon without detectable ANA (in whom the likelihood of an evolving connective tissue disorder was traditionally felt to be low), as many as 20% of those with puffy fingers ultimately developed features suggestive of systemic sclerosis.
Rheumatologists have typically recognized puffy fingers as a potentially important clinical finding, indicating a likelihood of the patient having an inflammatory arthritis, connective tissue disorder, or scleroderma. Of course, there is a differential diagnosis of puffy hands, including thyroid disease, edema, and other metabolic abnormalities. Nevertheless, in the context of Raynaud’s phenomenon and detectable ANA, vigilance for evolution toward a scleroderma spectrum disorder is warranted.
Ultimately, the balance of disease criteria can be difficult. More liberal diagnostic criteria allow recognition and diagnosis of the disorder earlier in the disease course. That must be balanced against the downside of establishing a diagnosis of a potentially dangerous and debilitating rheumatic disease in a patient in whom it might never fully evolve. Indeed, the diagnosis itself could be a major cause of anxiety, and even morbidity, for those patients.
The finding that puffy fingers are an important predictive factor in establishing a diagnosis of systemic sclerosis would be of particular value if recognized in the general medical community as a means of identifying patients who warrant further investigation or rheumatologic referral.
Dr. Robert Spiera is a professor of clinical medicine at Weill Cornell Medical College and director of the Scleroderma, Vasculitis and Myositis Center at the Hospital for Special Surgery in New York.
It is appreciated among rheumatologists that although
Raynaud’s phenomenon is common in the general population, it occurs more
commonly in patients with connective tissue disorders and in particular in those
with scleroderma. Recently, preliminary criteria have been proposed by the European
League Against Rheumatism (EULAR) Scleroderma Trials and Research Group to
identify very early systemic sclerosis. Three red-flag features were identified
as those which would raise suspicion for early scleroderma: Raynaud’s phenomenon,
puffy fingers, and a positive ANA. In Dr. Minier’s study, puffy fingers emerged
as an important feature suggesting a higher likelihood of systemic sclerosis.
![]() |
| Dr. Robert Spiera |
It has already been appreciated that patients with Raynaud’s phenomenon who have a positive ANA have a higher likelihood of developing a
connective tissue disorder. In this cohort study, the presence of puffy fingers
in the past or at the time of evaluation markedly increased a positive
predictive value that scleroderma would be present or develop. Perhaps even
more important was their observation that among patients with Raynaud’s phenomenon
without detectable ANA (in whom the likelihood of a connective tissue disorder
evolving was traditionally felt to be low), if puffy fingers were present,
as many as 20% ultimately developed features suggestive of systemic sclerosis.
Rheumatologists have typically recognized the presence
of puffy fingers as potentially an important clinical finding, indicating a
likelihood of the patient having an inflammatory arthritis, connective tissue
disorder, or scleroderma. Of course there is a differential diagnosis of
puffy hands, including thyroid disease, edema, or other metabolic
abnormalities. Nevertheless, in the context of Raynaud’s phenomenon and
detectable ANA, vigilance for evolution toward a scleroderma spectrum disorder
is warranted.
Ultimately, the balance of disease criteria can be
difficult. More liberal diagnostic criteria allow the recognition and diagnosis
of the disorder in patients earlier in the disease course. That must be
balanced against the downside of establishing a diagnosis of a potentially dangerous
and debilitating rheumatic disease in a patient in whom it might not more fully
evolve. Ultimately, the very diagnosis itself could be a major cause of anxiety
and, indeed, morbidity to those patients.
The finding, however, of the importance of puffy
fingers as a predictive factor in establishing a diagnosis of systemic
sclerosis is important, and would be particularly of value if recognized in the
general medical community in terms of identifying patients worthy of further
investigation or rheumatologic referral.
Dr. Robert Spiera is a professor of clinical medicine at
Weill Cornell Medical College and director of the Scleroderma, Vasculitis and Myositis Center at the Hospital for Special Surgery in New York.
It is appreciated among rheumatologists that although
Raynaud’s phenomenon is common in the general population, it occurs more
commonly in patients with connective tissue disorders and in particular in those
with scleroderma. Recently, preliminary criteria have been proposed by the European
League Against Rheumatism (EULAR) Scleroderma Trials and Research Group to
identify very early systemic sclerosis. Three red-flag features were identified
as those which would raise suspicion for early scleroderma: Raynaud’s phenomenon,
puffy fingers, and a positive ANA. In Dr. Minier’s study, puffy fingers emerged
as an important feature suggesting a higher likelihood of systemic sclerosis.
![]() |
| Dr. Robert Spiera |
It has already been appreciated that patients with Raynaud’s phenomenon who have a positive ANA have a higher likelihood of developing a
connective tissue disorder. In this cohort study, the presence of puffy fingers
in the past or at the time of evaluation markedly increased a positive
predictive value that scleroderma would be present or develop. Perhaps even
more important was their observation that among patients with Raynaud’s phenomenon
without detectable ANA (in whom the likelihood of a connective tissue disorder
evolving was traditionally felt to be low), if puffy fingers were present,
as many as 20% ultimately developed features suggestive of systemic sclerosis.
Rheumatologists have typically recognized the presence
of puffy fingers as potentially an important clinical finding, indicating a
likelihood of the patient having an inflammatory arthritis, connective tissue
disorder, or scleroderma. Of course there is a differential diagnosis of
puffy hands, including thyroid disease, edema, or other metabolic
abnormalities. Nevertheless, in the context of Raynaud’s phenomenon and
detectable ANA, vigilance for evolution toward a scleroderma spectrum disorder
is warranted.
Ultimately, striking the right balance in disease criteria can be difficult. More liberal diagnostic criteria allow the disorder to be recognized and diagnosed earlier in the disease course. That must be balanced against the downside of establishing a diagnosis of a potentially dangerous and debilitating rheumatic disease in a patient in whom it might never fully evolve. The diagnosis itself could become a major cause of anxiety and, indeed, morbidity for those patients.
The observation that puffy fingers are a predictive factor for a diagnosis of systemic sclerosis is important, however, and would be of particular value if recognized in the general medical community as a means of identifying patients who warrant further investigation or rheumatologic referral.
Dr. Robert Spiera is a professor of clinical medicine at
Weill Cornell Medical College and director of the Scleroderma, Vasculitis and Myositis Center at the Hospital for Special Surgery in New York.
The presence of puffy fingers in patients with Raynaud’s phenomenon who are antinuclear antibody positive was further validated as an important sign of possible early systemic sclerosis in a recent study.
The findings support the value of using the European League Against Rheumatism Scleroderma Trials and Research Group’s (EUSTAR’s) new criteria of ANA positivity, Raynaud’s phenomenon, and puffy fingers as three red flags that raise suspicion for very early systemic sclerosis, reported Dr. Tünde Minier of the University of Pécs in Hungary and her colleagues (Ann. Rheum. Dis. 2013 Aug. 12 [doi: 10.1136/annrheumdis-2013-203716]). They calculated that the positive predictive value of ANA positivity for developing systemic sclerosis in patients with Raynaud’s phenomenon increased from 33.9% to 88.5% when combined with the presence of puffy fingers.
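The positive predictive value reported here is simply the fraction of test-positive patients who truly have (or develop) the condition. A minimal sketch of that arithmetic, using hypothetical counts chosen only to land near the reported 33.9% and 88.5% figures (these are not the study's actual patient counts):

```python
# Illustrative sketch, not the study's data: positive predictive value (PPV)
# is true positives divided by all positives (true + false).

def ppv(true_pos, false_pos):
    """Fraction of test-positive patients who truly have the condition."""
    return true_pos / (true_pos + false_pos)

# Hypothetical: ANA positivity alone, e.g. 107 of 316 ANA-positive patients
# meeting the very early systemic sclerosis criteria
print(round(100 * ppv(107, 209), 1))   # -> 33.9

# Hypothetical: ANA positivity plus puffy fingers, e.g. 85 of 96 patients
print(round(100 * ppv(85, 11), 1))     # -> 88.5
```

The jump from roughly one in three to nearly nine in ten illustrates why adding a cheap clinical sign to a lab result can substantially sharpen a screening rule.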
The researchers examined 469 patients with Raynaud’s phenomenon who were enrolled in the multicenter, prospective, observational Very Early Diagnosis of Systemic Sclerosis (VEDOSS) study at 33 EUSTAR centers throughout and outside Europe. About a third of the patients were ANA negative (32.2%), and 67.8% were ANA positive. Among the ANA-positive participants, 53.6% had a systemic sclerosis pattern on nailfold capillaroscopy, compared with only 13.4% of the ANA-negative patients (P less than .001).
Just over half of the ANA-negative patients lacked a systemic sclerosis pattern on nailfold capillaroscopy, systemic sclerosis–related specific clinical symptoms, or an erythrocyte sedimentation rate of 25 mm/hr or greater, and so were diagnosed with primary Raynaud’s.
The examinations of the patients also included assessments for digital ulcers, digital pitting scars, telangiectases, calcinosis, tendon friction rubs, esophageal symptoms, and symptoms consistent with median nerve compression syndrome.
The most common clinical features in the ANA-positive patients were previous or current puffy fingers, found in 38.5% of the ANA-positive patients versus 23.3% of the ANA-negative patients, and esophageal symptoms, identified in 35.2% of the ANA-positive patients and 18.4% of the ANA-negative patients.
"Almost 90% of ANA-positive Raynaud’s phenomenon patients with previous or current finger edematous skin changes (puffy fingers) already had a nailfold capillaroscopy systemic sclerosis pattern and/or systemic sclerosis–specific autoantibodies," Dr. Minier’s team wrote. Specifically, 73.3% of ANA-positive patients with Raynaud’s and puffy fingers had a nailfold capillaroscopy systemic sclerosis pattern, compared with only 41.2% of ANA-positive patients without puffy fingers. Overall, 88.5% of ANA-positive patients with Raynaud’s and puffy fingers met the criteria for very early systemic sclerosis.
The ANA-positive patients with puffy fingers were also more likely to have other symptoms than were the ANA-positive patients without puffy fingers. Sclerodactyly was identified in 17.8% of the ANA-positive patients with puffy fingers, compared with 6.2% of the ANA-positive patients without current or previous puffy fingers (P = .002). Telangiectases also appeared on 17.3% of the ANA-positive patients with puffy fingers, compared with 9.2% of those without puffy fingers (P = .033). Similarly, 42.1% of ANA-positive patients with puffy fingers had esophageal symptoms, compared with 30.9% of those without puffy fingers (P = .043).
The researchers also noted that even puffy fingers in ANA-negative patients may require more careful follow-up, because 17% of the ANA-negative patients with current or previous puffy fingers had a nailfold capillaroscopy systemic sclerosis pattern and 20% of the ANA-negative patients with puffy fingers had sclerodactyly with other systemic sclerosis symptoms.
The authors noted that one limitation of their study is the inability to generalize their findings to the broader population of Raynaud’s phenomenon patients seen by general physicians, since a higher percentage of the patients enrolled in this cohort met the very early systemic sclerosis classification.
"Patients identified with the very early diagnostic criteria may also have scleroderma with limited cutaneous involvement or even undifferentiated connective tissue disease," the authors wrote, a limitation that longer-term follow-up data comparing the ACR 1980 classification criteria with the new ACR-EULAR criteria may help clarify.
The study did not use external funding. The authors had no disclosures.
* This story was updated 8/20/2013.
FROM ANNALS OF THE RHEUMATIC DISEASES
Major finding: Puffy fingers were identified in more Raynaud's phenomenon patients who were antinuclear antibody positive (38.5%) than ANA negative (23.3%, P less than .01).
Data source: The findings are based on an analysis of signs and symptoms in 469 Raynaud’s phenomenon patients enrolled in the Very Early Diagnosis of Systemic Sclerosis cohort from 33 EUSTAR centers throughout and outside Europe.
Disclosures: Information on the study’s funding was unavailable. The authors had no disclosures.
Affective processing may differ in bipolar I patients
The underlying neural mechanisms for processing affective stimuli appear to differ in patients with bipolar I disorder, compared with those of healthy controls, a study has shown.
Both functional MRI scans during patients’ exposure to facial expressions and an overt task of identifying fearful or happy faces revealed differences in bipolar patients’ processing of emotional information relative to healthy controls, reported Kelly A. Sagar and her associates at McLean Hospital, Belmont, Mass., in the Journal of Affective Disorders (2013 May 30 [doi:10.1016/j.jad.2013.05.019]).
Ms. Sagar’s team wrote that their findings suggest that bipolar I patients "have difficulties with the identification of certain emotional expressions," potentially resulting "in an inability to appropriately read social cues, which often leads to miscommunication, misinterpretation, and compromised interpersonal relationships."
The researchers assessed the affective processing of 23 bipolar I patients and 18 healthy controls in two separate tasks. The bipolar I patients, with a mean age of 26.65 (plus or minus 6.65) years and a mean bipolar disorder onset age of 16.5 (plus or minus 3.65) years, were primarily euthymic at the time of the study, and their pharmacotherapeutic regimens had been stable for at least 12 weeks before the study began.
Four bipolar patients were unmedicated at the time of the study, four were taking antidepressants, four were taking benzodiazepines, 11 were taking antipsychotics, and 17 were taking mood stabilizers. The healthy controls tended to be younger, with a mean age of 23.11 (plus or minus 3.15) years, but the controls’ years of education (15.53 plus or minus 21.22) were similar to those of the bipolar participants (14.57 plus or minus 1.68).
During the functional MRI task, the participants completed a backward-masked affect paradigm in which they viewed black and white photographs of male and female faces with different expressions, shown for 30 milliseconds each. The two affective conditions were fearful and happy, alternated with neutral faces and with neutral masks shown between each face for 170 milliseconds.
Ms. Sagar’s team reported results for both single sample analyses and contrast analyses (subtracting one group map from the other for each face condition).
In the single sample analyses, the patients with bipolar disorder showed altered activation in the amygdala and increased activation in the anterior cingulate and dorsolateral prefrontal cortex during the fear condition, compared with the controls. The bipolar patients’ activation patterns in these three regions were more diffuse during the happy condition than in the controls.
In the contrast analyses, relative to controls, bipolar patients showed increased activation in the anterior cingulate cortex, bilateral amygdala, and dorsolateral prefrontal cortex during the fear condition and higher activation in the subgenual anterior cingulate, right dorsolateral prefrontal cortex, and left amygdala during the happy condition. The controls showed higher activation in the midcingulate and left dorsolateral prefrontal cortex during the happy condition, compared with the participants with bipolar disorder.
After the functional MRI scan, participants completed the computerized Facial Expression of Emotion Stimuli and Test, in which they had to identify the most closely represented emotion (anger, disgust, fear, happiness, sadness, or surprise) for 60 faces shown for 5 seconds each.
The bipolar participants identified fewer of the fearful faces, with an average 68.91% accuracy (standard deviation = 21.74), compared with 80% accuracy (SD = 14.14) among the controls. Identification of the happy faces, however, was comparable between the groups, with 97.39% accuracy (SD = 7.51) among the bipolar participants and 98.89% accuracy (SD = 3.23) among the controls.
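The accuracy figures above are group means with standard deviations computed across participants. A small sketch of that calculation, using invented per-participant accuracies (not the study's data) chosen so the mean matches the reported 68.91% for fearful faces:

```python
# Hypothetical sketch: group mean and sample standard deviation of
# per-participant percent-correct scores. The scores below are invented
# for illustration only; they are not the study's data.
import statistics

# Percent of fearful faces correctly identified, one value per subject
scores = [75.0, 45.0, 90.0, 65.0, 69.55]

mean_acc = statistics.mean(scores)    # group mean accuracy
sd_acc = statistics.stdev(scores)     # sample standard deviation

print(round(mean_acc, 2))  # -> 68.91
```

The large standard deviation reported for the fearful condition (21.74) relative to the happy condition (7.51) indicates that bipolar participants varied much more from one another when judging fear than when judging happiness.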
The researchers noted that the functional MRI scan results revealed that the differences in emotional processing between bipolar patients and healthy controls occurred when the stimuli were shown "below the level of conscious awareness, suggesting a disruption early in the neural circuit responsible for affective processing." These findings, along with the greater difficulty bipolar patients had in identifying fearful faces during the overt task, corroborate similar findings in other studies, including a meta-analysis finding impairments among bipolar patients in recognizing facial emotions.
"Given the behavioral alterations and difficulty in inhibiting inappropriate responses often seen in patients with [bipolar disorder], these findings may have implications for reading cues in social situations, which may result in negative consequences," the authors wrote.
The study was limited by the moderate number of participants; the statistically significant, but likely not biologically meaningful, age difference between the two groups; and the inability to determine whether medication status could have affected the results.
The study was funded by the Jim and Pat Poitras Foundation and by a National Institute on Drug Abuse grant. The authors reported that they had no relevant financial disclosures.
FROM THE JOURNAL OF AFFECTIVE DISORDERS
Major finding: In a task following functional MRI scans, bipolar patients identified fearful facial expressions less accurately than controls (68.91% vs. 80% accuracy) but identified happy ones at comparable rates (97.39% vs. 98.89%).
Data source: The findings are based on an analysis of fMRI scans and results from the Facial Expression of Emotion Stimuli and Test for 23 bipolar I participants and 18 healthy controls.
Disclosures: The study was funded by the Jim and Pat Poitras Foundation and by a National Institute on Drug Abuse grant. The authors reported that they had no relevant financial disclosures.
Racial, ethnic disparities found in diagnosis of children with ADHD
Children are diagnosed and treated for attention-deficit/hyperactivity disorder at disproportionate rates based on their race or ethnicity, a study has shown.
Hispanic children were about half as likely as white children, and African American children were about two-thirds less likely, to have received a diagnosis by junior high. In addition, white children with ADHD were about two to three times more likely to be taking medication for their disorder than were children of all other racial/ethnic backgrounds, the results showed.
Starting in kindergarten and continuing through eighth grade, racial/ethnic disparities in ADHD diagnosis and medication use were identified in a cohort of 17,100 children, reported Paul L. Morgan, Ph.D., of Pennsylvania State University, University Park, and his associates (Pediatrics 2013 June 24 [doi:10.1542/peds.2012-2390]). Hispanic children had 50% lower odds and African American children had 69% lower odds of being diagnosed compared with white children, after adjustment for confounders. Children of other races/ethnicities had 46% lower odds of being diagnosed with ADHD.
Dr. Morgan’s team identified 6.6% of the initial cohort as being diagnosed with ADHD by eighth grade. The diagnosis was based on parental report that the child had been formally diagnosed by a professional with ADHD, attention-deficit disorder (ADD), or hyperactivity by their kindergarten, first-, third-, fifth-, or eighth-grade years.
The cohort was made up of about 19% Hispanics, 16% non-Hispanic African Americans, 57% non-Hispanic whites, and 8% children of other races/ethnicities, including Asian, Native Hawaiian, Pacific Islander, Native American, and Alaskan Native.
The researchers conducted two analyses: The first included results with only race/ethnicity and time used as predictors from the full cohort of 17,100 kindergarteners. The second, using the 15,100 children for whom the data were available, "included additional child- and family-level predictors measured in kindergarten, as well as time-varying measures of children’s behavioral and academic functioning." The child- and family-level predictors used were low birth weight; mother’s age; health insurance status; English-speaking parents and socioeconomic status, based on family income; and the mother’s and father’s education levels and occupations.
Children’s externalizing and learning-related behaviors were assessed by their kindergarten, first-, third-, and fifth-grade teachers, using the Externalizing Problem Behaviors and the Approaches to Learning subscales of the Social Rating Scale. Averages of the children’s reading and mathematics standardized test scores were used to estimate their academic achievement.
In the first analysis, unadjusted for child or family factors and compared with white children, the odds of being diagnosed with ADHD were 57% lower for Hispanic children, 36% lower for African American children, and 47% lower for children of other races/ethnicities (P less than .001).
In the second model, adjusted for other predictors, Hispanic children were 50% less likely (confidence interval, 0.38-0.66), African American children were 69% less likely (CI, 0.24-0.40), and children of other races/ethnicities were 46% less likely (CI, 0.39-0.74) to be diagnosed with ADHD than white children were (P less than .001).
The researchers also used both models in calculating the odds ratios for prescription medication use for ADHD among the children. Parents reported whether their children were taking prescription medication, including methylphenidate, amphetamine, or atomoxetine, related to ADD, ADHD, or hyperactivity while the children were in fifth and eighth grades.
Compared with white children, in the first unadjusted model, Hispanic children were 64% less likely and African American children were 65% less likely to be taking medication for ADHD (P less than .001). Children of other races/ethnicities were 58% less likely to be taking medication for ADHD than were white children (P less than.01).
In the second model, the odds of taking prescription medication for ADHD were 47% lower for Hispanic children (confidence interval, 0.29-0.98; P less than .05), 65% lower for African American children (CI, 0.19-0.62; P less than .001), and 51% lower for children of other races/ethnicities (CI, 0.26-0.95; P less than .05).
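Phrases such as "69% lower odds" follow directly from the underlying odds ratio: an OR below 1 corresponds to 100 × (1 − OR) percent lower odds. A back-of-envelope sketch of that conversion, with ORs inferred from the reported percentages for illustration only:

```python
# Illustrative arithmetic (my sketch, not from the paper): converting an
# odds ratio below 1 into "percent lower odds".

def percent_lower_odds(odds_ratio):
    """Express an odds ratio below 1 as 'percent lower odds'."""
    return round(100 * (1 - odds_ratio), 1)

# ORs below are inferred from the reported percentages, for illustration.
print(percent_lower_odds(0.50))  # -> 50.0, "50% less likely"
print(percent_lower_odds(0.31))  # -> 69.0, "69% less likely"
```

Note that lower odds are not quite the same as lower probability; for uncommon outcomes such as an ADHD diagnosis (6.6% of this cohort), the two are close, but the distinction matters when outcomes are frequent.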
The researchers identified other statistically significant predictors of ADHD diagnosis. Children without health insurance had 33% lower odds of being diagnosed with ADHD than children with health insurance (P less than .01). Children were more likely to be diagnosed with ADHD if they had a mother older than 38 when the child was born, compared with mothers aged 18-38 (OR, 1.65; P less than .001) or if the parents were English speaking (OR, 1.86; P less than .05). As has been found in past research, boys were twice as likely to be diagnosed (OR, 1.98; P less than .001).
Children with higher achievement or who engaged in learning-related behaviors were 30% and 41% less likely, respectively, to be diagnosed with ADHD (P less than .001). Children showing externalizing problem behaviors were 46% more likely to be diagnosed (P less than .001).
Dr. Morgan and his colleagues suggested that clinicians might be "disproportionately responsive to white parents who are more likely to solicit ADHD diagnosis and treatment for their children."
In light of these findings, they urged medical and school-based professionals to use intensive culturally sensitive monitoring to make sure that appropriate screening, diagnosis, and treatment for ADHD is extended to all children.
The study was funded by the National Center for Special Education Research, Institute of Education Sciences, U.S. Department of Education, and National Institutes of Health. The authors reported no relevant financial disclosures.
Children are diagnosed and treated for attention-deficit/hyperactivity disorder at disproportionate rates based on their race or ethnicity, a study has shown.
Compared with white children, Hispanic children were about half as likely, and African American children about two-thirds less likely, to have received a diagnosis by junior high. In addition, white children with ADHD were about two to three times more likely to be taking medication for their disorder than were children of all other racial/ethnic backgrounds, the results showed.
Starting in kindergarten and continuing through eighth grade, racial/ethnic disparities in ADHD diagnosis and medication use were identified in a cohort of 17,100 children, reported Paul L. Morgan, Ph.D., of Pennsylvania State University, University Park, and his associates (Pediatrics 2013 June 24 [doi:10.1542/peds.2012-2390]). Hispanic children had 50% lower odds and African American children had 69% lower odds of being diagnosed compared with white children, after adjustment for confounders. Children of other races/ethnicities had 46% lower odds of being diagnosed with ADHD.
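The "percent lower odds" figures reported throughout are direct transformations of the odds ratios (an OR of 0.31 corresponds to 69% lower odds). A minimal sketch of that conversion, using the adjusted ORs for diagnosis given in the study's summary:

```python
# Convert an odds ratio (OR) to "percent lower odds" relative to the
# reference group (white children in this study).
def pct_lower_odds(odds_ratio: float) -> int:
    return round((1 - odds_ratio) * 100)

# Adjusted ORs for ADHD diagnosis reported in the study.
adjusted_or = {"Hispanic": 0.50, "African American": 0.31, "Other": 0.54}

for group, or_value in adjusted_or.items():
    print(f"{group}: OR {or_value} -> {pct_lower_odds(or_value)}% lower odds")
# -> 50%, 69%, and 46% lower odds, matching the percentages in the text
```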
Dr. Morgan’s team identified 6.6% of the initial cohort as being diagnosed with ADHD by eighth grade. The diagnosis was based on parental report that the child had been formally diagnosed by a professional with ADHD, attention-deficit disorder (ADD), or hyperactivity by their kindergarten, first-, third-, fifth-, or eighth-grade years.
The cohort was made up of about 19% Hispanics, 16% non-Hispanic African Americans, 57% non-Hispanic whites, and 8% children of other races/ethnicities, including Asian, Native Hawaiian, Pacific Islander, Native American, and Alaska Native.
The researchers conducted two analyses: The first included results with only race/ethnicity and time used as predictors from the full cohort of 17,100 kindergarteners. The second, using the 15,100 children for whom the data were available, "included additional child- and family-level predictors measured in kindergarten, as well as time-varying measures of children’s behavioral and academic functioning." The child- and family-level predictors used were low birth weight; mother’s age; health insurance status; English-speaking parents and socioeconomic status, based on family income; and the mother’s and father’s education levels and occupations.
Children’s externalizing and learning-related behaviors were assessed by their kindergarten, first-, third-, and fifth-grade teachers, using the Externalizing Problem Behaviors and the Approaches to Learning subscales of the Social Rating Scale. Averages of the children’s reading and mathematics standardized test scores were used to estimate their academic achievement.
In the first analysis, unadjusted for child or family factors and compared with white children, the odds of being diagnosed with ADHD were 57% lower for Hispanic children, 36% lower for African American children, and 47% lower for children of other races/ethnicities (P less than .001).
In the second model, adjusted for other predictors, Hispanic children were 50% less likely (95% confidence interval, 0.38-0.66), African American children were 69% less likely (95% CI, 0.24-0.40), and children of other races/ethnicities were 46% less likely (95% CI, 0.39-0.74) to be diagnosed with ADHD than white children were (P less than .001).
The researchers also used both models in calculating the odds ratios for prescription medication use for ADHD among the children. Parents reported whether their children were taking prescription medication, including methylphenidate, amphetamine, or atomoxetine, related to ADD, ADHD, or hyperactivity while the children were in fifth and eighth grades.
Compared with white children, in the first unadjusted model, Hispanic children were 64% less likely and African American children were 65% less likely to be taking medication for ADHD (P less than .001). Children of other races/ethnicities were 58% less likely to be taking medication for ADHD than were white children (P less than .01).
In the second model, the odds of taking prescription medication for ADHD were 47% lower for Hispanic children (95% CI, 0.29-0.98; P less than .05), 65% lower for African American children (95% CI, 0.19-0.62; P less than .001), and 51% lower for children of other races/ethnicities (95% CI, 0.26-0.95; P less than .05).
The researchers identified other statistically significant predictors of ADHD diagnosis. Children without health insurance had 33% lower odds of being diagnosed with ADHD than children with health insurance (P less than .01). Children were more likely to be diagnosed with ADHD if they had a mother older than 38 when the child was born, compared with mothers aged 18-38 (OR, 1.65; P less than .001) or if the parents were English speaking (OR, 1.86; P less than .05). As has been found in past research, boys were twice as likely to be diagnosed (OR, 1.98; P less than .001).
Children with higher achievement or who engaged in learning-related behaviors were 30% and 41% less likely, respectively, to be diagnosed with ADHD (P less than .001). Children showing externalizing problem behaviors were 46% more likely to be diagnosed (P less than .001).
Dr. Morgan and his colleagues suggested that clinicians might be "disproportionately responsive to white parents who are more likely to solicit ADHD diagnosis and treatment for their children."
In light of these findings, they urged medical and school-based professionals to use intensive culturally sensitive monitoring to make sure that appropriate screening, diagnosis, and treatment for ADHD is extended to all children.
The study was funded by the National Center for Special Education Research, Institute of Education Sciences, U.S. Department of Education, and National Institutes of Health. The authors reported no relevant financial disclosures.
FROM PEDIATRICS
Major finding: Hispanic children (odds ratio, 0.50), African American children (OR, 0.31), and children of other races/ethnicities (OR, 0.54) were less likely than white children to be diagnosed with attention-deficit/hyperactivity disorder by eighth grade, after adjustment for confounders.
Data source: The Early Childhood Longitudinal Study, Kindergarten Class of 1998-1999, a nationally representative cohort of 17,100 children.
Disclosures: The study was funded by the National Center for Special Education Research, Institute of Education Sciences, U.S. Department of Education, and National Institutes of Health. The authors reported no relevant financial disclosures.
Updated acute bacterial sinusitis guidelines include four major changes
Giving clinicians the option to wait up to 3 days before treating the most common presentation of acute bacterial sinusitis is among the changes to the American Academy of Pediatrics’ updated clinical practice guidelines for treating these infections.
About 5%-10% of upper respiratory tract infections in children develop into acute bacterial sinusitis, according to the new guidelines, published in Pediatrics.
Other changes include a new clinical presentation and discouraging the use of x-rays to confirm the diagnosis. The guidelines, published online, were written by Dr. Ellen R. Wald, chair of pediatrics at the University of Wisconsin, Madison, and her associates (Pediatrics 2013 June 24 [doi:10.1542/peds.2013-1071]). The guidelines incorporate data from an accompanying systematic review of the research published since the last guidelines were issued in 2001.
The added presentation is a worsening course, defined as "worsening or new onset of nasal discharge, daytime cough, or fever after initial improvement." This presentation joins the existing severe onset (a fever of at least 39° C [102.2° F] with at least 3 days of purulent nasal discharge) and the most common presentation, persistent illness lasting more than 10 days without improvement.
For those with symptoms of nasal discharge, daytime cough, or fever lasting more than 10 days, clinicians may discuss with the parent whether to treat right away or wait a few days. For severe onset and worsening symptoms, clinicians should prescribe antibiotic therapy right away. First-line treatment is amoxicillin with or without clavulanate, followed by a reassessment of initial management if the symptoms worsen or do not improve within 72 hours.
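The initial-management rules described above can be summarized in a short sketch (illustrative only, not clinical software; the presentation labels follow the article's definitions):

```python
# Map each of the three presentations of acute bacterial sinusitis to the
# guideline's initial management, as summarized in the article.
def initial_management(presentation: str) -> str:
    if presentation in ("severe onset", "worsening course"):
        # Severe or worsening symptoms: antibiotic therapy right away.
        return "prescribe antibiotics now"
    if presentation == "persistent illness":
        # Symptoms lasting more than 10 days without improvement:
        # shared decision with the parent to treat or observe.
        return "treat now or observe up to 3 days"
    raise ValueError("not one of the guideline's three presentations")
```

In all three cases, the article notes, initial management is reassessed if symptoms worsen or fail to improve within 72 hours.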
The guidelines do not recommend adjuvant therapies, including intranasal corticosteroids, saline nasal irrigation or lavage, topical or oral decongestants, mucolytics, and topical or oral antihistamines.
Among the four major changes to the guidelines, which also include the new presentation and the updated evidence review, the option for delayed treatment in nonsevere cases and the recommendation not to use imaging are especially relevant for clinical practice, according to Dr. Wald, a pediatric infectious disease specialist.
"When the AAP writes about this, they’re talking about it as joint decision making," Dr. Wald said in an interview. "If the parent really wants treatment at that time, I think the doctor’s going to want to do it. It’s being a little bit more permissive in tolerating the symptoms for a few more days. The clinician is given the option to treat immediately or, with the parents’ consent, they can wait a few days to see if the child gets better spontaneously."
Dr. Wald noted that the decision to treat can involve a trade-off, so these guidelines offer the clinicians more latitude in making the cost-benefit analysis with the parent, taking into account the illness severity, the child’s quality of life, and the parents’ values and concerns.
"The reason we like to treat it is that kids get better faster," Dr. Wald said. "On the one hand, we want the kid to get better faster, but on the other hand, we don’t want to use the antibiotic if we don’t have to because we want to avoid side effects or, from a public health perspective, the increased antibiotic resistance for the population." The most common side effect of antibiotics is diarrhea, she said; fewer patients may experience a rash.
The guideline discouraging imaging stems from findings that imaging offers little clinical benefit. "In the past, a diagnostician would get a set of x-rays to see if the sinuses were cloudy and confirm the diagnosis if they found cloudy sinuses," Dr. Wald said. "However, x-rays are frequently abnormal even in children with uncomplicated colds, so the x-rays are not a help. Therefore, we’re encouraging people to make the diagnosis only on clinical grounds."
However, the guidelines do encourage clinicians to get a "contrast-enhanced CT scan of the paranasal sinuses and/or an MRI with contrast whenever a child is suspected of having orbital or central nervous system complications of acute bacterial sinusitis" because discovered abscesses may require surgical intervention.
The systematic review, conducted by Dr. Michael J. Smith, a pediatric infectious disease specialist at the University of Louisville (Ky.), included evidence from 17 randomized controlled trials in the treatment of sinusitis in children (Pediatrics 2013 June 24 [doi:10.1542/peds.2013-1072]). All published since 2001, these trials add to the evidence base of 21 studies published between 1966 and 1999 that was used in the previous guidelines.
Among the 17 new trials, 4 were randomized, double-blind, placebo-controlled trials of antimicrobial therapy in a combined 392 children, but they were too heterogeneous in criteria and results (2 favored treatment and 2 found no significant difference between treatment and control) to support a formal meta-analysis. Comparisons were further complicated by the long time span over which the trials were conducted, the introduction of universal conjugate pneumococcal vaccination, the increased prevalence of other bacterial infections, and the variance in placebo-group clinical improvement, which ranged from 14% to 79% across the studies.
Five other trials that compared antimicrobial therapies lacked placebo controls, three dealt with subacute sinusitis rather than acute, and six tested various ancillary treatments. These ancillary treatments included steroids, nasal spray, saline irrigation, and mucolytic agents, but with small study populations and mostly equivocal results.
"Greater severity of illness at the time of presentation seems to be associated with increased likelihood of antimicrobial efficacy," Dr. Smith said.
Dr. Smith identified several clinical questions that require additional research: definitions of acute, subacute, and recurrent acute sinusitis; the epidemiology of sinusitis in the pneumococcal conjugate vaccine era; the effectiveness of antimicrobial prophylaxis; accurate estimates for duration of symptoms; and clinical utility of various imaging types.
The guidelines and systematic review did not identify any external funding used. Dr. Smith has received research funding from Sanofi Pasteur and Novartis. Dr. Nelson is employed by McKesson Health Solutions. Dr. Wald, Dr. Shaikh, and Dr. Rosenfeld have published research related to sinusitis. No other disclosures were reported.
In the revised Clinical Practice Guideline on management of acute sinusitis endorsed by the American Academy of Pediatrics (Pediatrics 2013;132:e262-e280, http://pediatrics.aappublications.org/content/early/2013/06/19/peds.2013-1071), there are three changes from the previous guideline: (1) the addition of a clinical presentation designated as “worsening course,” (2) an option to treat immediately or observe children with persistent symptoms for 3 days before treating, and (3) a review of evidence indicating that imaging is not necessary in children with uncomplicated acute bacterial sinusitis.
The authors of the guideline are authorities in the field and have done a good job under difficult circumstances. The evidence on the best diagnosis and management of acute bacterial sinusitis is limited and out of date, as shown by a companion systematic review of the topic by Dr. Michael Smith in the same issue of Pediatrics.
Making guidelines without good evidence is challenging and often leads to limited adoption by practitioners. Purulence of nasal discharge is now accepted by most as a natural part of a viral upper respiratory infection as the host immune system becomes activated, and neutrophils and lymphocytes migrate to the nasopharynx to clear the infection. However, waiting for 10 days of purulence before making the diagnosis is built on methodology employed by Dr. Wald in her group’s seminal trials; it was empiric and has not been systematically investigated.
Treatment recommendations also are not evidence based, but are influenced greatly by the risks of unnecessary, excessive antibiotic use for the common cold. Antibiotic selection now mirrors the guideline for acute otitis media (AOM).
There have been no new data from maxillary sinus punctures in children for over 30 years, and the microbiology is reasonably presumed to be the same as that of AOM. Our group is the only one in the United States collecting tympanocentesis data from children with AOM, and those data are only from children 6-36 months old. Prevnar 13 is changing the dynamics of the bacterial pathogen mix of AOM and, presumably, of sinusitis. In our work, we find that only 30% of respiratory bacteria isolated from young children are susceptible to amoxicillin – most of the Streptococcus pneumoniae and about one-third of the Haemophilus influenzae. Some authorities point to older literature that suggested a 50% “spontaneous” cure rate with H. influenzae AOM and an 80% “spontaneous” cure rate with Moraxella catarrhalis infections. Our group has evidence that those rates do not reflect the current virulence of H. influenzae and M. catarrhalis, as we are seeing many more tympanic membrane ruptures from those organisms than in years past (Janet Casey, Legacy Pediatrics, Rochester, N.Y., personal communication).
Moreover, spontaneous cure is slower than cure with antibiotics effective at eradicating the causative pathogen; on that point, there is an ample evidence base. I recommend and use amoxicillin/clavulanate with a high dose of amoxicillin.
Adding observation as an option to match the AOM guideline is an interesting recommendation, and one I will watch with interest. Practicing pediatricians will need to weigh the reaction of parents and children to yet another 3 days of waiting after persistent symptoms before beginning a treatment that might speed resolution of the illness. What would you do for your child?
Dr. Michael E. Pichichero, a specialist in pediatric infectious diseases, is director of the Rochester (N.Y.) General Hospital Research Institute. He is also a pediatrician at Legacy Pediatrics in Rochester. He said he had no relevant financial conflicts of interest to disclose.
Giving clinicians the option to wait up to 3 days before treating the most common presentation of acute bacterial sinusitis is among the changes to the American Academy of Pediatrics’ updated clinical practice guidelines for treating these infections.
About 5%-10% of upper respiratory tract infections in children develop into acute bacterial sinusitis, according to the new guidelines, published in Pediatrics.
Other changes include a new presentation, and discouraging the use of x-rays to confirm diagnosis. The guidelines published online were written by Dr. Ellen R. Wald, chair of pediatrics at the University of Wisconsin, Madison, and her associates (Pediatrics 2013 June 24 [doi:10.1542/peds.2013-1071]). The guidelines incorporated data from an accompanying systematic review of the research published since the last guidelines were issued in 2001.
The added presentation is a worsening course, defined as "worsening or new onset of nasal discharge, daytime cough, or fever after initial improvement." This presentation joins the existing severe onset (a fever of at least 39° C [102.2° F] with at least 3 days of a purulent nasal discharge) and, most common, persistent illness lasting more than 10 days without improvement.
For those with symptoms of nasal discharge, daytime cough, or fever lasting more than 10 days, clinicians may discuss with the parent whether to treat right away or wait a few days. For severe onset and worsening symptoms, clinicians should prescribe antibiotic therapy right away. First-line treatment is amoxicillin with or without clavulanate, followed by a reassessment of initial management if the symptoms worsen or do not improve within 72 hours.
The guidelines do not recommend adjuvant therapies, including intranasal corticosteroids, saline nasal irrigation or lavage, topical or oral decongestants, mucolytics, and topical or oral antihistamines.
Of the four major changes to the guidelines, which include the updated evidence base, the option for delayed treatment in nonsevere cases and the recommendation against imaging are especially relevant for clinical practice, according to Dr. Wald, a pediatric infectious disease specialist.
"When the AAP writes about this, they’re talking about it as joint decision making," Dr. Wald said in an interview. "If the parent really wants treatment at that time, I think the doctor’s going to want to do it. It’s being a little bit more permissive in tolerating the symptoms for a few more days. The clinician is given the option to treat immediately or, with the parents’ consent, they can wait a few days to see if the child gets better spontaneously."
Dr. Wald noted that the decision to treat can involve a trade-off, so these guidelines offer the clinicians more latitude in making the cost-benefit analysis with the parent, taking into account the illness severity, the child’s quality of life, and the parents’ values and concerns.
"The reason we like to treat it is that kids get better faster," Dr. Wald said. "On the one hand, we want the kid to get better faster, but on the other hand, we don’t want to use the antibiotic if we don’t have to because we want to avoid side effects or, from a public health perspective, the increased antibiotic resistance for the population." The most common side effect of antibiotics is diarrhea, she said; fewer patients may experience a rash.
The guideline discouraging imaging stems from findings that imaging offers little clinical benefit. "In the past, a diagnostician would get a set of x-rays to see if the sinuses were cloudy and confirm the diagnosis if they found cloudy sinuses," Dr. Wald said. "However, x-rays are frequently abnormal even in children with uncomplicated colds, so the x-rays are not a help. Therefore, we’re encouraging people to make the diagnosis only on clinical grounds."
However, the guidelines do encourage clinicians to get a "contrast-enhanced CT scan of the paranasal sinuses and/or an MRI with contrast whenever a child is suspected of having orbital or central nervous system complications of acute bacterial sinusitis" because discovered abscesses may require surgical intervention.
The systematic review, conducted by Dr. Michael J. Smith, a pediatric infectious disease specialist at the University of Louisville (Ky.), included evidence from 17 randomized controlled trials in the treatment of sinusitis in children (Pediatrics 2013 June 24 [doi:10.1542/peds.2013-1072]). All published since 2001, these trials add to the evidence base from the 21 studies published between 1966 and 1999 that were used in the previous guidelines.
Among the 17 new trials, 4 were randomized, double-blind, placebo-controlled trials of antimicrobial therapy used on a combined 392 children, but they were too heterogeneous in criteria and results (2 favored treatment and 2 found no significant difference between treatment and control) to use in conducting a formal meta-analysis. Comparisons were further complicated by the long time span over which they were conducted, the introduction of universal conjugate pneumococcal vaccination, the increase in prevalence of other bacterial infections, and the variance in placebo group clinical improvement, which ranged from 14% to 79% across the studies.
Five other trials that compared antimicrobial therapies lacked placebo controls, three dealt with subacute sinusitis rather than acute, and six tested various ancillary treatments. These ancillary treatments included steroids, nasal spray, saline irrigation, and mucolytic agents, but with small study populations and mostly equivocal results.
"Greater severity of illness at the time of presentation seems to be associated with increased likelihood of antimicrobial efficacy," Dr. Smith said.
Dr. Smith identified several clinical questions that require additional research: definitions of acute, subacute, and recurrent acute sinusitis; the epidemiology of sinusitis in the pneumococcal conjugate vaccine era; the effectiveness of antimicrobial prophylaxis; accurate estimates for duration of symptoms; and clinical utility of various imaging types.
The guidelines and systematic review did not identify any external funding used. Dr. Smith has received research funding from Sanofi Pasteur and Novartis. Dr. Nelson is employed by McKesson Health Solutions. Dr. Wald, Dr. Shaikh, and Dr. Rosenfeld have published research related to sinusitis. No other disclosures were reported.
FROM PEDIATRICS






