Hospital physician orders increased postpartum Tdap vaccination rates
Changing in-hospital ordering procedures for postpartum Tdap vaccination considerably increased vaccination rates in birth mothers, investigators reported in the March issue of the American Journal of Obstetrics & Gynecology.
Tdap vaccinations protect against tetanus, diphtheria, and pertussis (whooping cough), the latter of which has increased in recent years and causes significant morbidity for adults and children and mortality in infants.
The Advisory Committee on Immunization Practices (ACIP) recommended postpartum Tdap vaccination for birth mothers in 2006 and then updated its recommendations in 2011 to administer Tdap during the second or third trimester of each pregnancy. Yet a 2012 survey showed uptake of Tdap during pregnancy as low as 2.6%, and postpartum vaccination remained limited.
Dr. Sylvia Yeh of the University of California, Los Angeles, and her colleagues reported on an intervention at a Los Angeles private community hospital with a 0% baseline rate of postpartum Tdap vaccination (Am. J. Obstet. Gynecol. 2014;210:237.e1-6).
For completed deliveries between October 2009 and July 2010, the researchers reviewed 658 charts for birth mothers at that hospital and compared them with 606 women’s charts at a hospital with no procedure changes. The comparison hospital, also with a 0% baseline vaccination rate, was located 18 miles from the intervention hospital and served a relatively similar demographic population.
Implementing physician opt-in orders for postpartum Tdap vaccination at the intervention hospital in November 2009 led to an increase in the vaccination rate to 18.8% (P less than .001). The hospital then implemented standing orders in February 2010, allowing nurses to deliver the vaccines without an additional physician order. The postpartum Tdap vaccination rate then climbed again to 62.7% (P less than .001). The comparison hospital’s rate for postpartum Tdap vaccination remained at 0% throughout the same period.
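The article reports only the period-specific percentages and P values. As a rough illustration of the kind of comparison behind those P values, the sketch below runs a chi-square test on a hypothetical 2x2 table comparing the opt-in and standing-order periods; the per-period counts are invented to match the reported rates and are not the study's data.

```python
# A minimal sketch, not the study's analysis: counts are hypothetical, chosen
# only to reproduce the reported 18.8% and 62.7% vaccination rates.
from scipy.stats import chi2_contingency

# Rows: ordering procedure in effect; columns: (vaccinated, not vaccinated).
table = [
    [38, 164],   # physician opt-in orders: 38/202 ~ 18.8% (hypothetical n)
    [188, 112],  # standing orders: 188/300 ~ 62.7% (hypothetical n)
]

chi2, p_value, dof, _expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, df = {dof}, P = {p_value:.2g}")
```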
The researchers identified no differences in demographic characteristics between the women who received in-hospital postpartum Tdap vaccination and those who did not.
The study was funded by the Centers for Disease Control and Prevention. No disclosures were reported.
FROM THE AMERICAN JOURNAL OF OBSTETRICS & GYNECOLOGY
Major finding: Tdap postpartum vaccination rates in birth mothers increased from 0% to 18.8% following physician opt-in orders and then to 62.7% following standing orders at an intervention hospital, compared with a continuing 0% rate at a hospital with no procedural changes.
Data source: A prospective evaluation of in-hospital postpartum pertussis vaccination rates of 1,264 birth mothers at two hospitals from October 2009 through July 2010.
Disclosures: This study was supported by the U.S. Centers for Disease Control and Prevention. No disclosures were reported.
Pediatric appendicitis outcomes similar for ultrasound/MRI and CT imaging
Using ultrasonography and magnetic resonance imaging instead of computed tomography to diagnose children’s acute appendicitis resulted in similar clinical outcomes, a study found.
The retrospective study found no differences in the time to receive antibiotics or an appendectomy, the negative appendectomy rate, the perforation rate, or the length of stay among children undergoing either diagnostic method, reported Dr. Gudrun Aspelund and her colleagues at Columbia University Medical Center, New York (Pediatrics 2014;133:1-8).
CT scans have historically been used to diagnose appendicitis because of their high sensitivity and specificity, but they expose children to radiation, raising concerns about cumulative exposure and later cancer risk. Ultrasonography requires more operator expertise (with a sensitivity range of 44%-100%), but it involves no radiation and has been used with MRI to effectively diagnose appendicitis in adults.
Among 662 patients under age 18 with suspected appendicitis at Morgan Stanley Children’s Hospital, New York, 265 (group A) were assessed between November 2008 and October 2010, including 224 (84.5%) with CT, 40 with ultrasonography, and 1 with MRI. After the hospital prioritized ultrasonography/MRI for diagnostic imaging for appendicitis, 397 (group B) were assessed between November 2010 and October 2012, including 365 receiving ultrasounds, 142 receiving MRI, and 35 receiving CT scans (including those receiving multiple imaging).
Among group A patients (primarily CT scans), 51% had positive imaging for appendicitis; group B included 41% with positive imaging (P = .007). Patients with appendicitis were treated with an appendectomy, percutaneous drainage, or antibiotics. Of the patients with negative studies, 291 were confirmed true negatives; 74 others (11%) were lost to follow-up but were believed to be true negatives.
Despite the significantly higher positive imaging rate in group A than in group B, no false negative imaging was identified in either group, and negative appendectomy rates were similar: 2.5% in group A and 1.4% in group B (P = .7). "Sensitivity, specificity, and positive and negative predictive value of the imaging pathways for the diagnosis of appendicitis were similar between study periods," the researchers reported.
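For readers who want the definitions behind that quote, the following is a minimal sketch of how sensitivity, specificity, and positive and negative predictive values are computed from a 2x2 table of imaging results versus confirmed appendicitis. The counts and the helper function name are illustrative, not taken from the study.

```python
# Minimal sketch of diagnostic accuracy metrics; the counts below are
# hypothetical (the study reported no false-negative imaging in either group).
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # positives detected among all with appendicitis
        "specificity": tn / (tn + fp),  # negatives detected among all without appendicitis
        "ppv": tp / (tp + fp),          # probability of disease given positive imaging
        "npv": tn / (tn + fn),          # probability of no disease given negative imaging
    }

# Hypothetical example: 120 true positives, 3 false positives, 0 false negatives,
# 140 true negatives.
print(diagnostic_metrics(tp=120, fp=3, fn=0, tn=140))
```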
Appendectomy rates were 45% in group A and 37% in group B (P = .0003). The researchers reported that perforation rates were not significantly different between the groups, but they did not provide numbers for the rates.
Similarly, there was no difference in time to antibiotics (8.7 hours in group A and 8.2 hours in group B) or time to appendectomy (13.2 and 13.9 hours, respectively). Overall length of stay did not differ between the groups (52.2 and 43.3 hours), nor did length of stay for image-positive appendicitis (82.2 and 76.6 hours).
"Use of ultrasonography and MRI is possible and effective for diagnosis in most cases of pediatric appendicitis," Dr. Aspelund and her associates reported. "These data support the notion that use of CT could be limited."
The study was internally funded, and the authors reported no disclosures.
FROM PEDIATRICS
Major finding: Among 265 pediatric patients with suspected appendicitis primarily imaged with CT and 397 primarily imaged with ultrasonography and/or magnetic resonance imaging, 51% and 41%, respectively, had positive imaging (P = .007), with similar diagnostic accuracy.
Data source: A retrospective analysis of 662 children less than 18 years old, assessed for possible appendicitis.
Disclosures: The study was internally funded, and the authors reported no disclosures.
INR inadequate to determine coagulopathy, thrombelastography preferred
The international normalized ratio (INR) plays a major role in clinical decision-making, based upon the Model for End-Stage Liver Disease score and international guidelines, but it is unreliable in determining a patient’s need for fresh frozen plasma (FFP) and should not be used to estimate coagulopathy risk, according to a recent study.
Instead, thrombelastography’s (TEG) ability to measure clotting in real time makes it a superior assay for determining coagulopathy risk and reducing unnecessary FFP transfusions, the researchers found.
Originally developed to guide warfarin dosing, the INR identifies deficiencies only in several specific procoagulant factors of the extrinsic pathway, ignoring anticoagulant factors and intrinsic pathway procoagulant factors, which together may balance out the extrinsic pathway abnormalities.
"Evidence-based research does not justify reliance on the INR to determine the need for FFP transfusion in hemodynamically stable patients," wrote Dr. Sean P. McCully and his colleagues at Oregon Health & Science University, Portland, in the December issue of the Journal of Trauma and Acute Care Surgery.
"Transfusion of FFP should be limited to bleeding patients with evidence of coagulopathy or stable patients with evidence of a hypocoagulable state on TEG who are at risk for bleeding," they wrote (J. Trauma Acute Care Surg. 2013;75:947-53 [doi:10.1097/TA.0b013e3182a9676c]).
Dr. McCully's team used three coagulation assay methods on blood samples taken before and after FFP transfusion from 106 hemodynamically stable trauma (35%) and surgical (65%) patients. The patients, 59% of whom were male, ranged in age from 16 to 92 years (median age, 60 years) and were assessed from February 2010 through August 2012.
Patients were excluded if they had received a massive transfusion or had taken antiplatelet agents within 10 days of admission. Overall, 262 U of fresh frozen plasma were transfused, with patients receiving 1-4 U each.
The three methods included TEG, clotting factor activity levels, and a collection of conventional coagulation tests that included INR values. While the median INR dropped from 1.87 before FFP transfusion to a still elevated 1.53 after transfusion (P less than .001), TEG values remained in the normal range with little change before and after transfusion. The researchers found all procoagulant factor activities to exceed 30% of normal values preceding transfusions.
The overall coagulation index, calculated from the four main TEG assay variables, was -0.1 before transfusion and -0.4 after (P less than .05). The R value, which reflects soluble clotting factor activity as the number of minutes to initial fibrin formation, went from 7.2 minutes before transfusion to 6.9 minutes afterward. The K time, a combination of soluble factor activity and fibrin cross-linking, measured 1.7 minutes before transfusion and 1.6 minutes after.
Values for the alpha-angle, the rate of clot strengthening, were 66.7 degrees before transfusion and 66.5 degrees after. Clot strength, measured by the maximum amplitude of the tracing and representing platelet function, went from 66.1 mm before to 66.6 mm after transfusion. Clot lysis at 30 minutes, assessing fibrinolysis, was 0.2% before and 0.3% after transfusion.
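To make those TEG parameters easier to follow, the sketch below tabulates the reported pre- and post-transfusion values and compares them with commonly cited kaolin-TEG reference ranges. The ranges are an assumption for illustration; the article does not publish the study laboratory's reference intervals.

```python
# Minimal sketch: the reported TEG values checked against commonly cited
# kaolin-TEG reference ranges (the ranges here are assumptions, not the
# study laboratory's published intervals).
reference_ranges = {
    "R_min":     (5.0, 10.0),   # minutes to initial fibrin formation
    "K_min":     (1.0, 3.0),    # minutes to a set clot firmness
    "alpha_deg": (53.0, 72.0),  # rate of clot strengthening
    "MA_mm":     (50.0, 70.0),  # maximum amplitude (clot strength)
    "LY30_pct":  (0.0, 8.0),    # clot lysis at 30 minutes
}

reported = {  # (before FFP, after FFP), as given in the article
    "R_min": (7.2, 6.9),
    "K_min": (1.7, 1.6),
    "alpha_deg": (66.7, 66.5),
    "MA_mm": (66.1, 66.6),
    "LY30_pct": (0.2, 0.3),
}

for name, (low, high) in reference_ranges.items():
    before, after = reported[name]
    in_range = low <= before <= high and low <= after <= high
    print(f"{name}: before={before}, after={after}, within {low}-{high}: {in_range}")
```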
Among the other conventional coagulation tests, small changes were seen: a slight reduction in partial thromboplastin time (which remained within the normal range before and after transfusion), decreases in hematocrit and platelet counts, and increases in D-dimer and fibrinogen levels.
The clotting factor assay revealed median values within the normal range before and after transfusion for factors VIII, IX, XI, and XII (intrinsic pathway), but median values for factors II, V, and X (common pathway) were below the normal range before and after FFP transfusion.
The below-normal factor VII median values (extrinsic pathway) before and after transfusion corresponded with the abnormal INR values but were sufficient to maintain hemostasis.
"Our data suggest that an isolated abnormal INR does not reflect coagulopathy," the researchers wrote.
"Despite transfusion-related changes, TEG continued to reflect normal coagulation in a setting without clinical evidence of bleeding," the researchers wrote. "Based on a normal TEG and functional intrinsic pathway, we believe these patients had the potential to form a robust clot and should not have received FFP."
The researchers cited past research finding that anywhere from 10% to 73% of FFP transfusions are inappropriately administered. Complication risks of FFP transfusion can include acute lung injury, transfusion-associated circulatory overload, anaphylactic reactions, and infection transmission.
The study was limited by the lack of a control group that did not receive FFP, the retrospective rather than prospective determination of bleeding events, and the absence of the treatment teams' indications for FFP transfusion.
The study was funded by Dr. Schreiber within the Trauma Research Institute of Oregon. The authors reported no conflicts of interest.
FROM THE JOURNAL OF TRAUMA AND ACUTE CARE SURGERY
Major finding: Despite elevated international normalized ratio values before (1.87) and after (1.53) FFP transfusions in stable patients (P less than .001), thrombelastography values remained in normal range with an overall coagulation index of –0.1 before transfusion and –0.4 after (P less than .05).
Data source: The findings are based on a prospective, observational study of 106 hemodynamically stable trauma and surgical patients who received FFP, enrolled from February 2010 through August 2012.
Disclosures: The study was funded by Dr. Schreiber within the Trauma Research Institute of Oregon. The authors reported no conflicts of interest.
Injury cause alone insufficient to justify CT scanning in children
Using computed tomographic imaging (CT scans) based only on a child's mechanism of injury in blunt trauma cases incurs more risks than benefits, according to a recent study.
The ionizing radiation from CT scanning has been linked to long-term risk of cancer, with an estimated risk of one cancer case per 10,000 CT scans, and the U.S. Environmental Protection Agency attributes 25% of all radiation in the United States to CT scanning.
"The benefit of identifying or excluding life-threatening injuries with a high sensitivity is an invaluable tool," wrote Dr. Hunter B. Moore and his colleagues at the University of Colorado at Denver, Aurora, in the Journal of Trauma and Acute Care Surgery. "However, application in the more radiosensitive pediatric population requires critical analysis."
Dr. Moore’s team found that the only clinically significant factor in determining the value of using CT scans was an abnormal Glasgow Coma Score (GCS). The GCS neurological scale rates patients from 3 to 14 on their level of consciousness; the highest score (14 on the original scale, 15 on the revised scale) refers to normal verbal, motor, and eye functioning. This study used the original scale, according to corresponding author Denis Bensard.
"Most concerning was that injured children imaged based on the mechanism of injury alone yielded no significant findings on CT imaging," the researchers wrote (J. Trauma Acute Care Surg. 2013;75:995-1001). "When anatomic or physiologic abnormalities were present, a serious CT finding was observed in more than 20% of the children imaged."
The researchers classified 174 patients, all meeting trauma team activation criteria at a Level 2 pediatric trauma center, into four groups to study the clinical value of CT scanning based on the children’s mechanism of injury. The patients, with a mean age of 7 years and a mean Injury Severity Score of 10, were admitted from January 2006 through December 2011.
The first group had normal GCS scores and normal vital signs and physical examinations. CT scanning for this group was considered to be done based on mechanism of injury alone. The second group had abnormal GCS scores but normal vital signs and physical exams. The third group had normal GCS scores but abnormal vital signs or exam findings. The fourth group had both abnormal GCS scores and abnormal findings in vital signs and/or exams.
Across all groups, motor vehicle collisions accounted for the most common injury causes, followed by being struck by autos as pedestrians, and falls. Positive CT scan findings included extra axial blood or parenchymal injury in the head; bony, vascular injury in the neck; great vessel injury in the chest; or solid organ or hollow visceral injury in the abdomen.
The 54 patients (82% of the 66 children) in the group with normal exams, vital signs, and GCS scores who received CT scans were exposed to an average of 17 mSv through an average of 1.7 scans per child; the annual environmental dose limit for radiation is 1 mSv. "Remarkably, no patient imaged, based on [injury] mechanism alone, had a serious or life-threatening finding on CT scan," the researchers wrote.
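The dose figures invite some simple arithmetic, sketched below. Applying the cited 1-in-10,000 per-scan cancer risk as a straight multiplication is a crude linear simplification for illustration, not the authors' risk model.

```python
# A minimal sketch of the arithmetic implied by the reported figures; the
# linear per-scan risk multiplication is an illustrative simplification.
children_scanned = 54               # normal GCS, exam, and vital signs group
mean_scans_per_child = 1.7
mean_dose_msv = 17.0                # average effective dose per scanned child
annual_background_limit_msv = 1.0   # the 1 mSv/year figure cited in the article
risk_per_scan = 1 / 10_000          # cited estimate: one cancer per 10,000 scans

dose_vs_limit = mean_dose_msv / annual_background_limit_msv
expected_excess_cancers = children_scanned * mean_scans_per_child * risk_per_scan

print(f"Average dose per child is {dose_vs_limit:.0f}x the cited annual limit")
print(f"Expected excess cancers in this group: {expected_excess_cancers:.4f}")
```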
All 25 patients in the group with abnormal GCS scores but normal exams and vital signs were scanned, with an average of 3.1 scans and 29 mSv of radiation per child. While 22% of the scans revealed a serious injury, the only surgeries required were one craniotomy and one nephrectomy.
Among the 57 children with normal GCS scores but abnormal exams or vital signs, 49 (86%) were scanned, with an average of two scans and 20 mSv per child. Significant findings were seen on 23% of the scans, and one splenectomy resulted.
All but 1 of the 26 children with abnormal GCS scores and abnormal vital signs or exams were scanned, with an average of 2.8 scans and 27 mSv per child. A quarter of the scans revealed significant findings, and two children required emergency craniotomies.
"We found that only one in four CT scans found a serious finding, but emergent operative interventions were required in less than 3% of injured children imaged," the researchers wrote. "Focused assessment with sonography for trauma [FAST] examination for the cohort was found to have a high specificity of 98%, but low sensitivity of 30%."
They attributed the low sensitivity to the FAST exam's inability to identify injuries in solid organs without "detectable blood or retroperitoneal injury," though CT scans did appear valuable for identifying intra-abdominal hemorrhage. Abdominal CT scans were most likely to identify serious injuries when initial exams revealed anatomic or physiologic abnormalities, but chest scans had little to no utility.
The authors noted that current estimates of cancer risk from CT scan radiation may be underestimates because of the time it can take for cancers to manifest (up to 40 years) and the short time span (10 years) of the retrospective study that validated the 1 in 10,000 per-scan risk. "Commentary on this article cautions that these preliminary data are similar to atomic bomb survivors, and the true incidence of cancer from CT scanning may be 10 times more after more time elapses following CT scans," they wrote.
The researchers did not use external funding. They reported no disclosures.
Using computed tomographic imaging (CT scans) based only on a child’s method of injury in blunt trauma cases incurs more risks than benefits, according to a recent study.
The ionizing radiation from CT scanning has been linked to long-term risk of cancer, with an estimated risk of one cancer case per 10,000 CT scans, and the U.S. Environmental Protection Agency attributes 25% of all radiation in the United States to CT scanning.
"The benefit of identifying or excluding life-threatening injuries with a high sensitivity is an invaluable tool," wrote Dr. Hunter B. Moore and his colleagues at the University of Colorado at Denver, Aurora, in the Journal of Trauma and Acute Care Surgery. "However, application in the more radiosensitive pediatric population requires critical analysis."
Dr. Moore’s team found that the only clinically significant factor in determining the value of using CT scans was an abnormal Glasgow Coma Score (GCS). The GCS neurological scale rates patients from 3 to 14 on their level of consciousness; the highest score (14 on the original scale, 15 on the revised scale) refers to normal verbal, motor, and eye functioning. This study used the original scale, according to corresponding author Denis Bensard.
"Most concerning was that injured children imaged based on the mechanism of injury alone yielded no significant findings on CT imaging," the researchers wrote (J. Trauma Acute Care Surg. 2013;75:995-1001). "When anatomic or physiologic abnormalities were present, a serious CT finding was observed in more than 20% of the children imaged."
The researchers classified 174 patients, all meeting trauma team activation criteria at a Level 2 pediatric trauma center, into four groups to study the clinical value of CT scanning based on the children’s mechanism of injury. The patients, with a mean age of 7 years and a mean Injury Severity Score of 10, were admitted from January 2006 through December 2011.
The first group had normal GCS scores and normal vital signs and physical examinations. CT scanning for this group was considered to be done based on mechanism of injury alone. The second group had abnormal GCS scores but normal vital signs and physical exams. The third group had normal GCS scores but abnormal vital signs or exam findings. The fourth group had both abnormal GCS scores and abnormal findings in vital signs and/or exams.
Across all groups, motor vehicle collisions accounted for the most common injury causes, followed by being struck by autos as pedestrians, and falls. Positive CT scan findings included extra axial blood or parenchymal injury in the head; bony, vascular injury in the neck; great vessel injury in the chest; or solid organ or hollow visceral injury in the abdomen.
Of the 54 patients (82% of 66 children) in the group with normal exams, vital signs, and GCS scores who received CT scans, the patients were exposed to an average 17 mSv through an average 1.7 scans per child. The annual environmental dose limit for radiation is established at 1 mSv per year. "Remarkably, no patient imaged, based on [injury] mechanism alone, had a serious or life-threatening finding on CT scan," the researchers wrote.
All 25 patients in the group with abnormal GCS scores but normal exams and vital signs were scanned, with an average of 3.1 scans and 29 mSv of radiation per child. While 22% of the scans revealed a serious injury, the only surgeries required were one craniotomy and one nephrectomy.
Among the 57 children with normal GCS scores but abnormal exams or vital signs, 49 of them (86%) were scanned, with an average of two scans and 20 mSv per child. One splenectomy resulted from among the 23% of scans revealing significant findings.
All but 1 of the 26 children with abnormal GCS scores and abnormal vital signs or exams were scanned, with an average of 2.8 scans and 27 mSv per child. A quarter of the scans revealed significant findings, and two children required emergency craniotomies.
"We found that only one in four CT scans found a serious finding, but emergent operative interventions were required in less than 3% of injured children imaged," the researchers wrote. "Focused assessment with sonography for trauma [FAST] examination for the cohort was found to have a high specificity of 98%, but low sensitivity of 30%."
They determined the low sensitivity to result from the scans’ inability to identify injuries in solid organs without "detectable blood or retroperitoneal injury," though CT scans did appear valuable for identifying intra-abdominal hemorrhage. Abdominal CT scans were most likely to identify serious injuries when initial exams revealed anatomic or physiologic abnormalities, but chest scans had little to no utility.
The authors noted that current cancer risk estimates from CT scan radiation may be low because of the time it can take for cancers to manifest (up to 40 years) and the short time span (10 years) of the retrospective study that validated the 1 in 10,000 per CT scan risk. "Commentary on this article cautions that these preliminary data are similar to atomic bomb survivors, and the true incidence of cancer from CT scanning may be 10 times more after more time elapses following CT scans," they wrote.
The researchers did not use external funding. They reported no disclosures
Using computed tomographic imaging (CT scans) based only on a child’s method of injury in blunt trauma cases incurs more risks than benefits, according to a recent study.
The ionizing radiation from CT scanning has been linked to long-term risk of cancer, with an estimated risk of one cancer case per 10,000 CT scans, and the U.S. Environmental Protection Agency attributes 25% of all radiation in the United States to CT scanning.
"The benefit of identifying or excluding life-threatening injuries with a high sensitivity is an invaluable tool," wrote Dr. Hunter B. Moore and his colleagues at the University of Colorado at Denver, Aurora, in the Journal of Trauma and Acute Care Surgery. "However, application in the more radiosensitive pediatric population requires critical analysis."
Dr. Moore’s team found that the only clinically significant factor in determining the value of using CT scans was an abnormal Glasgow Coma Score (GCS). The GCS neurological scale rates patients from 3 to 14 on their level of consciousness; the highest score (14 on the original scale, 15 on the revised scale) refers to normal verbal, motor, and eye functioning. This study used the original scale, according to corresponding author Denis Bensard.
"Most concerning was that injured children imaged based on the mechanism of injury alone yielded no significant findings on CT imaging," the researchers wrote (J. Trauma Acute Care Surg. 2013;75:995-1001). "When anatomic or physiologic abnormalities were present, a serious CT finding was observed in more than 20% of the children imaged."
The researchers classified 174 patients, all meeting trauma team activation criteria at a Level 2 pediatric trauma center, into four groups to study the clinical value of CT scanning based on the children’s mechanism of injury. The patients, with a mean age of 7 years and a mean Injury Severity Score of 10, were admitted from January 2006 through December 2011.
The first group had normal GCS scores and normal vital signs and physical examinations. CT scanning for this group was considered to be done based on mechanism of injury alone. The second group had abnormal GCS scores but normal vital signs and physical exams. The third group had normal GCS scores but abnormal vital signs or exam findings. The fourth group had both abnormal GCS scores and abnormal findings in vital signs and/or exams.
Across all groups, motor vehicle collisions accounted for the most common injury causes, followed by being struck by autos as pedestrians, and falls. Positive CT scan findings included extra axial blood or parenchymal injury in the head; bony, vascular injury in the neck; great vessel injury in the chest; or solid organ or hollow visceral injury in the abdomen.
Of the 54 patients (82% of 66 children) in the group with normal exams, vital signs, and GCS scores who received CT scans, the patients were exposed to an average 17 mSv through an average 1.7 scans per child. The annual environmental dose limit for radiation is established at 1 mSv per year. "Remarkably, no patient imaged, based on [injury] mechanism alone, had a serious or life-threatening finding on CT scan," the researchers wrote.
All 25 patients in the group with abnormal GCS scores but normal exams and vital signs were scanned, with an average of 3.1 scans and 29 mSv of radiation per child. While 22% of the scans revealed a serious injury, the only surgeries required were one craniotomy and one nephrectomy.
Among the 57 children with normal GCS scores but abnormal exams or vital signs, 49 of them (86%) were scanned, with an average of two scans and 20 mSv per child. One splenectomy resulted from among the 23% of scans revealing significant findings.
All but 1 of the 26 children with abnormal GCS scores and abnormal vital signs or exams were scanned, with an average of 2.8 scans and 27 mSv per child. A quarter of the scans revealed significant findings, and two children required emergency craniotomies.
"We found that only one in four CT scans found a serious finding, but emergent operative interventions were required in less than 3% of injured children imaged," the researchers wrote. "Focused assessment with sonography for trauma [FAST] examination for the cohort was found to have a high specificity of 98%, but low sensitivity of 30%."
They determined the low sensitivity to result from the FAST examination’s inability to identify solid organ injuries without "detectable blood or retroperitoneal injury," though CT scans did appear valuable for identifying intra-abdominal hemorrhage. Abdominal CT scans were most likely to identify serious injuries when initial exams revealed anatomic or physiologic abnormalities, but chest scans had little to no utility.
The authors noted that current estimates of cancer risk from CT radiation may be underestimates because cancers can take up to 40 years to manifest, while the retrospective study that validated the 1-in-10,000-per-scan risk spanned only 10 years. "Commentary on this article cautions that these preliminary data are similar to atomic bomb survivors, and the true incidence of cancer from CT scanning may be 10 times more after more time elapses following CT scans," they wrote.
The researchers did not use external funding. They reported no disclosures.
FROM THE JOURNAL OF TRAUMA AND ACUTE CARE SURGERY
Major finding: A Glasgow Coma Score (GCS) of less than 14 (original scale; less than 15 on revised scale) was the only clinically significant variable for identifying positive findings with CT scans in pediatric blunt trauma patients.
Data source: The findings are based on an analysis of the cases (CT scans received, significant findings, surgeries, and radiation exposure) of 174 children who met trauma team activation criteria at a Level 2 pediatric trauma center between January 2006 and December 2011.
Disclosures: The researchers did not use external funding. They reported no disclosures.
Small declines in primary cesarean births show slowly reversing trend
Just over one in five live singleton births to first-time mothers in 2012 were delivered by cesarean section, reflecting slight declines in the rates since 2009 among the U.S. states included in a recent report.
The overall primary cesarean delivery rate was 21.5% for the 38 states – plus the District of Columbia and New York City – that had implemented the 2003 U.S. Standard Certificate of Live Birth (revised) by, at latest, Jan. 1, 2012, according to a National Vital Statistics Report from the National Center for Health Statistics.
State-specific rates in 2012 ranged from a low of 12.5% in Utah to a high of 26.9% in Florida and Louisiana, the investigators reported Jan. 23 (Natl. Vital Stat. Rep. 2014;63:1-10).
The report did not include data from states that still used the 1989 U.S. Standard Certificate of Live Birth (unrevised) through 2012, because the 1989 certificate reports the "method of delivery" differently from the 2003 revised certificate. The data are therefore not generalizable to the entire country, since the included births are neither a random sample nor representative of national demographics, particularly the distribution of Hispanic births.
Instead, the report’s data are presented based on when states implemented the revised 2003 certificate, a process that occurred gradually over about a decade. Among the 19 states (excluding New York City) that had implemented the 2003 certificate by Jan. 1, 2006, the primary cesarean rate first increased from 21.9% in 2006 to 22.4% in 2009 and then dropped to 21.9% in 2012. The lower rate in 2012 resulted from declines in 11 of the 19 states since 2009 while the other eight states had no significant changes in their rates.
The 28 states plus New York City that implemented the 2003 certificate by Jan. 1, 2009, also showed a decline from 2009, when the primary cesarean rate was 22.1%, to 2012, when the rate was 21.5%. This overall decline resulted from decreases in the cesarean rates in 16 of the 29 total areas included, while the other 13 areas’ rates did not significantly change. Utah’s rate during this time dropped 15%, while Delaware, New York, New York City, North Dakota, and Oregon saw drops of 5% to 10% from 2009 to 2012.
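Because the state figures above are relative (percent) declines rather than percentage-point changes, a brief sketch may help keep the two apart; Utah’s 2012 rate and its reported decline come from the article, while the implied 2009 rate is back-calculated and purely illustrative.

```python
# Illustrative arithmetic only: the report expresses state declines as relative
# (percent) changes, not percentage-point changes. Utah's 2012 rate (12.5%) and
# its roughly 15% relative decline are taken from the article; the implied 2009
# rate is back-calculated here and is not a figure stated in the report.
rate_2012 = 12.5            # Utah's 2012 primary cesarean rate (%)
relative_decline = 0.15     # reported ~15% relative drop, 2009 to 2012

implied_rate_2009 = rate_2012 / (1 - relative_decline)       # ~14.7%
point_change = implied_rate_2009 - rate_2012                 # ~2.2 percentage points

print(f"Implied 2009 rate ~{implied_rate_2009:.1f}%; the reported "
      f"{relative_decline:.0%} relative decline corresponds to roughly "
      f"{point_change:.1f} percentage points")
```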
"Although significant declines were observed for total 2006 and 2009 revised reporting area rates and for many state-specific rates between 2009 and 2012, the pace of the decline has slowed," the authors wrote. For example, only two of the 19 states using the revised certificate by 2006 saw declines from 2011 to 2012, even though 13 states from this group saw declines from 2009 to 2010.
Few changes occurred in cesarean rates based on gestational age at the state level, with the only statistically significant change across multiple states occurring among babies born at 38 weeks. Among 18 of the 29 areas that adopted the 2003 certificate by 2009, the rate for this gestational age dropped an average of 10% from 2009 to 2012.
The decline varied from a 5% drop in Michigan to an 18% drop in Utah. States seeing a drop of at least 10% in primary cesarean rates at 38 weeks for those years included Georgia, Kansas, Nebraska, New Mexico, New York, New York City, Ohio, Oregon, Utah, and Washington. The overall primary cesarean rates for all 29 areas combined did show decreases for each gestational age from 37 weeks to 41 weeks and for 42 or more weeks.
The report was funded by the National Center for Health Statistics. No disclosures were reported.
The new CDC data on cesarean section rates across the country, which show considerable variation from state to state, reflect the individualistic nature of U.S. citizens and indicate that this method of delivery is influenced by multiple factors, such as location, a milieu of ethnic backgrounds and mores, culture, and available options for childbirth. For example, rates may be high in one state because physicians are more vulnerable to liability suits and, therefore, elect to perform cesarean sections more often than vaginal deliveries.
Whereas women in other countries might defer to what their doctor may recommend or to national guidelines, the CDC data reveal that women and their physicians can and will make their own decisions about the delivery of their babies, based on geographic locale, cultural background of the patient, and perception of cesarean section vs. vaginal birth.
Fortunately, the morbidity and mortality associated with cesarean or vaginal delivery is significantly reduced today, allowing most women and their physicians, in general, the flexibility to pursue options based on preference. For ob.gyns. reviewing the CDC report, what remains important is the patient. The cesarean section rates today, which are slightly lower than the rates from several years ago, are neither too high nor too low, but reflect a cultural trend that is relevant to this time in our history. In another several years, we may see a shift in the opposite direction. It will all depend upon popular perceptions, preferences of women and their doctors, and the impact of health care clinicians on pregnancy and delivery.
Dr. E. Albert Reece is vice president for medical affairs at the University of Maryland, Baltimore, dean of the school of medicine, and the John Z. and Akiko K. Bowers Distinguished Professor in obstetrics and gynecology. He made these comments in an interview. He had no relevant financial disclosures.
FROM NATIONAL VITAL STATISTICS REPORTS
Major finding: The primary singleton cesarean delivery rate for 38 states, the District of Columbia, and New York City was 21.5% in 2012, with a decline among 19 of those states from 22.4% in 2009 to 21.9% in 2012 and among 28 of those states plus New York City from 22.1% in 2009 to 21.5% in 2012.
Data source: The findings are based on 100% of singleton births for 2006-2012 among the 19 states that implemented the 2003 U.S. Standard Certificate of Live Birth by Jan. 1, 2006; for 2009-2012 among the 28 states plus New York City that implemented the 2003 certificate by Jan. 1, 2009; and for 2012 among the additional states that implemented the certificate by Jan. 1, 2012.
Disclosures: The report was funded by the National Center for Health Statistics. No disclosures were reported.
Use of infertility services dips slightly from one decade ago
The percentage of U.S. women aged 25-44 who had ever sought infertility services dropped 3 percentage points from 1995 to 2006-2010, a survey showed. Yet women who had current fertility problems sought infertility services at similar rates from 1982 through 2006-2010.
As in previous surveys, the use of infertility services was higher among white women and women who were married or formerly married, with higher levels of education and/or income. "Reasons for the disparities in use of infertility services may include access barriers such as the significant cost of medical services for infertility and the lack of adequate health insurance to afford the necessary diagnostic or treatment services," said Anjani Chandra, Ph.D., of the National Center for Health Statistics, and her colleagues in the Jan. 22 report from the center. "Numerous previous analyses have shown that women who make use of medical help for fertility problems are a highly selective group among those who have fertility problems."
The findings are based on interviews with 12,279 U.S. women and 10,403 U.S. men, all aged 15-44, conducted from June 2006 through June 2010 for the National Survey of Family Growth. The report found that 17% of women aged 25-44 years had ever used infertility services in 2006-2010, down from 20% in 1995. Similarly, the percentage of nulliparous women with current fertility problems who had ever used infertility services continued to decline, from 56% in 1982 and 46% in 1995 to 38% in 2006-2010. Yet across all women aged 25-44 with fertility problems, service-seeking rates did not change much during those years, remaining at 41%-46%, Dr. Chandra reported (Natl. Health Stat. Rep. 2014;73:1-19).
The decline in service seeking among nulliparous women with current fertility problems "may stem from overall patterns of delayed childbearing," the authors wrote, "such that more women are attempting to have their first child at older ages, possibly beyond age 44, and are less likely to recognize a need for infertility services" within the younger age range.
The report compared rates of seeking infertility services and the characteristics of women and men seeking them based on the National Survey of Family Growth from 1982, 1988, 1995, 2002, and 2006-2010, focusing on data collected during the last survey.
For the survey, infertility services included those used both to help women get pregnant and to prevent miscarriages (beyond standard prenatal care), regardless of whether the women or their partners have a specific fertility problem. In questions for women, such services included advice (such as intercourse timing), infertility testing for either partner, medications for ovulation, surgery for blocked fallopian tubes, artificial insemination, and "other medical help," which included surgery or drug treatment for endometriosis, in vitro fertilization (IVF) or other assisted reproductive technology, surgery or drug treatment for uterine fibroids, other female pelvic surgery, or other medical help.
Men were asked questions about their or their partners’ use of the following services: advice, infertility testing, ovulation drugs, surgery for blocked fallopian tubes, artificial insemination, varicocele treatment, and other medical help (including assisted reproductive technology).
The services most commonly used by women aged 25-44 in 2006-2010 included advice (9.4% of all women, 29% of those with fertility problems), infertility tests (7.3% and 27%), services to prevent miscarriage (6.8% of all women), and ovulation drugs (5.8% of all women and 20% of those with fertility problems). Less common services were artificial insemination (1.7%) and surgery for blocked fallopian tubes (1.3%), followed by assisted reproductive technology, used by 0.7% of women in this age group.
The largest disparities in 2006-2010 among those seeking infertility services centered on income, educational attainment, and race/ethnicity, both for all women and for women with current fertility problems. For example, 21% of women aged 25-44 with at least a bachelor’s degree sought any infertility service, and 18% sought medical help to get pregnant. Yet among women in that age group with less than a bachelor’s degree, only 15% sought any infertility service and 10% sought medical help to get pregnant.
A similar pattern was seen among women in that age group with current fertility problems. Among those with at least a bachelor’s degree, 58% sought any infertility service and 56% sought medical help to get pregnant, but among those with less than a bachelor’s degree, only 33% sought any infertility service and 27% sought medical help to get pregnant.
Meanwhile, only 13% of women aged 25-44 with household incomes below the federal poverty level sought any type of infertility service, compared with 21% of women living in households with incomes at 400% of the poverty level or higher.
In terms of specifically seeking medical help to get pregnant, only 1.9% of those below the poverty level sought help, compared with 5% of those above 400% of the poverty level. Similarly, only 0.8% of those with less than a high school degree or GED sought such medical help, compared with 5.8% of women with a master’s degree or higher. And 15% of non-Hispanic white women sought medical help to get pregnant, about double the rates for Hispanic women (7.5%) and black women (8%).
The report was funded by the National Center for Health Statistics at the Centers for Disease Control and Prevention. No financial disclosures were reported.
While there is nothing in this report that is surprising, the health disparities pointed out are an important piece of information. The fact that Hispanic and non-Hispanic black women used infertility services at a lower rate than non-Hispanic white women is a potential concern. This holds true for women with less than a bachelor’s degree or those living below the poverty level as well. Infertility services are health care services, and these data are similar to what we see in other population-based studies; people of color, and those who are less well educated or less well off, have a more difficult time accessing health care.
Meanwhile, infertility services tend to be more acceptable among white women, with financial means and higher education, than among women and men in other demographic groups. Couples in this group often delay childbearing until education is complete and careers are started, leaving them older and with increased age-related needs for infertility services, as well as the means to afford that care.
I believe the 3% decline in the use of fertility services relates mainly to finances. After the recession of 2008, many infertility physicians saw a decrease in activity as patients were concerned about their families’ finances. Yet, in states where insurance mandates have required infertility services, the costs of care are not very high. In an older study, the cost of infertility coverage per contract-month was only $1.71 after the IVF mandate was added in Massachusetts (Fertil. Steril. 1998;70:22-9). Unfortunately, infertility services were not included in the Patient Protection and Affordable Care Act (Obamacare). Awareness campaigns to both patients and providers, along with lobbying efforts at the state and federal level, could be helpful in expanding knowledge of and access to fertility services in a wider range of demographics.
Dr. David A. Forstein is with the department of obstetrics and gynecology at the University of South Carolina, Columbia. He reported having no relevant financial disclosures. He made these comments in an interview.
FROM THE NATIONAL CENTER FOR HEALTH STATISTICS
Major finding: From 2006 to 2010, approximately 17% of women aged 25-44 reported ever having used any infertility service, a drop of 3 percentage points from 1995, although rates of women aged 35-44 who had ever used infertility services remained fairly stable from 1982 through 1995 and 2006-2010.
Data source: Interviews with 12,279 U.S. women and 10,403 U.S. men, all aged 15-44, conducted from June 2006 through June 2010 for the National Survey of Family Growth.
Disclosures: The report was funded by the National Center for Health Statistics at the Centers for Disease Control and Prevention. No financial disclosures were reported.
Best cost savings seen with RA treatment aimed at fastest remission
A treat-to-target strategy for treating early rheumatoid arthritis yielded better patient outcomes and long-term cost savings than did usual care for patients in the multicenter Dutch Rheumatoid Arthritis Monitoring registry.
"Treat-to-target" refers to a treatment regime whose goal is to reach and maintain remission in patients as quickly as possible using regular monitoring of disease activity and a fixed protocol for adjusting medication.
"After 2 years of treatment, treat-to-target is cost-effective as it comes with higher costs but also with substantially higher effectiveness," reported Marloes Vermeer, a PhD student at the University of Twente in Enschede, the Netherlands, and her colleagues in (BMC Musculoskelet. Disord. 2013 Dec. 13 [doi:10.1186/1471-2474-14-350]). "Our study suggests that treating to the target of remission is the preferred strategy over usual care in early rheumatoid arthritis," they wrote.
The researchers used the incremental cost-effectiveness ratio (ICER) and the incremental cost-utility ratio (ICUR) to analyze costs after determining the volume of care each participant consumed and the cost of each unit of consumption, based on the Dutch Guideline for Cost Analyses and the Dutch Board of Health Insurances. The ICER, found in this study to be 3,591 euros (about US $4,900), represents the cost per additional patient in remission, while the ICUR, found to be 19,410 euros (US $26,530), represents the cost per quality-adjusted life-year (QALY) gained. For both the second and third years of follow-up, the treat-to-target strategy was dominant.
The researchers followed two cohorts of rheumatoid arthritis patients from initial diagnosis through at least 2 years of follow-up at 11 centers participating in the Dutch Rheumatoid Arthritis Monitoring registry (DREAM). Both the treat-to-target and usual care groups had been diagnosed according to American College of Rheumatology 1987 classification criteria, with symptoms for less than a year and no past treatment with disease-modifying antirheumatic drugs (DMARDs). The two groups were comparable in age, sex, rheumatoid factor positivity, number of tender joints, and erythrocyte sedimentation rate.
The treat-to-target cohort, initially composed of 261 patients diagnosed between January 2006 and February 2009, received initial treatment with methotrexate monotherapy and then sulfasalazine, which was replaced with anti-tumor necrosis factor (TNF) agents if disease activity continued. Remission was defined as a Disease Activity Score in 28 joints (DAS28) of less than 2.6, after which medication was not changed until remission had been sustained for at least 6 months. After 6 months of remission, medication use was gradually discontinued. After clinic visits every 1-3 months for the first year, patients were assessed every 3 months in the second and third years.
In the usual care group, initially composed of 213 patients diagnosed between January 2000 and February 2009, DAS28 was assessed every 3 months by rheumatology nurses but not usually provided to the treating rheumatologist. Medication regimes were determined without a set protocol by the rheumatologist, most frequently involving "step-up or sequential monotherapy with conventional DMARDs and/or biologic, notably anti-TNF."
Among the patients in the treat-to-target cohort, 64.4% of 261 patients were in remission after 2 years, and 59.8% of 127 patients were in remission after 3 years (P less than .001). Among the patients in the usual care cohort, 34.7% of 213 patients were in remission after 2 years, and 35% of 180 patients were in remission after 3 years (P less than .001). The median QALYs in both the second and third years were higher for the treat-to-target cohort, rising from 1.45 in the second year (compared with 1.39 in the usual care group, P = .04) to 2.19 in the third year (compared with 2.04 in the usual care group, P = .05).
Direct costs per patient after 2 years were greater in the treat-to-target group, at 4,791 euros per patient, than in the usual care group, at 3,727 euros per patient, a difference driven primarily by hospitalization and anti-TNF therapy costs. In the treat-to-target group, 21.5% of patients received anti-TNF therapy over the first 2 years, with a mean time of 58 weeks until the first anti-TNF agent was started. Meanwhile, 15% of the usual care group received anti-TNF therapy over the first 2 years, with a mean time of 80 weeks until the first anti-TNF agent was started.
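As a rough check on the ICER described above, here is a minimal sketch that recomputes it from the rounded 2-year costs and remission rates reported in this article; it only approximates the published 3,591-euro value, which was derived from the study’s exact data.

```python
# Rough reconstruction of the 2-year ICER from the rounded figures reported in
# this article; the published value (3,591 euros) was calculated from the
# study's exact data, so this approximation only shows the order of magnitude.
cost_t2t = 4791.0        # 2-year direct cost per patient, treat-to-target (euros)
cost_usual = 3727.0      # 2-year direct cost per patient, usual care (euros)
remission_t2t = 0.644    # 2-year remission proportion, treat-to-target
remission_usual = 0.347  # 2-year remission proportion, usual care

# ICER = incremental cost divided by incremental effectiveness
# (here, the additional proportion of patients reaching remission).
icer = (cost_t2t - cost_usual) / (remission_t2t - remission_usual)
print(f"Approximate 2-year ICER: {icer:,.0f} euros per additional patient in remission")
# Prints roughly 3,582 euros, in line with the reported 3,591 euros; the ICUR
# takes the same form with QALYs gained in the denominator.
```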
By the third year of follow-up, however, the 6,872 euro costs per patient in the usual care group exceeded the 6,410 euro costs per patient in the treat-to-target group, a difference driven primarily by hospitalization costs. The treat-to-target strategy was determined to be dominant in both the second and third years of follow-up.
Ms. Vermeer and her colleagues wrote that they expected cost savings with the treat-to-target regime to continue increasing over the long term, with better, earlier disease control also allowing for work participation and productivity within society and overall improved quality of life for patients. "Our expectation is that the extra effort and time spent in the first years of the disease ultimately result in a reduction of the number of consultations later in the disease course and the possibility of tapering and discontinuing medication in case of sustained remission, thereby diminishing costs," they wrote.
An unrestricted grant from Abbott in the Netherlands funded the study. The authors reported no disclosures.
A treat-to-target strategy for treating early rheumatoid arthritis yielded better patient outcomes and long-term cost savings than did usual care for patients in the multicenter Dutch Rheumatoid Arthritis Monitoring registry.
"Treat-to-target" refers to a treatment regime whose goal is to reach and maintain remission in patients as quickly as possible using regular monitoring of disease activity and a fixed protocol for adjusting medication.
"After 2 years of treatment, treat-to-target is cost-effective as it comes with higher costs but also with substantially higher effectiveness," reported Marloes Vermeer, a PhD student at the University of Twente in Enschede, the Netherlands, and her colleagues in (BMC Musculoskelet. Disord. 2013 Dec. 13 [doi:10.1186/1471-2474-14-350]). "Our study suggests that treating to the target of remission is the preferred strategy over usual care in early rheumatoid arthritis," they wrote.
The researchers used the incremental cost-effectiveness ratio (ICER) and the incremental cost-utility ratio (ICUR) to analyze costs after determining the participants’ volume of care and the cost for each volume of consumption, based on the Dutch Guideline for Cost Analyses and the Dutch Board of Health Insurances. The ICER, found in this study to be 3,591 euros (about US $4,900), represents the costs per one more patient in remission while the ICUR, found to be 19,410 euros (US $26,530) in this study, represents the costs per quality-adjusted life-year (QALY) gained. For both the second and third years of follow-up, the treat-to-target strategy was dominant.
The researchers followed two cohorts of rheumatoid arthritis patients from initial diagnosis through at least 2 years of follow-up at 11 centers participating in the Dutch Rheumatoid Arthritis Monitoring registry (DREAM). Both the target-to-treat and usual care groups had been diagnosed according to American College of Rheumatology 1987 classification criteria with symptoms for less than a year and no past treatment with disease-modifying antirheumatic drugs (DMARDs). Comparable age, sex, rheumatoid factor positivity, number of tender joints, and erythrocyte sedimentation rate characteristics existed among both groups.
The treat-to-target cohort, initially composed of 261 patients diagnosed between January 2006 and February 2009, involved initial treatment with methotrexate monotherapy and then sulfasalazine, which was replaced with antitumor necrosis factor (TNF) agents if disease activity continued. Remission was defined as a Disease Activity Score in 28 joints (DAS28) of less than 2.6, after which medication was not changed until remission had been sustained for at least 6 months. After 6 months of remission, medication use was gradually discontinued. After clinic visits every 1-3 months for the first year, patients were assessed every 3 months in the second and third years.
In the usual care group, initially composed of 213 patients diagnosed between January 2000 and February 2009, DAS28 was assessed every 3 months by rheumatology nurses but not usually provided to the treating rheumatologist. Medication regimes were determined without a set protocol by the rheumatologist, most frequently involving "step-up or sequential monotherapy with conventional DMARDs and/or biologic, notably anti-TNF."
Among the patients in the treat-to-target cohort, 64.4% of 261 patients were in remission after 2 years, and 59.8% of 127 patients were in remission after 3 years (P less than .001). Among the patients in the usual care cohort, 34.7% of 213 patients were in remission after 2 years, and 35% of 180 patients were in remission after 3 years (P less than .001). The median QALYs in both the second and third years were higher for the treat-to-target cohort, rising from 1.45 in the second year (compared with 1.39 in the usual care group, P = .04) to 2.19 in the third year (compared with 2.04 in the usual care group, P = .05).
Direct costs per patient after 2 years were initially greater in the treat-to-target group, at 4,791 per patient, than in the usual care group, costing 3,727 euros per patient, a difference driven primarily by hospitalization and anti-TNF therapy costs. The treat-to-target group included 21.5% of patients receiving anti-TNF therapy over the first 2 years, with a mean time of 58 weeks until the first anti-TNF agent was started. Meanwhile, 15% of the usual care group received anti-TNF therapy over the first 2 years, with a mean time of 80 weeks until the first anti-TNF agent was started.
By the third year of follow-up, however, the 6,872 euro costs per patient in the usual care group exceeded the 6,410 euro costs per patient in the treat-to-target group, a difference driven primarily by hospitalization costs. The treat-to-target strategy was determined to be dominant in both the second and third years of follow-up.
Ms. Vermeer and her colleagues wrote that they expected cost savings with the treat-to-target regime to continue increasing over the long term, with better, earlier disease control also allowing for work participation and productivity within society and overall improved quality of life for patients. "Our expectation is that the extra effort and time spent in the first years of the disease ultimately result in a reduction of the number of consultations later in the disease course and the possibility of tapering and discontinuing medication in case of sustained remission, thereby diminishing costs," they wrote.
An unrestricted grant from Abbott in the Netherlands funded the study. The authors reported no disclosures.
A treat-to-target strategy for treating early rheumatoid arthritis yielded better patient outcomes and long-term cost savings than did usual care for patients in the multicenter Dutch Rheumatoid Arthritis Monitoring registry.
"Treat-to-target" refers to a treatment regime whose goal is to reach and maintain remission in patients as quickly as possible using regular monitoring of disease activity and a fixed protocol for adjusting medication.
"After 2 years of treatment, treat-to-target is cost-effective as it comes with higher costs but also with substantially higher effectiveness," reported Marloes Vermeer, a PhD student at the University of Twente in Enschede, the Netherlands, and her colleagues in (BMC Musculoskelet. Disord. 2013 Dec. 13 [doi:10.1186/1471-2474-14-350]). "Our study suggests that treating to the target of remission is the preferred strategy over usual care in early rheumatoid arthritis," they wrote.
The researchers used the incremental cost-effectiveness ratio (ICER) and the incremental cost-utility ratio (ICUR) to analyze costs after determining the participants’ volume of care and the cost for each volume of consumption, based on the Dutch Guideline for Cost Analyses and the Dutch Board of Health Insurances. The ICER, found in this study to be 3,591 euros (about US $4,900), represents the costs per one more patient in remission while the ICUR, found to be 19,410 euros (US $26,530) in this study, represents the costs per quality-adjusted life-year (QALY) gained. For both the second and third years of follow-up, the treat-to-target strategy was dominant.
The researchers followed two cohorts of rheumatoid arthritis patients from initial diagnosis through at least 2 years of follow-up at 11 centers participating in the Dutch Rheumatoid Arthritis Monitoring registry (DREAM). Both the target-to-treat and usual care groups had been diagnosed according to American College of Rheumatology 1987 classification criteria with symptoms for less than a year and no past treatment with disease-modifying antirheumatic drugs (DMARDs). Comparable age, sex, rheumatoid factor positivity, number of tender joints, and erythrocyte sedimentation rate characteristics existed among both groups.
The treat-to-target cohort, initially composed of 261 patients diagnosed between January 2006 and February 2009, involved initial treatment with methotrexate monotherapy and then sulfasalazine, which was replaced with antitumor necrosis factor (TNF) agents if disease activity continued. Remission was defined as a Disease Activity Score in 28 joints (DAS28) of less than 2.6, after which medication was not changed until remission had been sustained for at least 6 months. After 6 months of remission, medication use was gradually discontinued. After clinic visits every 1-3 months for the first year, patients were assessed every 3 months in the second and third years.
In the usual care group, initially composed of 213 patients diagnosed between January 2000 and February 2009, DAS28 was assessed every 3 months by rheumatology nurses but was not routinely provided to the treating rheumatologist. Medication regimens were determined by the rheumatologist without a set protocol, most frequently involving "step-up or sequential monotherapy with conventional DMARDs and/or biologic, notably anti-TNF."
Among the patients in the treat-to-target cohort, 64.4% of 261 patients were in remission after 2 years, and 59.8% of 127 patients were in remission after 3 years (P less than .001). Among the patients in the usual care cohort, 34.7% of 213 patients were in remission after 2 years, and 35% of 180 patients were in remission after 3 years (P less than .001). The median QALYs in both the second and third years were higher for the treat-to-target cohort, rising from 1.45 in the second year (compared with 1.39 in the usual care group, P = .04) to 2.19 in the third year (compared with 2.04 in the usual care group, P = .05).
After 2 years, direct costs per patient were higher in the treat-to-target group (4,791 euros) than in the usual care group (3,727 euros), a difference driven primarily by hospitalization and anti-TNF therapy costs. In the treat-to-target group, 21.5% of patients received anti-TNF therapy over the first 2 years, with a mean of 58 weeks until the first anti-TNF agent was started; in the usual care group, 15% of patients received anti-TNF therapy, with a mean of 80 weeks until the first agent was started.
By the third year of follow-up, however, costs per patient in the usual care group (6,872 euros) exceeded those in the treat-to-target group (6,410 euros), a difference driven primarily by hospitalization costs. The treat-to-target strategy was determined to be dominant (that is, both less costly and more effective) in both the second and third years of follow-up.
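As a rough illustration of how these summary measures work, the sketch below recomputes the incremental ratios from the published 2-year per-patient costs, remission rates, and QALYs. It is a back-of-the-envelope reconstruction, not the authors' patient-level analysis, so its outputs only approximate the published ICER of 3,591 euros and ICUR of 19,410 euros, mainly because it uses rounded group-level figures rather than patient-level data.

```python
# Back-of-the-envelope reconstruction (not the study's patient-level analysis)
# of the incremental cost-effectiveness ratio (ICER) and incremental
# cost-utility ratio (ICUR) from the 2-year figures reported above.

def incremental_ratio(cost_new, cost_usual, effect_new, effect_usual):
    """Extra cost per additional unit of effect (one more patient in
    remission for the ICER, or one QALY gained for the ICUR)."""
    return (cost_new - cost_usual) / (effect_new - effect_usual)

# Published 2-year values: direct costs per patient (euros), proportion of
# patients in remission, and median QALYs.
cost_t2t, cost_uc = 4791.0, 3727.0
remission_t2t, remission_uc = 0.644, 0.347
qaly_t2t, qaly_uc = 1.45, 1.39

icer = incremental_ratio(cost_t2t, cost_uc, remission_t2t, remission_uc)
icur = incremental_ratio(cost_t2t, cost_uc, qaly_t2t, qaly_uc)
print(f"ICER ~ {icer:,.0f} euros per additional patient in remission")  # ~3,582
print(f"ICUR ~ {icur:,.0f} euros per QALY gained")                      # ~17,733
```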
Ms. Vermeer and her colleagues wrote that they expected cost savings with the treat-to-target regimen to continue increasing over the long term, with better, earlier disease control also supporting work participation, productivity within society, and overall improved quality of life for patients. "Our expectation is that the extra effort and time spent in the first years of the disease ultimately result in a reduction of the number of consultations later in the disease course and the possibility of tapering and discontinuing medication in case of sustained remission, thereby diminishing costs," they wrote.
An unrestricted grant from Abbott in the Netherlands funded the study. The authors reported no disclosures.
FROM BMC MUSCULOSKELETAL DISORDERS
Major Finding: A treat-to-target regimen for early rheumatoid arthritis cost 4,791 euros per patient after 2 years and 6,410 euros per patient after 3 years, with remission rates of 64.4% and 59.8%, respectively, compared with usual care’s cost of 3,727 euros after 2 years and 6,872 euros after 3 years, with remission rates of 34.7% and 35%, respectively (P less than .001).
Data Source: The findings are based on 2011 cost prices determined in a prospective comparison of two cohorts of 474 total rheumatoid arthritis patients, diagnosed between 2000 and 2009 and treated at 1 of 11 centers participating in the Dutch Rheumatoid Arthritis Monitoring registry.
Disclosures: An unrestricted grant from Abbott in the Netherlands funded the study. The authors reported no disclosures.
Pregnancy rate continues downward trend to 12-year low in 2009
The lowest pregnancy rate in more than a decade was reported for 2009, largely driven by declines among teenagers and, to a lesser extent, women under age 30 years.
The 2009 rate of 102.1 pregnancies per 1,000 women was 12% lower than the 1990 peak of 115.8 pregnancies per 1,000 women, reported Sally C. Curtin of the Centers for Disease Control and Prevention’s National Center for Health Statistics and her colleagues in the December issue of an NCHS Data Brief.
Continuing a trend of declining pregnancies, especially since 2007, the rate drop occurred across both married and unmarried women, although the rate among women over age 30 continued to climb gradually from 1990 rates. "It has been suggested that the declining economy, beginning in 2007, has likely played a role in the decreased rates for women under age 40," the authors wrote (NCHS Data Brief, December 2013; No. 136:1-7).
The age group with the highest pregnancy rate also shifted slightly: Those aged 25-29 led with 1.69 million pregnancies in 2009, compared with 1.61 million pregnancies among women aged 20-24. In 1990 and 2000, women aged 20-24 had the highest pregnancy rates.
The most substantial declines in pregnancy rates occurred among teenagers. The rate among all teens in 2009 was 37.9/1,000, down 39% from the 1991 peak of 61.8 pregnancies per 1,000 teens, and the fall has continued through 2012, when the rate reached an all-time low of 29.4/1,000 teens. Younger teens in particular have contributed to the decline, with the rate among girls aged 15-17 down 53% since 1990; the rate among 18- and 19-year-olds fell by more than a third (36%) over the same period.
Although black and Hispanic teen pregnancy rates remained at least double those among white teens, declines were seen across all three groups: white and black teen pregnancy rates fell 50%, and the Hispanic rate fell 40%, between 1990 and 2009. The authors noted that the declines in teen pregnancy rates parallel trends showing shrinking percentages of sexually experienced teens and increasing rates of contraception use at first intercourse.
The drop in pregnancies has been accompanied by a drop in abortions, a decline that has continued since 1980 and brought the 2009 abortion rate to its lowest level since 1976. The 2009 rate of 18.5 abortions per 1,000 women is 32% lower than the 1990 rate of 27.4/1,000 women.
However, abortion rates varied greatly between married and unmarried women: The 2009 rate for unmarried women, 28.9/1,000, was almost five times the 6.1/1,000 rate among married women. Accordingly, married women’s 2009 birth rate of 85.6 live births per 1,000 women was 72% higher than the 49.9/1,000 rate among unmarried women.
The data are pulled from three federal sources on live births, abortions, and fetal losses. All states report live births to the CDC’s National Center for Health Statistics through the vital statistics cooperative program of the National Vital Statistics System. Most states report abortion data to the CDC’s National Center for Chronic Disease Prevention and Health Promotion, after which the data are adjusted by the Guttmacher Institute to estimate national totals.
The NCHS National Survey of Family Growth provided data for estimates of fetal losses from the 1995, 2002, and 2006-2010 surveys, plus 1988 data used only for adolescent estimates. The report was funded by the Centers for Disease Control and Prevention.
FROM THE NATIONAL CENTER FOR HEALTH STATISTICS DATA BRIEF
Major Finding: U.S. pregnancy rates hit a 12-year low in 2009, with a rate of 102.1/1,000 women aged 15-44 years, down 12% since 1990’s peak of 115.8/1,000, because of large declines in teen pregnancy rates and lesser declines in pregnancy rates among women under 30.
Data Source: State reports of live births through the CDC’s vital statistics cooperative program of the National Vital Statistics System, abortion surveillance data reported by most states to the CDC’s National Center for Chronic Disease Prevention and Health Promotion, and fetal loss data collected in the 1995, 2002, and 2006-2010 National Surveys of Family Growth.
Disclosures: Funding for the report came from the CDC.
Vitamin K deficiency bleeding in infants tied to insufficient parent knowledge of risks
More than a quarter of infants born in Nashville area birthing centers – eight times the rate among infants born in one Nashville area hospital – did not receive vitamin K after birth in 2013 because parents declined it, a study has found.
The findings resulted from an investigation following four Nashville cases of late vitamin K deficiency bleeding in infants whose parents declined postpartum vitamin K administration, reported Dr. Michael Warren and his colleagues at the Centers for Disease Control and Prevention (MMWR 2013;62:901-2). No cases were found from 2007 to 2012 among Tennessee hospital discharge data.
Dr. Warren’s team found, based on random sampling, that 3.4% of 3,080 infants born in one of three Nashville area hospitals surveyed and 28% of 218 infants born in four Tennessee nonhospital birthing centers did not receive intramuscular vitamin K after birth.
The parents of these children identified "concern about an increased risk for leukemia when vitamin K is administered, an impression that the injection was unnecessary, and a desire to minimize the newborn’s exposure to ‘toxins’ " as reasons for declining vitamin K administration after birth.
The survey of Nashville neonates was initiated after four confirmed cases of late vitamin K deficiency bleeding were diagnosed between February and September 2013 at a Nashville children’s hospital.
"The four infants had laboratory-confirmed coagulopathy, defined as elevation of prothrombin time greater than or equal to four times the laboratory limit of normal, correctable by vitamin K administration, and symptomatic bleeding," the report noted.
Among the infants, diagnosed between 6 and 15 weeks old, one had gastrointestinal bleeding and three had diffuse intracranial hemorrhage. All survived, but one with intracranial hemorrhage has an apparent gross motor deficit, and all three with hemorrhage are being followed by neurologists. The parents had been unaware of vitamin K deficiency bleeding risks.
Intramuscular vitamin K injections have been recommended by the American Academy of Pediatrics since 1961 to prevent vitamin K deficiency bleeding.
Early vitamin K deficiency bleeding occurs within 24 hours of birth, primarily in infants whose mothers took medication that inhibits vitamin K, such as antiepileptics or isoniazid, during pregnancy. Classic cases can occur within the first week of life as a result of natural vitamin K level decreases, and late cases generally occur between 2 and 24 weeks after birth, primarily in infants with liver disease or related conditions. Classic cases occur in 0.25%-1.7% of births without vitamin K administration. Late vitamin K deficiency bleeding occurs among 4.4-7.2 per 100,000 infants without vitamin K administration.
Apart from one 1992 report in the British Medical Journal, whose findings have not been replicated, no studies have found an association between vitamin K injections and childhood cancer (BMJ 1992;305:341-6).
Funding for the current report came from the Centers for Disease Control and Prevention.
FROM THE MORBIDITY AND MORTALITY WEEKLY REPORT
Major Finding: Among 3,080 infants born at one hospital and 218 infants born in birthing centers, 3.4% and 28%, respectively, did not receive vitamin K after birth because parents declined vitamin K administration; four confirmed cases of vitamin K deficiency bleeding had occurred.
Data Source: A random sample of infants born from February to October 2013 at three hospitals and four birthing centers in the Nashville, Tenn., area.
Disclosures: Funding for the report came from the Centers for Disease Control and Prevention.