Sharon Worcester is an award-winning medical journalist for MDedge News. She has been with the company since 1996, first as the Southeast Bureau Chief (1996-2009) when the company was known as International Medical News Group, then as a freelance writer (2010-2015) before returning as a reporter in 2015. She previously worked as a daily newspaper reporter covering health and local government. Sharon currently reports primarily on oncology and hematology. She has a BA from Eckerd College and an MA in Mass Communication/Print Journalism from the University of Florida. Connect with her via LinkedIn and follow her on Twitter @SW_MedReporter.
Hypoglycemia in diabetes occurs across glycemic control levels
Earn 0.25 hours AMA PRA Category 1 credit: Read this article, and click the link at the end to take the post-test.
Hypoglycemia is common among patients being treated for type 2 diabetes, regardless of their level of glycemic control, according to findings from the Diabetes Study of Northern California (DISTANCE) survey.
The findings challenge the conventional wisdom that hypoglycemia occurs only among those with the lowest hemoglobin A1c levels, reported Dr. Kasia J. Lipska of Yale University, New Haven, Conn., and colleagues. The study was published online July 30 in Diabetes Care.
Among 9,094 adults with diabetes who participated in the survey, 11% reported experiencing severe hypoglycemia in the past year, with nearly 1 in 4 (24%) reporting more than three events during that time period. In an unadjusted analysis, those with HbA1c at the highest and lowest levels were most likely to experience hypoglycemia, the investigators said (Diabetes Care 2013 July 30 [doi: 10.2337/dc13-0610]).
Compared with those with "good" HbA1c levels of 7.0%-7.9%, the relative risk of hypoglycemia was 1.25 in those with "near normal" levels of less than 6%, 1.01 in those with "very good" levels of 6.0%-6.9%, 0.99 in those with "suboptimal" levels of 8.0%-8.9%, and 1.16 among those with "very poor" levels of 9% or greater, they explained.
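For readers who want to check the arithmetic, a relative risk is simply the ratio of the event proportion in a comparison group to that in the reference group. The sketch below uses made-up counts chosen only to illustrate the calculation; they are not figures from the DISTANCE paper.

```python
def relative_risk(events_group: int, n_group: int,
                  events_ref: int, n_ref: int) -> float:
    """Ratio of the event proportion in a comparison group
    to the event proportion in the reference group."""
    return (events_group / n_group) / (events_ref / n_ref)

# Hypothetical counts (illustrative only): 125 of 1,000 patients with
# "near normal" HbA1c report hypoglycemia vs. 100 of 1,000 in the
# "good" reference group, reproducing an RR of 1.25.
rr = relative_risk(125, 1000, 100, 1000)
print(round(rr, 2))  # 1.25
```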
However, the elevated relative risk was statistically significant only for those with HbA1c of 9% or greater, they noted. Adjustment for demographic variables did not "alter the shape of the relationship," and the point estimates for hypoglycemia risk remained higher at the two extremes of glycemic control, but the differences in risk between each HbA1c category and the reference group were no longer statistically significant in the fully adjusted model.
Study participants were patients with type 2 diabetes aged 30-77 years, who were members of the Kaiser Permanente Northern California Diabetes Registry. The patients, who were using glucose-lowering therapy, were surveyed during 2005-2006 about episodes of severe hypoglycemia, such as episodes during which they passed out and/or required medical assistance.
Patients most likely to report hypoglycemic episodes were women; those taking more than four medications for chronic conditions; those using either insulin or a secretagogue during the preobservation period; and those with longer duration of disease, prior history of hypoglycemia, and multiple comorbidities, the investigators noted.
After examining the prevalence of hypoglycemia across potential effect modifiers, however, they found no significant interactions.
Prior studies have resulted in inconsistent findings about the relationship between glucose control and hypoglycemic events, but findings from the current study suggest that "hypoglycemia occurs across all levels of HbA1c, with higher risk associated with near-normal or very poor glycemic control," the investigators said.
The findings are important, they said, because several studies have shown that patients who experience severe hypoglycemia are at increased risk for a number of unfavorable health outcomes, including dementia, falls, fall-related fractures, cardiovascular events, poor health–related quality of life, and increased mortality. The researchers noted that in addition to efforts to limit adverse effects of overtreatment and to improve patient outcomes, efforts to consider the safety of the various glucose-lowering therapies in patients with higher HbA1c levels are needed.
"Poorly controlled diabetes appears to be associated with both higher risk of diabetic complications and higher risk of treatment-related hypoglycemia. Therefore, quality improvement efforts must balance the need to improve glucose levels with safety of antihyperglycemic therapy in this group," they said.
The study was limited by several factors, including self-reported hypoglycemia without laboratory confirmation and an upper age limit of 77 years, which limits applicability to the oldest patients with diabetes. Nonetheless, the findings in this usual-care setting underscore the importance of directing efforts to improve the safety of glucose-lowering therapies not only at patients achieving intensive glucose control, but also at those with poorly controlled disease, the investigators concluded, noting that "future analyses are needed to identify management strategies and treatment factors that may mitigate hypoglycemia risk."
The DISTANCE survey was funded by numerous grants, including grants to individual authors or their employers from the National Institute on Aging and the National Heart, Lung, and Blood Institute.
To earn 0.25 hours AMA PRA Category 1 credit after reading this article, take the post-test here.
FROM DIABETES CARE
Major finding: The relative risk of hypoglycemia was greatest in those with "near normal" and "very poor" HbA1c levels (RR, 1.25 and 1.16, respectively), compared with those with "good" HbA1c levels.
Data source: A survey of more than 9,000 patients.
Disclosures: The DISTANCE survey was funded by numerous grants, including grants to individual authors or their employers from the National Institute on Aging and the National Heart, Lung, and Blood Institute.
USPSTF systematic review supports CT screening for lung cancer
Low-dose computed tomography reduces lung cancer mortality and all-cause mortality when used as a screening tool in asymptomatic adults at high risk for the disease, according to the results of a systematic review conducted for the U.S. Preventive Services Task Force.
In 2004, the USPSTF deemed the evidence insufficient for recommending for or against low-dose computed tomography (LDCT) for lung cancer screening in asymptomatic individuals, but the findings of the current review suggest screening has a definite benefit for most patients, Dr. Linda L. Humphrey, of Oregon Health and Science University and the Portland Veterans Affairs Medical Center, and her colleagues reported.
A draft recommendation based on the findings, published online in the July 30 issue of Annals of Internal Medicine (2013 July 29 [doi: 10.7326/0003-4819-159-6-201309170-00690]), is available on the USPSTF website for public comment.
The researchers conducted a review of the literature published between 2000 and May 2013 and identified four trials that reported findings on the efficacy of LDCT screening in patients with smoking exposure for both intervention and control groups. Three small trials showed varying degrees of benefit with screening, but were underpowered; one large trial – the National Lung Screening Trial (NLST) – showed a significant 20% reduction in lung cancer mortality among those screened, as well as a 6.7% reduction in all-cause mortality.
The randomized multicenter NLST compared annual LDCT scans with annual single-view posterior-anterior chest radiographs for 3 years in more than 53,000 current or former smokers aged 55-74 years with at least a 30–pack-year history of smoking (N. Engl. J. Med. 2013;368:1980-91). One cancer death was prevented for every 320 patients who completed one screening, and one death from any cause was prevented for every 219 patients screened in that study; the trial was stopped early after 6.5 years of follow-up based on the findings.
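The "one death prevented per 320 screened" figure is a number needed to screen, the reciprocal of the absolute risk reduction between the screened and control arms. A minimal sketch of that arithmetic follows; the absolute risk reduction used here is an illustrative round number consistent with the reported NNS, not a value quoted from the NLST report.

```python
def number_needed_to_screen(absolute_risk_reduction: float) -> float:
    """NNS = 1 / absolute risk reduction (difference in event
    proportions between control and screened groups)."""
    return 1.0 / absolute_risk_reduction

# An absolute reduction of ~0.31 percentage points in lung cancer
# mortality (illustrative value) corresponds to roughly 320 patients
# screened per death prevented.
nns = number_needed_to_screen(0.003125)
print(round(nns))  # 320
```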
In general, the benefits of LDCT for lung cancer screening in this population outweighed the risks, Dr. Humphrey and her colleagues noted.
Harms associated with LDCT, according to findings from 7 trials and 13 cohort studies that reported on such outcomes, included radiation exposure, overdiagnosis, and a high rate of false-positive findings that were resolved by further imaging in most cases. False negatives were reported in six studies, and the rates ranged from 0% to 20%, but none of the studies evaluated the harm of false reassurance, the investigators noted. The benefits of screening must be weighed against these potential harms.
Lung cancer is the third most common cancer among men and women in the United States, but is the leading cause of cancer-related deaths, accounting for nearly 27%. Furthermore, about 85% of U.S. lung cancer cases are attributable to smoking, and since about 20% of Americans currently smoke – and many more are former smokers who remain at increased risk because of their smoking history – "lung cancer will remain a major public health problem in this country for decades," the investigators wrote.
The studies included in this review were conducted in patients at high risk for lung cancer based on current or former smoking. However, patients at an increased risk for lung cancer, including older adults and those with a family history of lung cancer, chronic obstructive pulmonary disease, pulmonary fibrosis, and certain environmental and occupational exposures, may also benefit from LDCT screening.
"Future research to identify methods for focusing LDCT screening on persons at highest risk for disease, to improve discrimination between benign and malignant pulmonary nodules, and to find early indicators of aggressive disease is warranted," the investigators noted. "If LDCT screening becomes routine, the risk for harms should be measured and methods to limit them should be identified."
The review was funded by grants from the Agency for Healthcare Research and Quality and the Portland Veterans Affairs Medical Center. Dr. Humphrey is employed by the Department of Veterans Affairs, and she and her coauthors disclosed ties with UpToDate, the USPSTF, AHRQ, the Department of Veterans Affairs, the American Lung Association, the Chest/LUNGevity Foundation, the National Lung Cancer Partnership, and/or the American College of Chest Physicians.
Low-dose computed tomography reduces lung cancer mortality and all-cause mortality when used as a screening tool in asymptomatic adults at high risk for the disease, according to the results of a systematic review conducted for the U.S. Preventive Services Task Force.
In 2004, the USPSTF deemed the evidence insufficient to recommend for or against low-dose computed tomography (LDCT) screening for lung cancer in asymptomatic individuals, but the findings of the current review suggest screening has a definite benefit for most patients, Dr. Linda L. Humphrey, of Oregon Health and Science University and the Portland Veterans Affairs Medical Center, and her colleagues reported.
A draft recommendation based on the findings, published online in the July 30 issue of Annals of Internal Medicine (2013 July 29 [doi: 10.7326/0003-4819-159-6-201309170-00690]), is available on the USPSTF website for public comment.
The researchers conducted a review of the literature published between 2000 and May 2013 and identified four trials that reported findings on the efficacy of LDCT screening in patients with smoking exposure for both intervention and control groups. Three small trials showed varying degrees of benefit with screening, but were underpowered; one large trial – the National Lung Screening Trial (NLST) – showed a significant 20% reduction in lung cancer mortality among those screened, as well as a 6.7% reduction in all-cause mortality.
The randomized multicenter NLST compared annual LDCT scans with annual single-view posterior-anterior chest radiographs for 3 years in more than 53,000 current or former smokers aged 55-74 years with at least a 30–pack-year history of smoking (N. Engl. J. Med. 2013;368:1980-91). One cancer death was prevented for every 320 patients who completed one screening, and one death from any cause was prevented for every 219 patients screened in that study; the trial was stopped early after 6.5 years of follow-up based on the findings.
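The number-needed-to-screen figures above follow directly from the absolute risk reduction. As a quick sanity check (a minimal back-of-the-envelope sketch, not an analysis from the study):

```python
def number_needed_to_screen(absolute_risk_reduction):
    """Number needed to screen: the reciprocal of the absolute risk
    reduction (ARR), i.e., how many people must complete screening to
    prevent one event."""
    return 1.0 / absolute_risk_reduction

# Working backward from the reported figures: an NNS of 320 for lung
# cancer death implies an ARR of about 0.31 percentage points, and an
# NNS of 219 for death from any cause implies an ARR of about 0.46.
print(round(100 / 320, 2))  # 0.31
print(round(100 / 219, 2))  # 0.46
```

The same relationship runs in both directions: halving the absolute risk reduction doubles the number of patients who must be screened to prevent one death.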
In general, the benefits of LDCT for lung cancer screening in this population outweighed the risks, Dr. Humphrey and her colleagues noted.
Harms associated with LDCT, according to findings from seven trials and 13 cohort studies that reported on such outcomes, included radiation exposure, overdiagnosis, and a high rate of false-positive findings that were resolved by further imaging in most cases. False negatives were reported in six studies, and the rates ranged from 0% to 20%, but none of the studies evaluated the harm of false reassurance, the investigators noted. The benefits of screening must be weighed against these potential harms.
Lung cancer is the third most common cancer among men and women in the United States, but is the leading cause of cancer-related deaths, accounting for nearly 27% of all cancer deaths. Furthermore, about 85% of U.S. lung cancer cases are attributable to smoking, and since about 20% of Americans currently smoke – and many more are former smokers who remain at increased risk because of their smoking history – "lung cancer will remain a major public health problem in this country for decades," the investigators wrote.
The studies included in this review were conducted in patients at high risk for lung cancer based on current or former smoking. However, patients at an increased risk for lung cancer, including older adults and those with a family history of lung cancer, chronic obstructive pulmonary disease, pulmonary fibrosis, and certain environmental and occupational exposures, may also benefit from LDCT screening.
"Future research to identify methods for focusing LDCT screening on persons at highest risk for disease, to improve discrimination between benign and malignant pulmonary nodules, and to find early indicators of aggressive disease is warranted," the investigators noted. "If LDCT screening becomes routine, the risk for harms should be measured and methods to limit them should be identified."
The review was funded by grants from the Agency for Healthcare Research and Quality and the Portland Veterans Affairs Medical Center. Dr. Humphrey is employed by the Department of Veterans Affairs, and she and her coauthors disclosed ties with UpToDate, the USPSTF, AHRQ, the Department of Veterans Affairs, the American Lung Association, the Chest/LUNGevity Foundation, the National Lung Cancer Partnership, and/or the American College of Chest Physicians.
FROM ANNALS OF INTERNAL MEDICINE
Major finding: One large trial showed a significant 20% reduction in lung cancer mortality among those screened, as well as a 6.7% reduction in all-cause mortality.
Data source: A systematic review of LDCT efficacy findings published between 2000 and May 2013.
Disclosures: The review was funded by grants from the Agency for Healthcare Research and Quality and the Portland Veterans Affairs Medical Center. Dr. Humphrey is employed by the Department of Veterans Affairs, and she and her coauthors disclosed ties with UpToDate, the USPSTF, AHRQ, the Department of Veterans Affairs, the American Lung Association, the Chest/LUNGevity Foundation, the National Lung Cancer Partnership, and/or the American College of Chest Physicians.
Study suggests cardiovascular benefit with TNF-alpha blockade
Patients with rheumatoid arthritis who began taking a tumor necrosis factor–alpha blocking agent had a lower cardiovascular event risk during the subsequent 6 months than did those who took a nonbiologic disease-modifying antirheumatic drug in a large, observational study of several administrative and health plan datasets.
The incidence rate for a composite cardiovascular endpoint through the first 6 months of follow-up was 2.52/100 person-years in 11,587 subjects with rheumatoid arthritis (RA) who added a TNF-alpha blocker to their treatment regimen, compared with a rate of 3.05/100 person-years in 8,656 subjects with RA who added a nonbiologic DMARD, Dr. Daniel H. Solomon of Brigham and Women’s Hospital, Boston, and his colleagues reported.
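An incidence rate of 2.52/100 person-years means 2.52 events occurred for every 100 patient-years of follow-up. The arithmetic can be sketched as follows (the event counts here are illustrative, not taken from the paper; the crude rate ratio is only a rough cross-check against the adjusted hazard ratios the authors report):

```python
def incidence_rate_per_100py(events, person_years):
    """Crude incidence rate expressed per 100 person-years of follow-up."""
    return 100.0 * events / person_years

# Illustrative numbers only: 50 events over 1,984 person-years gives
# roughly the TNF-blocker group's reported rate of 2.52/100 person-years.
print(round(incidence_rate_per_100py(50, 1984), 2))  # 2.52

# The two reported rates imply a crude rate ratio of about 0.83
# (the adjusted hazard ratios reported by the authors are 0.80 and 0.71).
print(round(2.52 / 3.05, 2))  # 0.83
```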
The findings were most pronounced among those aged 65 years and older (hazard ratio, 0.52), and the benefits waned after 6 months.
"In both the first exposure carried forward and the as-treated analyses, the cardiovascular event–free survival curves diverged over the first 6 months. The hazard ratio for the TNF-alpha blocking agent, compared with the nonbiologic DMARD, was 0.80 in the first exposure carried forward analysis and 0.71 in the as-treated analysis at 6 months. However, by 12 months the curves appeared to converge, with the hazard ratios approaching the null," the investigators explained (Am. J. Med. 2013;126:730.e9-17).
The study involved participants older than 16 years (mean age of 56 years) in several U.S. insurance programs. Those included in the current study – which was part of a larger study collaborative known as SABER (Safety Assessment of Biologic Therapy) – were RA patients using methotrexate who were either adding or switching to a TNF-alpha blocker or a nonbiologic DMARD.
The study’s primary endpoint consisted of a composite cardiovascular outcome of myocardial infarction, stroke, or coronary revascularization; each of these components alone represented a secondary study endpoint.
"The component cardiovascular outcomes followed a similar pattern [to the composite outcome], except stroke, where the incidence rates were similar across exposures," the investigators noted.
Despite the limitations inherent in an observational study, the findings may have important clinical implications, they said.
"As greater evidence accumulates for the role of inflammation in atherosclerosis, consideration has been given to the use of immunosuppressive treatment regimens in cardiovascular disease. While statins and aspirin may reduce cardiovascular risk in part through their anti-inflammatory properties, targeting cytokines known to be part of cardiovascular disease is an attractive therapeutic option. Because the use of potent immunosuppressive agents is common in a systemic inflammatory condition such as rheumatoid arthritis, studying the effect of these agents on cardiovascular disease may provide important insights into the potential role of this strategy in the general population."
The results of the current study generally agree with prior findings demonstrating a link between TNF-alpha blocker use and reduced risk of cardiovascular outcomes.
"TNF-alpha appears to affect several aspects of cardiovascular disease, such as plaque stabilization, endothelial function, and postinfarct remodeling. Thus, one would anticipate that blockade of TNF-alpha would reduce ischemic cardiovascular outcomes. This finding supports the inflammatory underpinnings of cardiovascular disease and highlights a potential role for immunosuppression in cardiovascular risk reduction," they wrote, concluding that randomized controlled clinical trials testing targeted immunosuppression to reduce cardiovascular risk are warranted.
This study was supported by the Agency for Healthcare Research and Quality and the Food and Drug Administration. Dr. Solomon reported receiving research grants from Abbott, Amgen, and Lilly, and serving in unpaid roles on two Pfizer trials. He also directed an educational course supported by Bristol-Myers Squibb, and serves as a consultant to the Consortium of Rheumatology Researchers of North America Inc. (CORRONA). Other study authors reported receiving research grants from, and/or serving as a consultant for Amgen, Abbott, BMS, Centocor, Genentech/Roche, Janssen, Pfizer, UCB, and CORRONA.
FROM THE AMERICAN JOURNAL OF MEDICINE
Major finding: Composite cardiovascular endpoint incidence rates were 2.52/100 person-years for TNF-alpha blockers, compared with 3.05 for nonbiologic DMARDs.
Data source: A large, observational study including more than 20,000 RA patients.
Disclosures: This study was supported by the Agency for Healthcare Research and Quality and the Food and Drug Administration. Dr. Solomon reported receiving research grants from Abbott, Amgen, and Lilly, and serves in unpaid roles on two Pfizer trials. He also directed an educational course supported by Bristol-Myers Squibb (BMS), and serves as a consultant to CORRONA. Other study authors reported receiving research grants from, and/or serving as a consultant for Amgen, Abbott, BMS, Centocor, Genentech/Roche, Janssen, Pfizer, UCB, and CORRONA.
Stem cell mutations in breast cancer may confer metastatic risk
The likelihood of nodal metastases is increased in breast cancer patients whose tumors have breast cancer stem and progenitor cells with defects in PI3K/Akt signaling.
The findings, drawn from an analysis of surgical specimens, "support embarking on a new way of approaching breast cancer diagnosis and treatment planning" and have potential implications for targeted treatment with PI3K/Akt inhibitors currently being tested in clinical trials, Dr. Cory A. Donovan and his colleagues at the Oregon Health and Science University, Portland, reported online July 24 in JAMA Surgery.
The researchers evaluated malignant and benign stem cells from 30 fresh surgical specimens of ductal breast cancers. Nine of the 10 specimens with mutations in breast cancer stem and progenitor cells (BCSCs) were associated with axillary lymph node micro- or macrometastases. Just 4 of the 20 tumors without mutations were associated with axillary lymph node metastases, the investigators said (JAMA Surg. 2013 July 17 [doi:10.1001/jamasurg.2013.3028]).
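From these counts, the crude relative risk of nodal metastases with versus without BCSC mutations works out to 4.5 (a back-of-the-envelope check on the reported numbers, not a statistic the authors themselves report):

```python
def relative_risk(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """Ratio of the event proportion in the exposed group to that in
    the unexposed group."""
    return (exposed_events / exposed_total) / (unexposed_events / unexposed_total)

# 9 of 10 mutation-positive tumors vs. 4 of 20 mutation-negative tumors
# with axillary node metastases: 0.90 / 0.20 = 4.5.
print(round(relative_risk(9, 10, 4, 20), 2))  # 4.5
```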
The tumor specimens were collected from patients with stage IA through IIIB cancers; stem and progenitor cells were isolated via cell sorting and subjected to whole genomic amplification and subsequent screening for oncogene mutations. The 10 tumors with BCSC defects had AKT1, HRAS, or PIK3CA mutations.
"Three different mutations (E545k, N345k, and H1047R) were detected in PIK3CA, a single mutation was detected in AKT1, and a single mutation was detected in HRAS," the investigators wrote.
No difference in CD44 positivity was observed between BCSCs with and without mutations.
"When the presence of any BCSC mutation correlated with patient and breast cancer characteristics, no statistically significant correlations were found with patient age at diagnosis, tumor size, tumor histologic grade, estrogen receptor expression, progesterone receptor expression, or ERBB2 status. However, a statistically significant correlation was observed between the presence of BCSC mutations and axillary lymph node metastases. This significance was more pronounced when micrometastatic disease was included," they said.
At a mean follow-up of 29 months, disease had progressed after treatment in 3 of the 10 patients with BCSC mutations. Two patients died of disease; one had brain metastases. In contrast, there was no evidence of disease at a mean follow-up of 19 months in the patients with BCSCs without mutations.
Since 20% of patients without BCSC mutations had axillary lymph node metastases, it appears that a PI3K/Akt mutation in BCSC is not a requirement for metastases, but the link between PI3K and metastatic potential demonstrated in this study suggests that "micrometastases harboring PI3K/Akt mutations may carry a different risk for distant metastatic disease," Dr. Donovan and his associates noted.
"Longer patient follow-up periods and a larger sample size will determine if this subset of patients demonstrates an increased risk and may benefit from specifically designed use of adjuvant chemotherapy," the researchers added.
The existing evidence regarding the prognostic significance of specific PI3K/Akt signaling pathway mutations is conflicting, likely because of the variability of the mutations, the heterogeneity of the tumors, and the complexity of the pathway. Although the findings in BCSCs in the current study are consistent with others showing that PIK3CA and AKT mutations in breast cancers are associated with factors that may indicate poor prognosis and decreased survival, other studies have demonstrated improved survival, lower tumor grades, and increased rates of estrogen receptor positivity in patients with tumors that have PIK3CA mutations, they said.
"Our study findings indicate that the answer to this controversy may lie in identifying mutations in BCSCs, as well as mutations in the tumor as a whole," they said, adding that the findings support an evaluation of BCSCs along with overall breast cancer assessment.
"The analysis of BCSCs can generate specific information about tumor growth and metastatic potential that may not be obtained from analysis of the tumor progeny cells alone. Simultaneous molecular analyses of both the tumor and BCSCs may better identify patients who are likely to benefit from specific therapeutic regimens. Similarly, simultaneous BCSC and tumor analysis may increase the number of patients who might benefit from treatment but be missed by tumor analysis alone," Dr. Donovan and his coworkers said.
PI3K/Akt signaling pathway inhibitors are currently being evaluated in clinical trials and could prove useful for the treatment of patients with BCSC mutations, they noted, adding that this may be true even in cases without PI3K/Akt mutations.
"The use of BCSC-specific and tumor-targeted chemotherapeutic agents may prove to be synergistic with each other, providing a novel therapeutic approach," they said.
This study was supported by a grant from the Janet E. Bowen Foundation. The authors reported having no disclosures.
"Our study findings indicate that the answer to this controversy may lie in identifying mutations in BCSCs, as well as mutations in the tumor as a whole," they said, adding that the findings support an evaluation of BCSCs along with overall breast cancer assessment.
"The analysis of BCSCs can generate specific information about tumor growth and metastatic potential that may not be obtained from analysis of the tumor progeny cells alone. Simultaneous molecular analyses of both the tumor and BCSCs may better identify patients who are likely to benefit from specific therapeutic regimens. Similarly, simultaneous BCSC and tumor analysis may increase the number of patients who might benefit from treatment but be missed by tumor analysis alone," Dr. Donovan and his coworkers said.
PI3K/Akt signaling pathway inhibitors are currently being evaluated in clinical trials and could prove useful for the treatment of patients with BCSC mutations, they noted, adding that this may be true even in cases without a PI3K/Akt mutation.
"The use of BCSC-specific and tumor-targeted chemotherapeutic agents may prove to be synergistic with each other, providing a novel therapeutic approach," they said.
This study was supported by a grant from the Janet E. Bowen Foundation. The authors reported having no disclosures.
FROM JAMA SURGERY
Major finding: Nine of 10 specimens with mutations in breast cancer stem and progenitor cells, as compared with 4 of 20 tumors without BCSC mutations, were associated with axillary lymph node metastases.
Data source: An analysis of surgical specimens from 30 breast cancers.
Disclosures: This study was supported by a grant from the Janet E. Bowen Foundation. The authors reported having no disclosures.
LARCs hold key to reducing unplanned pregnancy rate
Little has changed over the years with respect to the proportion of unplanned pregnancies in the United States, but the emergence – and increasing acceptance – of safe and reliable long-acting reversible contraceptives, or LARCs, offers hope for improved reproductive management and outcomes.
Currently, about half of the 6.7 million pregnancies that occur each year in the United States are unplanned, and while that is a startling figure, more startling is the fact that although the distribution has changed – with decreases in unplanned pregnancies among wealthier women, and increases among low-income women and minorities – the percentage hasn’t changed in decades, Dr. Eve Espey said during a clinical seminar on contraception at the annual meeting of the American College of Obstetricians and Gynecologists in New Orleans.
Further, while women who use no form of birth control account for only 11% of those at risk, their pregnancies make up about 50% of unplanned pregnancies overall; that means the other half of unplanned pregnancies occur in women who use at least one form of birth control, said Dr. Espey of the University of New Mexico, Albuquerque.
This is a problem that likely involves both contraceptive failures and user error.
"One of the things that I think we don’t appreciate is the extent to which women who do use contraceptives use them incorrectly or inconsistently, or use methods that have a high failure rate," she said.
These statistics, and the fact that unintended pregnancies are associated with an increased risk of numerous adverse outcomes, such as preterm birth and neonatal intensive care unit stays, underscore the importance of identifying and promoting contraceptive methods that will help women achieve better regulation of fertility, she said.
LARCs, according to burgeoning research – and a recent American College of Obstetricians and Gynecologists committee opinion – are the answer.
LARCs, including intrauterine devices and the contraceptive implant, should be first-line recommendations for all women and adolescents, according to an October 2012 opinion from the Committee on Adolescent Health Care LARC Working Group (Obstet. Gynecol. 2012;120:983-8). With both perfect and typical use, these contraceptive methods are associated with pregnancy rates of less than 1% per year – far better than reported rates among those using short-acting contraceptive methods such as condoms, oral contraceptives, the contraceptive patch, the vaginal ring, and depot medroxyprogesterone acetate injections, according to the committee opinion. Yet the use of short-acting methods, and particularly the use of oral contraceptives, dwarfs the use of LARC methods.
The use of IUDs is now about 7.5% – a substantial and encouraging increase over the 5.5% reported in recent years, but still far less than the 15%-20% of women who report oral contraceptive use, Dr. Espey said.
Findings from the Contraceptive CHOICE Project – a prospective cohort study designed to promote the use of LARCs among women and adolescents in the St. Louis area, and to reduce the rate of unintended pregnancies in the region, demonstrated that the unintended pregnancy rate was more than 20-fold greater with short-acting vs. LARC methods at 2- to 3-year follow-up. The rate was twice as high in adolescents as in adults (N. Engl. J. Med. 2012;366:1998-2007).
The CHOICE Project included 9,256 women who received a brief educational intervention and access to their contraceptive method of choice free of charge. The majority – 75% – chose LARC methods, suggesting that when cost and access barriers are removed, the typically low use of these highly effective methods (about 5.5% at the time of the study) increases substantially. The increased use of LARC methods was associated with an unplanned pregnancy rate of 35 per 1,000 women, compared with the national rate of 52 per 1,000 women, Dr. Jeffrey Peipert, the lead investigator for the project, said at the meeting.
Moreover, the continuation rate, which is strongly associated with outcomes, was 86% among LARC users at 12 months, compared with 55% for short-acting methods in a separate analysis of data from more than 4,100 project participants (Obstet. Gynecol. 2011;117:1105-13).
The abortion rate among CHOICE Project participants was 6 per 1,000 at follow-up, compared with the national rate of 20 per 1,000. The number needed to treat to prevent 1 abortion was 108.
"These are very reasonable numbers," said Dr. Peipert of Washington University in St. Louis, noting that the findings are all the more astounding given that the CHOICE population was much higher risk than the general population due to younger age, a high percentage of African Americans (about 50%), and lower socioeconomic status (about 40% had trouble affording basic necessities and 40% were uninsured).
If all women in the United States had access to LARC methods, more than 1 million unplanned pregnancies and nearly 900,000 abortions could be prevented each year, according to a CHOICE Project video he shared during his presentation.
"So if we truly want to reduce abortion in this country, what we need to do is increase contraceptive prevalence, and, in particular, talk about the advantages of LARCs," Dr. Espey said, referencing the CHOICE Project findings.
The currently available LARC methods include intrauterine devices and systems (the copper IUD and the hormonal intrauterine systems Mirena and Skyla) and the contraceptive implant (Nexplanon).
Skyla (Bayer HealthCare), approved in January, is the newest system on the market. Compared with Mirena (Bayer HealthCare) – a hormonal intrauterine system that has been available since 2000 – the new system uses less levonorgestrel (14 mcg vs. 20 mcg), has a smaller frame and inserter tube that have been shown to be less painful on insertion in nulliparous women, is associated with more abnormal bleeding, and is approved for 3 years (vs. 5 years) of use, Dr. Espey said.
In her experience, some women prefer the 3- vs. 5-year product, even after they are told that the 5-year product can be removed early, she noted.
"It shouldn’t make a difference, but psychologically it does," she said.
As for the contraceptive implant, Nexplanon (Merck) is the latest-generation product, having replaced its predecessor, Implanon. The major difference between the two is that Nexplanon, which is approved for 3 years of use, can be seen on x-ray. Also, it only requires one hand for insertion, improving ease of use.
Overall, the implant, which works by preventing ovulation, is easy to learn, and is safe and highly effective, with very few contraindications, according to Dr. Tony Ogburn, who also spoke at the ACOG meeting.
Unpredictable bleeding can be an issue for some women, and is the most common reason for removal. Counseling and education, along with reassurance about Nexplanon’s safety, can promote continuation, he said.
Physicians previously certified to insert Implanon can take an online training course to become certified for Nexplanon insertion; those not previously trained must attend a live course, said Dr. Ogburn of the University of New Mexico, Albuquerque.
Deciding which LARC method is appropriate in a given patient can be somewhat daunting, but an app available from the Centers for Disease Control and Prevention’s U.S. Medical Eligibility Criteria (US MEC) for Contraceptive Use can help.
Dr. Espey, who swears by the app – even for "non-app people" like herself – said that it provides evidence-based reviews of every type of contraceptive method lined up against various patient characteristics and conditions, and coordinates recommendations.
For those averse to using an app, a chart is also available. Notably, the chart shows that most contraceptive use is safe.
There are a lot of misconceptions on the part of patients about the safety of one contraceptive method or another, and the fact is that contraception is "overmedicalized," Dr. Espey said.
Contraceptives prevent pregnancy, which is inherently more dangerous than contraception in most cases, she added.
Of course, the low use of LARC methods is hardly the only hurdle when it comes to improving the unintended pregnancy rate in the United States. Compared with European countries that have extremely low unintended pregnancy rates, the United States has a lack of comprehensive sex education that begins at a young age, and a greater cultural acceptance of teen motherhood. Progress in the United States also is hampered by patriarchal attitudes that may give men control over reproductive health, and by a mismatch in cultural values that is apparent in the "wildly sexual" U.S. media and the puritanical views that limit conversations with children about sex and sexuality as a normal part of human behavior, Dr. Espey said.
Additional hurdles include poverty, racism, and inadequate social and health care safety nets, she noted.
That’s not to say, however, that major inroads can’t be made by promoting LARC use. A flurry of research presented at the ACOG annual meeting that focused on various approaches to increasing use among patients highlights the increasing focus on, and commitment to, helping patients take control of their fertility. One study, for example, showed that the use of a short, simple counseling intervention – much like the one used for the CHOICE Project – is feasible and effective for promoting LARC use when provided in the immediate postpartum period. Another suggested that while a postpartum educational script increased interest in LARC methods, certain barriers to access may limit uptake.
Dr. Espey said that it is important to focus first and foremost on LARC methods when counseling a patient about contraception.
"So often I think we approach contraceptive counseling as if we have to tell everybody about all the methods as if they were all equal, but in other kinds of medication we would naturally lean toward recommending methods that are most highly effective," she said.
For contraception, that’s intrauterine devices and implants, she added.
Dr. Peipert agreed, noting, "If we had a pill for hypertension that was 20-fold less effective, we wouldn’t offer it first line."
Not only are LARC methods the most effective contraceptive methods, but under the right circumstances they also have a high rate of acceptability, as demonstrated by the CHOICE Project, he said.
They also have the potential to dramatically reduce health care costs.
"We believe that family planning saves dollars. We spend over $11 billion each year on unintended pregnancy. No-cost contraception and wide access to contraception can prevent unintended pregnancy and save health care dollars," he said, adding that every dollar spent on family planning can save $3 or $4 down the road – and because of their effectiveness, the savings are even greater with LARC use.
"We really have an opportunity to impact public health. It’s been decades where we’ve been stuck at a rate of unintended pregnancy in the U.S. of close to 50%, and now, if we can shift our emphasis to LARC methods, I think we will finally see a reduction in unintended pregnancies," he said.
Dr. Espey and Dr. Ogburn reported having no disclosures. Dr. Peipert has received research funding from Bayer and Merck. The CHOICE Project was funded by an anonymous donation.
Little has changed over the years with respect to the proportion of unplanned pregnancies in the United States, but the emergence – and increasing acceptance – of safe and reliable long-acting reversible contraceptives, or LARCs, offers hope for improved reproductive management and outcomes.
Currently, about half of the 6.7 million pregnancies that occur each year in the United States are unplanned, and while that is a startling figure, more startling is the fact that although the distribution has changed – with decreases in unplanned pregnancies among wealthier women, and increases among low-income women and minorities – the percentage hasn’t changed in decades, Dr. Eve Espey said during a clinical seminar on contraception at the annual meeting of the American College of Obstetricians and Gynecologists in New Orleans.
Further, while only about 11% of women at risk of unintended pregnancy use no form of birth control, those women account for about half of all unplanned pregnancies; that means the other half of unplanned pregnancies occur in women who use at least one form of birth control, said Dr. Espey of the University of New Mexico, Albuquerque.
This is a problem that likely involves both contraceptive failures and user error.
"One of the things that I think we don’t appreciate is the extent to which women who do use contraceptives use them incorrectly or inconsistently, or use methods that have a high failure rate," she said.
These statistics, and the fact that unintended pregnancies are associated with an increased risk of numerous adverse outcomes, such as preterm birth and neonatal intensive care unit stays, underscore the importance of identifying and promoting contraceptive methods that will help women achieve better regulation of fertility, she said.
LARCs, according to burgeoning research – and a recent American College of Obstetricians and Gynecologists committee opinion – are the answer.
LARCs, including intrauterine devices and the contraceptive implant, should be first-line recommendations for all women and adolescents, according to an October 2012 opinion from the Committee on Adolescent Health Care LARC Working Group (Obstet. Gynecol. 2012;120:983-8). With both perfect and typical use, these contraceptive methods are associated with pregnancy rates of less than 1% per year – far better than reported rates among those using short-acting contraceptive methods such as condoms, oral contraceptives, the contraceptive patch, the vaginal ring, and depot medroxyprogesterone acetate injections, according to the committee opinion. Yet the use of short-acting methods, and particularly the use of oral contraceptives, dwarfs the use of LARC methods.
The use of IUDs is now about 7.5% – a substantial and encouraging increase over the 5.5% reported in recent years, but still far less than the 15%-20% of women who report oral contraceptive use, Dr. Espey said.
Findings from the Contraceptive CHOICE Project – a prospective cohort study designed to promote the use of LARCs among women and adolescents in the St. Louis area and to reduce the rate of unintended pregnancies in the region – demonstrated that the unintended pregnancy rate was more than 20-fold greater with short-acting vs. LARC methods at 2- to 3-year follow-up. The rate was twice as high in adolescents as in adults (N. Engl. J. Med. 2012;366:1998-2007).
The CHOICE Project included 9,256 women who received a brief educational intervention and access to their contraceptive method of choice free of charge. The majority – 75% – chose LARC methods, suggesting that when cost and access barriers are removed, the typically low use of these highly effective methods (about 5.5% at the time of the study) increases substantially. The increased use of LARC methods was associated with an unplanned pregnancy rate of 35 per 1,000 women, compared with the national rate of 52 per 1,000 women, Dr. Jeffrey Peipert, the lead investigator for the project, said at the meeting.
Moreover, the continuation rate, which is strongly associated with outcomes, was 86% among LARC users at 12 months, compared with 55% for short-acting methods in a separate analysis of data from more than 4,100 project participants (Obstet. Gynecol. 2011;117:1105-13).
The abortion rate among CHOICE Project participants was 6 per 1,000 at follow-up, compared with the national rate of 20 per 1,000. The number needed to treat to prevent 1 abortion was 108.
"These are very reasonable numbers," said Dr. Peipert of Washington University in St. Louis, noting that the findings are all the more astounding given that the CHOICE population was much higher risk than the general population due to younger age, a high percentage of African Americans (about 50%), and lower socioeconomic status (about 40% had trouble affording basic necessities and 40% were uninsured).
If all women in the United States had access to LARC methods, more than 1 million unplanned pregnancies and nearly 900,000 abortions could be prevented each year, according to a CHOICE Project video he shared during his presentation.
"So if we truly want to reduce abortion in this country, what we need to do is increase contraceptive prevalence, and, in particular, talk about the advantages of LARCs," Dr. Espey said, referencing the CHOICE Project findings.
The currently available LARC methods include intrauterine devices and systems (the copper IUD and the hormonal intrauterine systems Mirena and Skyla) and the contraceptive implant (Nexplanon).
Skyla (Bayer HealthCare), approved in January, is the newest system on the market. Compared with Mirena (Bayer HealthCare) – a hormonal intrauterine system that has been available since 2000 – the new system uses less levonorgestrel (14 mcg vs. 20 mcg), has a smaller frame and inserter tube that have been shown to be less painful on insertion in nulliparous women, is associated with more abnormal bleeding, and is approved for 3 years (vs. 5 years) of use, Dr. Espey said.
In her experience, some women prefer the 3- vs. 5-year product, even after they are told that the 5-year product can be removed early, she noted.
"It shouldn’t make a difference, but psychologically it does," she said.
As for the contraceptive implant, Nexplanon (Merck) is the latest-generation product, having replaced its predecessor, Implanon. The major difference between the two is that Nexplanon, which is approved for 3 years of use, can be seen on x-ray. Also, it only requires one hand for insertion, improving ease of use.
Overall, the implant, which works by preventing ovulation, is easy to learn, and is safe and highly effective, with very few contraindications, according to Dr. Tony Ogburn, who also spoke at the ACOG meeting.
Unpredictable bleeding can be an issue for some women, and is the most common reason for removal. Counseling and education, along with reassurance about Nexplanon’s safety, can promote continuation, he said.
Physicians previously certified to insert Implanon can take an online training course to become certified for Nexplanon insertion; those not previously trained must attend a live course, said Dr. Ogburn of the University of New Mexico, Albuquerque.
Deciding which LARC method is appropriate in a given patient can be somewhat daunting, but an app available from the Centers for Disease Control and Prevention’s U.S. Medical Eligibility Criteria (US MEC) for Contraceptive Use can help.
Dr. Espey, who swears by the app – even for "non-app people" like herself – said that it provides evidence-based reviews of every type of contraceptive method lined up against various patient characteristics and conditions, and coordinates recommendations.
For those averse to using an app, a chart is also available. Notably, the chart shows that most contraceptive use is safe.
There are a lot of misconceptions on the part of patients about the safety of one contraceptive method or another, and the fact is that contraception is "overmedicalized," Dr. Espey said.
Contraceptives prevent pregnancy, which is inherently more dangerous than contraception in most cases, she added.
Of course, the low use of LARC methods is hardly the only hurdle when it comes to improving the unintended pregnancy rate in the United States. Compared with European countries that have extremely low unintended pregnancy rates, the United States lacks comprehensive sex education that begins at a young age, and has a greater cultural acceptance of teen motherhood. Progress in the United States also is hampered by patriarchal attitudes that may give men control over reproductive health, and by a mismatch in cultural values that is apparent in the "wildly sexual" U.S. media and the puritanical views that limit conversations with children about sex and sexuality as a normal part of human behavior, Dr. Espey said.
Additional hurdles include poverty, racism, and inadequate social and health care safety nets, she noted.
That’s not to say, however, that major inroads can’t be made by promoting LARC use. A flurry of research presented at the ACOG annual meeting that focused on various approaches to increasing use among patients highlights the increasing focus on, and commitment to, helping patients take control of their fertility. One study, for example, showed that the use of a short, simple counseling intervention – much like the one used for the CHOICE Project – is feasible and effective for promoting LARC use when provided in the immediate postpartum period. Another suggested that while a postpartum educational script increased interest in LARC methods, certain barriers to access may limit uptake.
Dr. Espey said that it is important to focus first and foremost on LARC methods when counseling a patient about contraception.
"So often I think we approach contraceptive counseling as if we have to tell everybody about all the methods as if they were all equal, but in other kinds of medication we would naturally lean toward recommending methods that are most highly effective," she said.
For contraception, that’s intrauterine devices and implants, she added.
Dr. Peipert agreed, noting, "If we had a pill for hypertension that was 20-fold less effective, we wouldn’t offer it first line."
Not only are LARC methods the most effective contraceptive methods, but under the right circumstances they also have a high rate of acceptability, as demonstrated by the CHOICE Project, he said.
They also have the potential to dramatically reduce health care costs.
"We believe that family planning saves dollars. We spend over $11 billion each year on unintended pregnancy. No-cost contraception and wide access to contraception can prevent unintended pregnancy and save health care dollars," he said, adding that every dollar spent on family planning can save $3 or $4 down the road – and because of their effectiveness, the savings are even greater with LARC use.
"We really have an opportunity to impact public health. It’s been decades where we’ve been stuck at a rate of unintended pregnancy in the U.S. of close to 50%, and now, if we can shift our emphasis to LARC methods, I think we will finally see a reduction in unintended pregnancies," he said.
Dr. Espey and Dr. Ogburn reported having no disclosures. Dr. Peipert has received research funding from Bayer and Merck. The CHOICE Project was funded by an anonymous donation.
Vitamin D deficiency in elderly linked to functional limitations
Older adults with vitamin D deficiency are more likely than those with adequate vitamin D levels to have functional limitations, findings from the Longitudinal Aging Study Amsterdam suggest.
These limitations became apparent within 3 years in a cohort aged 65-88 years, compared with 6 years in one aged 55-65 years.
In a cohort of 1,237 adults who were aged 65-88 years at baseline, 56% had at least one functional limitation, and in a second cohort of 725 adults aged 55-65 years at baseline, 30% had at least one functional limitation. Those in the older cohort who had vitamin D deficiency, defined as a 25-hydroxyvitamin D (25[OH]D) level of less than 20 ng/mL, had a nearly twofold increased risk of having a functional limitation, compared with those who had a 25(OH)D level greater than 30 ng/mL (odds ratio, 1.7), and those in the younger cohort who had vitamin D deficiency had more than a twofold increased risk (OR, 2.1), according to Dr. Evelien Sohl and her colleagues from the VU University Medical Center, Amsterdam.
Vitamin D status also was associated with the number of functional limitations cross-sectionally, the investigators reported online July 17 in the Journal of Clinical Endocrinology and Metabolism.
"In the fully adjusted models, participants in the older cohort with serum 25(OH)D levels of less than 20 ng/mL had a 1.6 times higher odds for having 1 more functional limitation than participants in the reference category. In the younger cohort, this odds ratio was 1.9," they said (J. Clin. Endocrinol. Metab. 2013 July 17 [doi: 10.1210/jc.2013-1698]).
The investigators also found that vitamin D deficiency was associated with an increase of two or more additional limitations at 3 years in the older cohort (OR, 2.0), but not at 6 years, while the association between vitamin D deficiency and an increase of two or more limitations did not reach significance until after 6 years in the younger cohort (OR, 3.3).
"Age and sex did not significantly modify the associations between serum 25(OH)D and functional status. Relevant confounders were age, sex, body mass index, number of chronic diseases, level of education, and level of urbanization," they said.
Specific functional limitations associated with vitamin D status in the older cohort included walking stairs (OR, 1.8), cutting toenails (OR, 1.5), and walking outside (OR, 2.1). Statistical power was insufficient for analyzing functional limitations in the younger cohort separately because of the low number of limitations in that group.
The Longitudinal Aging Study Amsterdam is an ongoing prospective cohort study of the older Dutch population. Functional limitations were assessed by an interviewer-administered questionnaire. Participants were asked about their ability and degree of difficulty with respect to walking up and down a staircase of 15 steps without resting; dressing and undressing oneself; sitting down and standing up from a chair; cutting one’s toenails; walking outside for 5 minutes without resting; and using one’s own or public transportation. Limitations were defined as any difficulty with performing a specific activity.
Although research on the association between vitamin D status and functional limitations is "scarce and not conclusive," according to the investigators, the findings "are in line with the results of most studies on vitamin D status and physical performance."
"Because functional limitations are a predictor of adverse outcomes, further research is necessary to explore underlying mechanisms and the potential benefits of vitamin D supplements on functional status," they concluded.
The study authors reported having no disclosures.
The findings by Dr. Sohl and colleagues provide "one more piece of evidence that vitamin D deficiency has adverse consequences, especially for an aging population whose reserves are dwindling," said Dr. Robert P. Heaney.
The findings extend upon those from prior studies, which, taken together, leave little doubt that vitamin D status and functional ability have a dose-dependent relationship, and that improving vitamin D levels in those with vitamin D deficiency would be of benefit, he said in an interview.
"The only real question is not if it works, but what level is necessary to achieve the desired effect," he said, noting the importance of striving for ancestral intake levels. In the Dutch population in this study, the ancestral level – the level that preagricultural ancestors obtained from the environment (and a good indicator of the level that would be optimal today) – was probably in the 50-ng/mL range, which is much higher than the 30-ng/mL level considered adequate by the investigators.
During his participation a few years ago on a panel convened by the American Geriatrics Society and the Centers for Disease Control and Prevention to develop guidelines for supplementation in the elderly, Dr. Heaney polled the nine scientists on the panel about their own vitamin D supplementation and found that on average, their intake was about 5,500 IU/day, which is sufficient to support ancestral intake, he said.
Existing data support that level of intake, and those panelists felt the evidence was quite strong, he noted, adding: "I think it’s useful for clinicians to know that’s where the working scientific community is on this topic."
The recommendations developed by that panel are pending publication, he noted.
Dr. Heaney, a clinical endocrinologist, holds the John A. Creighton University Professorship at Creighton University, Omaha, Neb. He reported having no disclosures.
FROM THE JOURNAL OF CLINICAL ENDOCRINOLOGY AND METABOLISM
Major finding: Older adults with vitamin D deficiency had about a twofold increase in the risk of functional limitations.
Data source: Two cohorts from the prospective Longitudinal Aging Study Amsterdam, including a total of 1,962 older adults.
Disclosures: The authors reported having no disclosures.
Vascular testing appropriate use criteria cover 116 scenarios
Venous duplex ultrasound is rarely appropriate as a screening tool for upper or lower extremity deep vein thrombosis in the absence of pain or swelling, according to new appropriate use criteria for noninvasive vascular laboratory testing issued by the American College of Cardiology.
The clinical scenarios involving venous duplex ultrasound for DVT screening that were deemed rarely appropriate – such as screening in those with a prolonged ICU stay and those with high DVT risk – represent just a few of the 116 scenarios included in the report, which was developed in collaboration with 10 other leading professional societies to promote the most effective and most efficient use of peripheral vascular ultrasound and physiological testing in clinical practice.
The report, published online on July 19 in the Journal of the American College of Cardiology, is the second in a two-part series evaluating noninvasive testing for peripheral vascular disorders. Part I, published last year (J. Am. Coll. Cardiol. 2012;60:242-76), addressed peripheral arterial disorders, and Part II (J. Am. Coll. Cardiol. 2013 July 19 [doi:10.1016/j.jacc.2013.05.001]) addresses venous disease and evaluation of hemodialysis access, according to Dr. Heather Gornik, chair of the Part II writing committee.
"Vascular laboratory tests really play a central role in evaluating patients with peripheral vascular disorders. They are noninvasive, they have good accuracy data, and they don’t require radiation or dye. But we want to make sure the right tests are being ordered for the right reasons," Dr. Gornik, a cardiologist and vascular medicine specialist at the Cleveland Clinic, said in an interview.
Because these tests are low risk and easily accessible, there is concern that they are sometimes used excessively, she explained – specifically mentioning the use of duplex ultrasound for DVT screening as a commonly overused procedure.
"There is very little evidence, if any, to support broad screening for blood clots in someone who has no symptoms," she said.
The goal of the ACC Foundation Appropriate Use Criteria Task Force responsible for developing the criteria was to help clinicians minimize unnecessary testing, and maximize the most effective and efficient testing, she added.
Each of the clinical scenarios that were developed by the writing committee were rated by a technical panel as to whether they represent an "appropriate use," or whether they are "maybe appropriate" or "rarely appropriate."
The various scenarios are listed, along with their rating, in eight "at-a-glance" tables that address the following more general categories: venous duplex of the upper extremities for assessing patency and thrombosis; venous duplex of the lower extremities for assessing patency and thrombosis; duplex evaluation for venous incompetency; venous physiological testing with provocative maneuvers to assess for patency and/or incompetency; duplex of the inferior vena cava and iliac veins for patency and thrombosis; duplex of the hepatoportal system for patency, thrombosis, and flow direction; duplex of the renal vein for patency and thrombosis; and preoperative planning and postoperative assessment of a vascular access site.
Considering venous duplex ultrasound in a patient with acute unilateral limb swelling? Table 1 lists this as an appropriate use. How about duplex evaluation for venous incompetency in a patient with asymptomatic varicose veins? Table 3 says this may be appropriate, but notes that it is rarely appropriate in a patient with spider veins.
The report also covers indications for vascular testing prior to or after placement of hemodialysis access, because "evaluation of the superficial, deep, and central veins of the upper extremity constitutes a large component of these examinations," the report states.
In general, vascular studies were deemed appropriate in the presence of clinical signs and symptoms. The report also shows that the vascular laboratory plays a central role in the evaluation of patients with chronic venous insufficiency, and that preoperative vascular testing for preparing a dialysis access site is appropriate within three months of the procedure – but not for general surveillance of a functional dialysis fistula or graft in the absence of an indication of a problem, such as a palpable mass or swelling in the arm.
The report is not intended to be comprehensive, but rather is an attempt to address common and important clinical scenarios encountered in the patient with manifestations of peripheral vascular disease, the authors noted.
"The beauty of this report is that it spans many disciplines," Dr. Gornik said, noting that numerous parties have an interest in peripheral vascular disease, and that many specialties order vascular laboratory tests.
A number of them were represented in the development of these appropriate use criteria. Collaborating organizations included the American College of Radiology, the American Institute of Ultrasound in Medicine, the American Society of Echocardiography, the American Society of Nephrology, Intersocietal Accreditation Commission, Society for Cardiovascular Angiography and Interventions, the Society of Cardiovascular Computed Tomography, the Society for Interventional Radiology, the Society for Vascular Medicine, and the Society for Vascular Surgery.
While other organizations have developed appropriate use criteria for other modalities, such as cardiac testing, few have specifically addressed vascular testing.
"I hope that these criteria will allow clinicians and vascular laboratories to really focus on doing the highest quality work, and to evaluate their use of vascular testing, maximize the use of the vascular lab, and assure that the right test is done for the right indication and that tests that are not needed are not performed just because they are readily available," she said.
Dr. Gornik disclosed financial or other relationships with Zin Medical, Summit Doppler Systems Inc., the Fibromuscular Dysplasia Society of America, and the Intersocietal Accreditation Commission. A detailed list of disclosures for all Appropriate Use Criteria Task Force Members is included with the full text of the report.
Venous duplex ultrasound is rarely appropriate as a screening tool for upper or lower extremity deep vein thrombosis in the absence of pain or swelling, according to new appropriate use criteria for noninvasive vascular laboratory testing issued by the American College of Cardiology.
The clinical scenarios involving venous duplex ultrasound for DVT screening that were deemed rarely appropriate – such as screening in those with a prolonged ICU stay and those with high DVT risk – represent just a few of the 116 scenarios included in the report, which was developed in collaboration with 10 other leading professional societies to promote the most effective and most efficient use of peripheral vascular ultrasound and physiological testing in clinical practice.
The report, published online on July 19 in the Journal of the American College of Cardiology, is the second in a two-part series evaluating noninvasive testing for peripheral vascular disorders. Part I, published last year (J. Am. Coll. Cardiol. 2012;60:242-76), addressed peripheral arterial disorders, and Part II (J. Am. Coll. Cardiol. 2013 July 19 [doi:10.1016/j.jacc.2013.05.001]) addresses venous disease and evaluation of hemodialysis access, according to Dr. Heather Gornik, chair of the Part II writing committee.
"Vascular laboratory tests really play a central role in evaluating patients with peripheral vascular disorders. They are noninvasive, they have good accuracy data, and they don’t require radiation or dye. But we want to make sure the right tests are being ordered for the right reasons," Dr. Gornik, a cardiologist and vascular medicine specialist at the Cleveland Clinic, said in an interview.
Because these tests are low risk and easily accessible, there is concern that they are sometimes used excessively, she explained – specifically mentioning the use of duplex ultrasound for DVT screening as a commonly overused procedure.
"There is very little evidence, if any, to support broad screening for blood clots in someone who has no symptoms," she said.
The goal of the ACC Foundation Appropriate Use Criteria Task Force responsible for developing the criteria was to help clinicians minimize unnecessary testing, and maximize the most effective and efficient testing, she added.
Each of the clinical scenarios developed by the writing committee was rated by a technical panel as "appropriate," "maybe appropriate," or "rarely appropriate."
The various scenarios are listed, along with their rating, in eight "at-a-glance" tables that address the following more general categories: venous duplex of the upper extremities for assessing patency and thrombosis; venous duplex of the lower extremities for assessing patency and thrombosis; duplex evaluation for venous incompetency; venous physiological testing with provocative maneuvers to assess for patency and/or incompetency; duplex of the inferior vena cava and iliac veins for patency and thrombosis; duplex of the hepatoportal system for patency, thrombosis, and flow direction; duplex of the renal vein for patency and thrombosis; and preoperative planning and postoperative assessment of a vascular access site.
Considering venous duplex ultrasound in a patient with acute unilateral limb swelling? Table 1 lists this as an appropriate use. How about duplex evaluation for venous incompetency in a patient with asymptomatic varicose veins? Table 3 says this may be appropriate, but notes that it is rarely appropriate in a patient with spider veins.
The report also covers indications for vascular testing prior to or after placement of hemodialysis access, because "evaluation of the superficial, deep, and central veins of the upper extremity constitutes a large component of these examinations," the report states.
In general, vascular studies were deemed appropriate in the presence of clinical signs and symptoms. The report also shows that the vascular laboratory plays a central role in the evaluation of patients with chronic venous insufficiency, and that preoperative vascular testing for preparing a dialysis access site is appropriate within 3 months of the procedure – but not for general surveillance of a functional dialysis fistula or graft in the absence of an indication of a problem, such as a palpable mass or swelling in the arm.
The report is not intended to be comprehensive, but rather is an attempt to address common and important clinical scenarios encountered in the patient with manifestations of peripheral vascular disease, the authors noted.
"The beauty of this report is that it spans many disciplines," Dr. Gornik said, noting that numerous parties have an interest in peripheral vascular disease, and that many specialties order vascular laboratory tests.
A number of them were represented in the development of these appropriate use criteria. Collaborating organizations included the American College of Radiology, the American Institute of Ultrasound in Medicine, the American Society of Echocardiography, the American Society of Nephrology, Intersocietal Accreditation Commission, Society for Cardiovascular Angiography and Interventions, the Society of Cardiovascular Computed Tomography, the Society for Interventional Radiology, the Society for Vascular Medicine, and the Society for Vascular Surgery.
While other organizations have developed appropriate use criteria for other modalities, such as cardiac testing, few have specifically addressed vascular testing.
"I hope that these criteria will allow clinicians and vascular laboratories to really focus on doing the highest quality work, and to evaluate their use of vascular testing, maximize the use of the vascular lab, and assure that the right test is done for the right indication and that tests that are not needed are not performed just because they are readily available," she said.
Dr. Gornik disclosed financial or other relationships with Zin Medical, Summit Doppler Systems Inc., the Fibromuscular Dysplasia Society of America, and the Intersocietal Accreditation Commission. A detailed list of disclosures for all Appropriate Use Criteria Task Force Members is included with the full text of the report.
FROM THE JOURNAL OF THE AMERICAN COLLEGE OF CARDIOLOGY
Psychotic symptoms signal adolescent suicide attempt risk
Psychotic symptoms in adolescents are a clinical marker of high risk for suicide attempts, particularly in those with psychopathology, according to findings from the Saving and Empowering Young Lives in Europe study.
Of 1,112 adolescents aged 13-16 years who were part of the randomized, prospective cohort study, 7% reported psychotic symptoms at baseline, and of those, 7% attempted suicide within 3 months, compared with 1% of the rest of the sample (odds ratio, 10.01), and 20% attempted suicide within 12 months, compared with 2.5% of the rest of the sample (OR, 11.27), Dr. Ian Kelleher of Beaumont Hospital, Dublin, and his colleagues reported July 17 online in JAMA Psychiatry.
Among those with baseline psychopathology, 23% reported psychotic symptoms, compared with 4% of the rest of the sample (OR, 8.13), and 14% of those with both psychopathology and psychotic symptoms at baseline reported a suicide attempt by 3 months (OR, 17.91). More than a third (34%) of those with both psychopathology and psychotic symptoms at baseline attempted suicide by 12 months (OR, 32.67), the investigators reported (JAMA Psychiatry 2013 July 17 [doi:10.1001/jamapsychiatry.2013.140]).
"Although the presence of psychopathology without psychotic symptoms at baseline predicted suicide attempts over time, albeit to a lesser degree than psychopathology with psychotic symptoms, some participants who were not experiencing psychotic symptoms at the time of the assessment may have experienced such symptoms closer to the time of their suicide attempt," the investigators noted.
To assess for this possibility, they looked specifically at acute suicide attempts – those occurring within the 2 weeks prior to the 3- and 12-month assessments – and found that subjects with psychopathology who did not report psychotic symptoms did not have significantly increased odds of such suicide attempts (OR, 1.09).
"Those with psychopathology who did experience psychotic symptoms, on the other hand, had a nearly 70-fold increased odds of acute suicide attempts (OR, 67.50)," they wrote, adding that in absolute terms, those with psychotic symptoms comprised less than a quarter of the total groups with psychopathology, but accounted for nearly 80% of the acute suicide attempts in the group.
The association between psychotic symptoms and suicide attempts was not explained by nonpsychotic psychiatric symptom burden or by multimorbidity, the investigators added.
The findings underscore the importance of assessing psychotic symptoms in individuals with suspected psychopathology, and the need to recognize suicide risk when these symptoms are present, they said.
"It is also important to recognize that most such symptoms do not present as true hallucinations; that is, they may occur with intact reality testing and thus may be attenuated rather than frankly psychotic," they noted.
As for why psychotic symptoms predict suicidal behavior, the reasons are not readily clear, but it is possible that psychotic symptoms are a marker of increasing severity of psychopathology, the investigators said.
Also, individuals who experience psychotic symptoms may have increased sensitivity to stress, as well as poor coping skills, which might contribute to a greater risk of suicidal behavior in the face of acute life stressors, they said, adding that shared risk factors for suicidal behavior and psychotic symptoms, such as childhood traumatic experiences, also might play a role.
Given that suicide is among the leading causes of death worldwide, and that suicide risk assessment is recognized as one of the most difficult areas of clinical practice, the findings offer some hope for improving recognition of risk based on the presence of psychotic symptoms.
"An important clinical implication of these findings is the need for a new clinical focus on careful assessment of psychotic symptoms (both attenuated and frank) in patients with nonpsychotic disorders; this should be considered a key element of suicide risk assessment," the investigators said.
Participants in the Saving and Empowering Young Lives in Europe (SEYLE) study – a randomized clinical trial aimed at assessing prevention strategies for suicidal behavior – were school-age students from 11 countries. Those included in the current analysis were students from 17 randomly selected schools in Ireland who completed questionnaires that included information about psychopathology and psychotic symptoms, and who completed 12 months of follow-up.
Additional research, including research among older age groups, is needed, the investigators wrote.
"Further community and clinical research on suicidal behavior and psychotic symptoms would be valuable, as would research on underlying mechanisms that might explain this relationship and research that provides targets for intervention," they concluded
The SEYLE project is supported by a grant from the European Union Seventh Framework Programme. Dr. Kelleher was supported by an Interdisciplinary Capacity Enhancement Award from the Health Research Board Ireland.
FROM JAMA PSYCHIATRY
Major finding: Adolescents with psychopathology and psychotic symptoms had a nearly 70-fold increased odds of acute suicide attempts (OR, 67.50).
Data source: An analysis of data from 1,112 participants in the prospective Saving and Empowering Young Lives in Europe (SEYLE) cohort study.
Disclosures: The SEYLE project is supported by a grant from the European Union Seventh Framework Programme. Dr. Kelleher was supported by an Interdisciplinary Capacity Enhancement Award from the Health Research Board Ireland.
Bipolar disorder strongly tied to premature death
Women and men with bipolar disorder were more likely to die prematurely than were those without bipolar disorder, according to results from a Swedish national cohort study involving nearly 6.6 million adults.
After adjustment for age, marital status, educational level, employment status, and income, all-cause mortality among the 6,618 adults with bipolar disorder in the cohort was increased twofold for both women (adjusted hazard ratio, 2.34) and men (AHR, 2.03), who died an average of 9.0 and 8.5 years earlier, respectively, than did those without bipolar disorder, according to Dr. Casey Crump of Stanford (Calif.) University and his colleagues.
Those with bipolar disorder died prematurely from various causes, including cardiovascular disease, diabetes, chronic obstructive pulmonary disease (COPD), influenza or pneumonia, unintentional injuries, and suicide. Among women, stroke and cancer (particularly colon cancer) were also among the causes of premature death. Suicide was a particular risk for both women and men, who had 10-fold and 8-fold increases in risk, respectively (AHRs, 10.37 and 8.09), but the life expectancy differences were not fully explained by unnatural deaths, the investigators reported July 17 online in JAMA Psychiatry.
The most significant causes of death were influenza or pneumonia (3.7- and 4.4-fold increased risk for women and men, respectively), diabetes (3.6- and 2.6-fold increased risk, respectively), and COPD (2.9- and 2.6-fold increased risk).
In a separate model, the potential mediating effect of substance use disorders also was evaluated, and the effect was found to be modest, the investigators noted.
For chronic diseases, the associations with premature death were weaker among those with a prior medical diagnosis of the condition than among those without one (AHRs, 1.40 vs. 2.38), suggesting that earlier medical diagnosis and treatment might attenuate the increased mortality risk among affected individuals, they said (JAMA Psychiatry 2013 July 17 [doi: 10.1001/jamapsychiatry.2013.1394]).
"More complete provision of primary, preventive medical care among bipolar disorder patients is needed to reduce early mortality in this vulnerable population," they said, noting that multiple underlying mechanisms, including lifestyle factors, pathophysiologic mechanisms, genetic factors, and certain treatments for bipolar disorder, contribute to the disparities.
"The current study found evidence of modestly increased mortality among bipolar disorder patients who used carbamazepine, risperidone, or valproic acid or who solely used olanzapine, whereas users of aripiprazole, quetiapine, or lamotrigine had modestly reduced mortality compared with those who solely used lithium," they said.
However, consistent with prior research, those who used none of these medications had even higher rates of all-cause mortality – and twice the suicide risk – than those who used medication.
Study participants were 3,918 women and 2,700 men aged 20 years or older who lived in Sweden for at least 2 years as of Jan. 1, 2003. They were followed up to assess for physical comorbidities and mortality for 7 years. Bipolar disorder in the cohort was identified by any diagnosis during the preceding 2 years, and by the use of specific medications commonly used for bipolar disorder maintenance treatment.
The findings of this study, which is among the first to examine the association between bipolar disorder and mortality using complete diagnoses for a national population, add to the growing knowledge about factors that contribute to premature mortality in patients with bipolar disorder, but it is unclear to what extent the findings can be generalized to other health care systems, the investigators said.
"The substantial health disparities we found between bipolar disorder patients and the rest of the Swedish population may be even larger in other countries without universal health care," they noted.
This study was supported by a grant from the National Institute on Drug Abuse and an Agreement on Medical Training and Research (Lund, Sweden) project grant. The authors reported having no disclosures.
FROM JAMA PSYCHIATRY
Major finding: All-cause mortality in adults with bipolar disorder in the cohort was increased twofold for both women and men (adjusted hazard ratios, 2.34 and 2.03, respectively).
Data source: A Swedish national cohort study involving nearly 6.6 million adults.
Disclosures: This study was supported by a grant from the National Institute on Drug Abuse, and by an Agreement on Medical Training and Research (Lund, Sweden) project grant. The authors reported having no disclosures.