Pyuria is an important tool in diagnosing UTI in infants
Urinary tract infections (UTIs) in young infants can be presumptively diagnosed by measuring the white blood cell concentration of the patient’s urine, according to a new study.
“Previously recommended pyuria thresholds for the presumptive diagnosis of UTI in young infants were based on manual microscopy of centrifuged urine [but] test performance has not been studied in newer automated systems that analyze uncentrifuged urine,” wrote Pradip P. Chaudhari, MD, and his associates at Harvard University in Boston.
Of the 2,700 infants studied (median age, 1.7 months), 211 (7.8%) had a urine culture positive for UTI. Positive and negative likelihood ratios (LRs) were calculated to determine the microscopic pyuria thresholds at which UTI became more likely in dilute and in concentrated urine. In dilute urine samples, a count of 3 white blood cells per high-power field (WBC/HPF) yielded a positive LR of 9.9 and a negative LR of just 0.15, making it the cutoff for dilute samples. In concentrated urine samples, 6 WBC/HPF yielded a positive LR of 10.1 and a negative LR of 0.17, making it the cutoff for those samples. Leukocyte esterase (LE) thresholds also were determined for dipstick testing; the investigators found that any positive LE result was a strong indicator of UTI.
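As a rough, back-of-the-envelope illustration (not an analysis from the study itself), these likelihood ratios can be combined with the cohort’s 7.8% prevalence, used here as a pre-test probability, to see how much a microscopy result shifts the likelihood of UTI:

```python
# Illustrative sketch only, not the study's analysis: apply the reported
# likelihood ratios to the cohort's UTI prevalence (used as a pre-test
# probability) via the odds form of Bayes' rule.

def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Convert probability to odds, apply the likelihood ratio, convert back."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

pre_test = 211 / 2700  # ~7.8% of infants had culture-confirmed UTI

# Dilute urine, threshold of 3 WBC/HPF: positive LR 9.9, negative LR 0.15
print(round(post_test_probability(pre_test, 9.9), 2))   # ~0.46 if pyuria is present
print(round(post_test_probability(pre_test, 0.15), 2))  # ~0.01 if pyuria is absent
```

Under these assumptions, a result at or above the dilute-urine threshold would raise the probability of UTI from about 8% to roughly 46%, while a result below it would lower the probability to about 1%.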
“The optimal diagnostic threshold for microscopic pyuria varies by urine concentration,” the authors concluded. “For young infants, urine concentration should be incorporated into the interpretation of uncentrifuged urine analyzed by automated microscopic urinalysis systems.”
There was no external funding for this study. Dr. Chaudhari and his coauthors did not report any relevant financial disclosures.
In this issue of Pediatrics, Chaudhari et al. share the results of a study of the impact of urine concentration on the optimal threshold in the new era of automated urinalysis. Centrifugation of urine specimens has long been standard laboratory practice, presumably performed to concentrate sediment and facilitate the detection of cellular elements and bacteria.
However, for the many sites that do not have machines for automated urinalyses (virtually all office practices, for example), the most important finding in this study may well be how well LE [leukocyte esterase] performs regardless of urine concentration. The optimal threshold for LE is not clear, however. The authors use “small” as their threshold for LE. At any threshold, can a negative urinalysis be relied on to exclude the diagnosis of UTI? A “positive” culture without inflammation evident in the urine is likely due to contamination, very early infection (rare), or asymptomatic bacteriuria (positive urine cultures in febrile children can still represent asymptomatic bacteriuria, because the fever may be due to a source other than the urinary tract).
If there are, in fact, some true UTIs without evidence of inflammation from the urinalysis, are they as harmful as those with “pyuria”?
Animal data demonstrate it is the inflammatory response, not the presence of organisms, that causes renal damage in the form of scarring. So the role of using evidence of inflammation in the urine to screen for who needs a culture seems justified on the basis not only of practicality at point of care and likelihood of UTI, but also sparing individuals at low to no risk of scarring from invasive urine collection. Moreover, using the urinalysis as a screen permits selecting individuals for antimicrobial treatment 24 hours sooner than if clinicians were to wait for culture results before treating. The urinalysis provides a practical window for clinicians to render prompt treatment. And Chaudhari et al. provide valuable assistance for interpreting the results of automated urinalyses.
Kenneth B. Roberts, MD , is a professor of therapeutic radiology at Yale University, New Haven, Conn. He did not report any relevant financial disclosures. These comments are excerpted from a commentary that accompanied Dr. Chaudhari and his associates’ study ( Pediatrics. 2016;138(5):e20162877 ).
FROM PEDIATRICS
Key clinical point: The optimal pyuria threshold for presumptively diagnosing UTI in young infants varies with urine concentration.
Major finding: UTI can be presumptively diagnosed at a pyuria threshold of at least 3 WBC/HPF in dilute urine and at least 6 WBC/HPF in concentrated urine.
Data source: Retrospective cross-sectional study of 2,700 infants younger than 3 months between May 2009 and December 2014.
Disclosures: No external funding for this study; authors did not report any relevant financial disclosures.
Treatment of depression – nonpharmacologic vs. pharmacologic
Major depressive disorder (MDD) affects 16% of adults in the United States at some point in their lives. It is one of the most important causes of disability, time off from work, and personal distress, accounting for more than 8 million office visits per year.
Recent information shows that while 8% of the population screens positive for depression, only a quarter of those with depression receive treatment. Most patients with depression are cared for by primary care physicians, not psychiatrists.1 It is important that primary care physicians are familiar with the range of evidence-based treatments for depression and their relative efficacy. Most patients with depression receive antidepressant medication and less than one-third of patients receive some form of psychotherapy.1 The American College of Physicians guideline reviews the evidence regarding the relative efficacy and safety of second-generation antidepressants and nonpharmacologic treatment of depression.2
Outcomes evaluated in this guideline include response, remission, functional capacity, quality of life, reduction of suicidality or hospitalizations, and harms.
The pharmacotherapy for depression assessed in this guideline comprises the second-generation antidepressants (SGAs), which include the selective serotonin reuptake inhibitors (SSRIs) and the serotonin-norepinephrine reuptake inhibitors (SNRIs), among other newer agents. Previous reviews have shown that the SGAs have similar efficacy and safety, with side effects varying among the different medications; common side effects include constipation, diarrhea, nausea, decreased sexual ability, dizziness, headache, insomnia, and fatigue.
The strongest evidence, rated as moderate quality, comes from trials comparing SGAs with a form of psychotherapy called cognitive-behavioral therapy (CBT). CBT uses the technique of “collaborative empiricism” to question patients’ maladaptive beliefs and, by examining those beliefs, helps patients take on interpretations of reality that are less biased by their initial negative thoughts. Through these “cognitive” exercises, patients begin to adopt healthier, more adaptive approaches to the social, physical, and emotional challenges in their lives. These interpretations are then “tested” in the real world, the behavioral aspect of CBT. Studies ranging from 8 to 52 weeks in patients with MDD showed SGAs and CBT to have equal efficacy with regard to treatment response and remission of depression. Combining an SGA with CBT, compared with an SGA alone, did not improve response or remission rates, though patients who received both therapies showed somewhat better work function.
When SGAs were compared with interpersonal therapy, psychodynamic therapy, St. John’s wort, acupuncture, and exercise, there was low-quality evidence that these interventions performed with equal efficacy to SGAs. Two trials of exercise, compared with sertraline, had moderate-quality evidence showing similar efficacy between the two treatments.
When patients have an incomplete response to initial treatment with an SGA, there is no difference in response or remission between switching from one SGA to another and switching to cognitive therapy. Similarly, with regard to augmentation, adding CBT appears to work as well as augmenting initial SGA therapy with bupropion or buspirone.
The guideline notes that, with regard to adverse effects, discontinuation rates for SGAs and CBT are similar, but CBT likely has fewer side effects. It is also important to recognize that CBT is associated with a lower relapse rate than SGAs, presumably because the skill set developed in learning CBT can continue to be used long term.
The bottom line
Most patients who experience depression are cared for by their primary care physician. Treatments for depression include psychotherapy, complementary and alternative medicine (CAM), exercise, and pharmacotherapy. After discussion with the patient, the American College of Physicians recommends choosing either cognitive-behavioral therapy or second-generation antidepressants when treating depression.
References
1. Olfson M, Blanco C, Marcus SC. Treatment of Adult Depression in the United States. JAMA Intern Med. 2016 Oct;176(10):1482-91.
2. Qaseem A, et al. Nonpharmacologic Versus Pharmacologic Treatment of Adult Patients With Major Depressive Disorder: A Clinical Practice Guideline From the American College of Physicians. Ann Intern Med. 2016 Mar 1;164:350-59.
Dr. Skolnik is associate director of the family medicine residency program at Abington (Pa.) Memorial Hospital and professor of family and community medicine at Temple University in Philadelphia. Aaron Sutton is a behavioral therapy consultant in the family medicine residency program at Abington Memorial Hospital.
Myth of the Month: Does nitroglycerin response predict coronary artery disease?
A 55-year-old man presents to the emergency department with substernal chest pain. The pain has occurred off and on over the past 2 hours. He has no family history of coronary artery disease. He has no history of diabetes, hypertension, or cigarette smoking. His most recent total cholesterol was 220 mg/dL (HDL, 40; LDL, 155). Blood pressure is 130/70. An ECG obtained on arrival is unremarkable. When he reached the ED, he received a nitroglycerin tablet with resolution of his pain within 4 minutes.
What is the most accurate statement?
A. The chance of CAD in this man over the next 10 years was 8% before his symptoms and is now greater than 20%.
B. The chance of CAD in this man over the next 10 years was 8% and is still 8%.
C. The chance of CAD in this man over the next 10 years was 15% before his symptoms and is now close to 100%.
D. The chance of CAD in this man over the next 10 years was 15% before his symptoms and is now close to 50%.
For years, giving nitroglycerin to patients who present with chest pain has been considered a good therapy, and the response to the medication has been considered a sign that the pain was likely due to cardiac ischemia. Is there evidence that this is true?
The first study to examine this question was a retrospective review of 223 patients who presented to the ED over a 5-month period with ongoing chest pain.1 The investigators looked at patients who had ongoing chest pain in the ED, received nitroglycerin, and received no therapy other than aspirin within 10 minutes of the nitroglycerin. Nitroglycerin response was compared with the final diagnosis of cardiac versus noncardiac chest pain.
Of the patients with a final determination of cardiac chest pain, 88% had a nitroglycerin response, whereas 92% of the patients with noncardiac chest pain had a nitroglycerin response (P = .50).
Deborah B. Diercks, MD, and her colleagues looked at improvement in chest pain scores in the ED in patients treated with nitroglycerin and whether it correlated with a cardiac etiology of chest pain.2 The study was a prospective, observational study of 664 patients in an urban tertiary care ED over a 16-month period. An 11-point numeric chest pain scale was assessed and recorded by research assistants before and 5 minutes after receiving nitroglycerin. The scale ranged from 0 (no pain) to 10 (worst pain imaginable).
A final diagnosis of a cardiac etiology for chest pain was found in 18% of the patients in the study. Of the patients who had cardiac-related chest pain, 20% had no reduction in pain with nitroglycerin, compared with 19% of the patients without cardiac-related chest pain. Complete or significant reduction in chest pain occurred with nitroglycerin in 31% of patients with cardiac chest pain and 27% of the patients without cardiac chest pain (P = .76).
Two other studies with similar designs showed similar results. Robert Steele, MD, and his colleagues studied 270 patients in a prospective observational cohort study of patients with chest pain presenting to an urban ED.3 Patients presenting to the ED with active chest pain who received nitroglycerin were enrolled.
In this study, the sensitivity of nitroglycerin relief for identifying cardiac chest pain was 72% and the specificity was 37%, yielding a positive likelihood ratio for coronary artery disease, given a nitroglycerin response, of 1.1 (0.96-1.34).
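For readers who want to see where that figure comes from, the likelihood ratio follows directly from the reported sensitivity and specificity, and a ratio this close to 1 barely moves the probability of coronary disease. The short sketch below uses only the numbers quoted above plus an assumed 50% pre-test probability for illustration:

```python
# Illustrative sketch only: derive the positive likelihood ratio from the
# reported sensitivity (72%) and specificity (37%), then show how little a
# ratio near 1 shifts an assumed 50% pre-test probability of coronary disease.

sensitivity, specificity = 0.72, 0.37
lr_positive = sensitivity / (1.0 - specificity)    # ~1.14, i.e., the reported ~1.1

pre_test = 0.50                                    # assumed pre-test probability
pre_odds = pre_test / (1.0 - pre_test)
post_odds = pre_odds * lr_positive
post_test = post_odds / (1.0 + post_odds)

print(round(lr_positive, 2), round(post_test, 2))  # ~1.14 and ~0.53
```

A “test” that moves a 50% probability only to about 53% adds essentially nothing diagnostically, which is consistent with the conclusion of the meta-analysis discussed below.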
In another prospective, observational cohort study, 459 patients who presented to an ED with chest pain were evaluated for response to nitroglycerin as a marker for ischemic cardiac disease.4 In this study, presence of ischemic cardiac disease was defined as diagnosis in the ED or during a 4-month follow-up period. Nitroglycerin relieved chest pain in 35% of patients who had coronary disease, whereas 41% of patients without coronary disease had a nitroglycerin response. This study had a much lower overall nitroglycerin response rate than any of the other studies.
Katherine Grailey, MD, and Paul Glasziou, MD, PhD, published a meta-analysis of nitroglycerin use in the diagnosis of chest pain, using the studies referenced above. They concluded that, in the acute setting, nitroglycerin response is not a reliable “test of treatment” for the diagnosis of coronary artery disease.5
The high response rate to nitroglycerin in the noncoronary groups in these studies may be due to a strong placebo effect, to nitroglycerin relieving pain caused by esophageal spasm, or both. The lack of specificity of pain relief in response to nitroglycerin makes it an unhelpful diagnostic test. Note that all of these studies were conducted in the acute, ED setting. In the case presented at the beginning of this article, the patient’s response to nitroglycerin would not change the probability that he has coronary artery disease.
References
1. Am J Cardiol. 2002 Dec 1;90(11):1264-6.
2. Ann Emerg Med. 2005 Jun;45(6):581-5.
3. CJEM. 2006 May;8(3):164-9.
4. Ann Intern Med. 2003 Dec 16;139(12):979-86.
5. Emerg Med J. 2012 Mar;29(3):173-6.
Dr. Paauw is professor of medicine in the division of general internal medicine at the University of Washington, Seattle, and he serves as third-year medical student clerkship director at the University of Washington. Contact Dr. Paauw at dpaauw@uw.edu .
Dengue vaccine beneficial only in moderate to high transmission settings
Pediatric patients with previous natural exposure to dengue virus benefit from the dengue virus vaccine, while vaccination of seronegative patients leads to an increased risk for hospitalization because of dengue, according to the results of a mathematical model simulation.
Because of the first approved dengue vaccine’s highly variable efficacy rates among pediatric patients, the vaccine should only be used in moderate to high transmission settings, the investigators who designed the model concluded in a paper published in Science.
Dengvaxia, developed by Sanofi Pasteur, is a recombinant chimeric live attenuated dengue virus vaccine built on a yellow fever vaccine backbone. The vaccine’s development was “considerably more challenging than for other Flavivirus infections because of the immunological interactions between the four dengue virus serotypes and the risk of immune-mediated enhancement of disease,” which causes secondary infections to lead to more severe disease, Neil Ferguson, PhD, of Imperial College London and his associates wrote (Science. 2016 Sep 2;353:1033-6. doi: 10.1126/science.aaf9590).
Despite the complexity of the virus and vaccine, Dengvaxia was recently approved for use in six countries, and two large multicenter phase III clinical trials recently concluded. Investigators for the trials, which involved over 30,000 children in Southeast Asia and Latin America, reported an overall vaccine efficacy of about 60% against symptomatic dengue disease. However, the vaccine’s efficacy varied with the severity of dengue infection and with the age and serostatus of the patient at the time of vaccination. Investigators for both trials reported higher efficacy against severe infection and in patients who were seropositive for dengue virus (indicating previous exposure to the virus) at the time of vaccination. In addition, investigators for both trials reported lower vaccine efficacy in younger patients, a pattern “consistent with reduced efficacy in individuals who have not lived long enough to experience a natural infection,” the authors noted.
In an effort to provide guidance for future clinical trials and to predict the impact of wide-scale use of Dengvaxia, investigators developed a mathematical model of dengue transmission based on data from the two trials.
The model confirmed that secondary infections were nearly twice as likely to cause symptomatic infection, compared with primary and postsecondary infections.
In a highly important result, the model simulation showed that seropositive recipients always gained a substantial benefit – more than a 90% reduction in the risk of hospitalization because of dengue – from vaccination. However, among seronegative recipients, the vaccine initially induced near-perfect protection, but this protection rapidly decayed (mean duration, 7 months). Moreover, the model showed that seronegative recipients who received the vaccine were at an increased risk for hospitalization with dengue.
“This is true both in the short term and in the long term and raises fundamental issues about individual versus population benefits of vaccination,” investigators wrote. “Individual serological testing, if feasible, might radically improve the benefit-risk trade-off.”
The model also demonstrated that the optimal age for vaccination depends on the transmission intensity rate in a region where a child lives. In high-transmission settings, the optimal age to target for vaccination can be 9 years or younger, and as intensity of transmission decreases, optimal age of vaccination should increase, according to investigators.
The study was funded by the UK Medical Research Council, the UK National Institute of Health Research, the National Institutes of Health, and the Bill and Melinda Gates Foundation. Authors did not report any relevant disclosures.
jcraig@frontlinemedcom.com
On Twitter @jessnicolecraig
FROM SCIENCE
Key clinical point: Dengue vaccination benefits children with previous natural exposure to dengue virus but may increase the risk of hospitalization for dengue in seronegative children.
Major finding: The vaccine should be used only in moderate to high transmission settings; in high-transmission settings, the optimal age to target for vaccination is 9 years or younger.
Data source: Mathematical model simulation based on two large, multicenter, phase III clinical trials.
Disclosures: This study was funded by the UK Medical Research Council, the UK National Institute of Health Research, the National Institutes of Health, and the Bill and Melinda Gates Foundation. Authors did not report any relevant disclosures.
VIDEO: Open, robotic, laparoscopic approaches equally effective in pancreatectomy
WASHINGTON – Minimally invasive surgery – whether robotic or laparoscopic – is just as effective as open surgery in pancreatectomy.
Both minimally invasive approaches had perioperative and oncologic outcomes that were similar to those of open approaches, as well as to each other, Katelin Mirkin, MD, reported at the annual clinical congress of the American College of Surgeons. And while minimally invasive surgery (MIS) techniques were associated with a slightly faster move to adjuvant chemotherapy, survival outcomes with all three surgical approaches were similar.
Dr. Mirkin, a surgery resident at Penn State Milton S. Hershey Medical Center, Hershey, Pa., plumbed the National Cancer Database for patients with stage I-III pancreatic cancer who were treated by surgical resection from 2010 to 2012. Her cohort comprised 9,047 patients; of these, 7,924 were treated with open surgery, 992 with laparoscopic surgery, and 131 with robotic surgery. She examined a number of factors including lymph node harvest and surgical margins, length of stay and time to adjuvant chemotherapy, and survival.
Patients who had MIS were older (67 vs. 66 years) and more often treated at an academic center, but otherwise there were no significant baseline differences.
Dr. Mirkin first compared the open surgeries with MIS. There were no significant associations between surgical approach and cancer stage. However, distal resections were significantly more likely to be performed with MIS, and Whipple procedures with an open approach. There were also more open than MIS total resections.
MIS was more likely than open surgery to achieve negative surgical margins (79% vs. 75%), while open surgery was more likely to result in positive margins (22% vs. 19%).
Perioperative outcomes favored MIS approaches for all types of surgery, with a mean overall stay of 9.5 days vs. 11.3 days for open surgery. The mean length of stay for a distal resection was 7 days for MIS vs. 8 for open. For a Whipple procedure, the mean stay was 10.7 vs. 11.9 days. For a total resection, it was 10 vs. 11.8 days.
MIS was also associated with a significantly shorter time to the initiation of adjuvant chemotherapy overall (56 vs. 59 days). For a Whipple, time to chemotherapy was 58 vs. 60 days, respectively. For a distal resection, it was 52 vs. 56 days, and for a total resection, 52 vs. 58 days.
Neither approach offered a survival benefit over the other, Dr. Mirkin noted. For stage I cancers, less than 50% of MIS patients and less than 25% of open patients were alive by 50 months. For those with stage II tumors, less than 25% of each group was alive by 40 months. For stage III tumors, the 40-month survival rates were about 10% for MIS patients and 15% for open patients.
Dr. Mirkin then examined perioperative, oncologic, and survival outcomes among those who underwent laparoscopic and robotic surgeries. There were no demographic differences between these groups.
Oncologic outcomes were almost identical with regard to the number of positive regional nodes harvested (six), and surgical margins. Nodes were negative in 82% of robotic cases vs. 78% of laparoscopic cases and positive in 17.6% of robotic cases and 19.4% of laparoscopic cases.
Length of stay was significantly shorter with the laparoscopic approach overall (9.4 vs. 10 days), particularly for distal resection (7 vs. 10 days). However, there were no differences in length of stay for any other surgery type, nor was there any difference in time to adjuvant chemotherapy.
Survival outcomes were similar as well. For stage I cancers, 40-month survival was about 40% in the laparoscopic group and 25% in the robotic group. For stage II cancers, 40-month survival was about 15% and 25%, respectively. For stage III tumors, 20-month survival was near 0 in the robotic group and about 25% in the laparoscopic group; by 40 months, almost all patients were deceased.
A multivariate survival analysis controlled for age, sex, race, comorbidities, facility type and location, surgery type, surgical margins, pathologic stage, and systemic therapy. It found only one significant association: Patients with 12 or more lymph nodes harvested were 19% more likely to die than those with fewer than 12 nodes harvested.
Time to chemotherapy (longer or shorter than 57 days) did not significantly impact survival, Dr. Mirkin said.
msullivan@frontlinemedcom.com
On Twitter @alz_gal
AT THE ACS CLINICAL CONGRESS
Key clinical point: Minimally invasive pancreatectomy, whether laparoscopic or robotic, yields perioperative, oncologic, and survival outcomes comparable to those of open surgery.
Major finding: For stage I cancers, less than 50% of minimally invasive surgery patients and less than 25% of open surgery patients were alive by 50 months. For those with stage II tumors, less than 25% of each group was alive by 40 months.
Data source: The database review comprised 9,047 cases.
Disclosures: Dr. Mirkin had no financial disclosures.
Pregabalin reduces pain in IBS patients
LAS VEGAS – Pregabalin reduced abdominal pain in patients with irritable bowel syndrome (IBS) and moderate to severe abdominal pain, according to a study presented at the annual meeting of the American College of Gastroenterology.
Antispasmodics and neuromodulators are commonly used to treat such patients, but a significant number don’t respond to these agents, and opioids carry risks of addiction.
The drug makes sense for IBS patients experiencing significant pain, according to Yuri Saito, MD, of the department of medicine and a consultant in the division of gastroenterology at the Mayo Clinic, Rochester, Minn., who presented the research. She noted that pregabalin is approved by the Food and Drug Administration for fibromyalgia, which occurs in many IBS patients. IBS patients also frequently experience anxiety, which can exacerbate symptoms. Pregabalin is not approved for anxiety but is often prescribed off label. “We thought there were multiple reasons why pregabalin would potentially be effective in IBS,” Dr. Saito said in an interview.
Patients taking pregabalin (n = 41) had lower Bowel Symptom Scale (BSS) pain scores than did those taking placebo (n = 44; 25 vs. 42; P = .008), as well as lower overall BSS severity scores at weeks 9-12 (26 vs. 42; P = .009). BSS diarrhea scores were lower in the pregabalin group (17 vs. 32; P = .049), as were BSS bloating scores (29 vs. 44; P = .016).
The study focused on patients with moderate to severe pain, who had experienced three or more pain attacks in a month, and at least one attack during a 2-week screening period. The pregabalin dosage began at 75 mg twice per day and was ramped up to 225 mg twice per day. That dosage was maintained from day 7 through week 12.
Somewhat disappointingly, the researchers found no difference in quality of life measures, but the presence of fibromyalgia may have complicated those measures, Dr. Gerson said.
Thirty-two percent of subjects in the pregabalin arm experienced dizziness, compared with 5% in the placebo group (P =.01). Other side effects included blurred vision (15% vs. 2%; P =.05) and feeling high or tipsy (10% vs. 0%; P =.05).
The results are encouraging and provide an additional treatment option. “I think it’s probably useful, but mainly in patients with diarrhea-prominent IBS,” said Dr. Gerson.
Dr. Saito was more effusive: “The take-home message is that, for patients with moderate to severe pain who have not responded to antispasmodics or other neuromodulators, pregabalin may be useful as an alternate modality.”
Dr. Saito is an adviser or board member with Commonwealth Labs, Salix, and Synergy. Dr. Gerson is on Allergan’s speakers bureau.
AT ACG 2016
Key clinical point: Pregabalin may be an option for patients with IBS and moderate to severe abdominal pain who have not responded to antispasmodics or other neuromodulators.
Major finding: In a pilot study, pregabalin reduced pain scores and diarrhea in patients with IBS and moderate to severe abdominal pain.
Data source: A randomized, placebo controlled clinical trial.
Disclosures: The study was funded by Pfizer. Dr. Saito is an adviser or board member with Commonwealth Labs, Salix, and Synergy. Dr. Gerson is on Allergan’s speakers bureau.
Genetic risk score for low vitamin D may affect MS relapse rate
BALTIMORE – A genetic scoring system for identifying individuals at high risk for low vitamin D levels also detected multiple sclerosis patients with an increased risk for relapse in a multicenter cohort study.
The findings could have clinical significance in multiple sclerosis (MS) treatment and patient counseling, Jennifer S. Graves, MD, PhD, of the University of California, San Francisco, said in a brief oral and poster presentation of the study at the annual meeting of the American Neurological Association.
The investigators compared the SNP profile of a discovery cohort of 181 patients with MS or high-risk clinically isolated syndrome, enrolled at two pediatric MS centers in California between 2006 and 2011, against a replication cohort of 110 patients of comparable age, race, and median serum vitamin D level who were enrolled at nine MS centers elsewhere in the United States from 2011 to 2015.
Three of the 29 candidate SNPs were strongly associated with vitamin D levels in the discovery cohort after a statistical correction for the individual influence of each variant. The researchers used these three SNPs to generate risk scores for vitamin D levels. A comparison of the lowest and highest scores revealed a linear association with vitamin D levels: the highest scores were associated with serum vitamin D levels that were nearly 15 ng/mL lower in both the discovery and replication cohorts (P = .00000052 and P = .002, respectively).
The risk of MS relapse for individuals with the highest risk score in the discovery cohort was nearly twice as high as it was for individuals with the lowest risk score (hazard ratio, 1.94; 95% confidence interval, 1.19-3.15; P = .007).
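The “94% greater” relative increase cited in the summary box below follows directly from this hazard ratio; as a rough reading aid (not an additional study result, and ignoring the confidence interval):

$$\text{excess hazard} = (\mathrm{HR} - 1) \times 100\% = (1.94 - 1) \times 100\% = 94\%$$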
“A genetic score of three functional SNPs captures risk of low vitamin D level and identifies those who may be at risk of relapse related to this risk factor. These findings support a causal association of vitamin D with relapse rate,” Dr. Graves said.
The study may potentially be important beyond MS. “This risk score may also have some utility in other disease states where vitamin D deficiency may be contributing to disease course,” she said.
The study was funded by The Race to Erase MS, the National Multiple Sclerosis Society, and the National Institute of Neurological Disorders and Stroke. Dr. Graves had no disclosures.
AT ANA 2016
Key clinical point: A genetic risk score for low serum vitamin D may help identify MS patients at increased risk for relapse.
Major finding: The risk of MS relapse was 94% greater for individuals with the highest genetic risk score for low serum vitamin D level, compared with those who had the lowest risk score.
Data source: Databases of patients enrolled at two pediatric MS centers in California and nine national MS centers.
Disclosures: The study was funded by The Race to Erase MS, the National Multiple Sclerosis Society, and the National Institute of Neurological Disorders and Stroke. Dr. Graves had no disclosures.
CD64 validated as biomarker for pediatric Crohn’s disease
MONTREAL – Blood levels of a neutrophil receptor protein, CD64, proved to be a reliable, noninvasive marker of both Crohn’s disease activity and the risk for relapse from remission in children and adolescents in a pair of single-center studies with a total of 140 patients.
An elevation in blood levels of CD64, a marker for inflammation, in asymptomatic patients with Crohn’s disease “is a significant risk factor for treatment failure or complications during infliximab maintenance,” Phillip Minar, MD, said at the World Congress of Pediatric Gastroenterology, Hepatology and Nutrition. Although Dr. Minar acknowledged that larger validation studies are still needed, neutrophil CD64 levels can potentially serve as a “treat-to-target” biomarker of disease status in selected pediatric Crohn’s disease patients.
Dr. Minar cautioned that in some pediatric patients with Crohn’s disease CD64 is not an effective marker for inflammation and a change in their Crohn’s disease status. In his study, the sensitivity of an elevated CD64 level was 64% as a surrogate marker for mucosal damage seen with endoscopy.
“I get a CD64 level at the time we diagnose Crohn’s disease. If it is elevated, then I will follow it; if it is not elevated, then I won’t use it for that patient. It’s patient specific,” he explained in an interview.
Dr. Minar and his associates first established the prognostic value of elevated CD64 levels in patients with Crohn’s disease in a study of 208 pediatric patients with inflammatory bowel disease and 43 controls (Inflamm Bowel Dis. 2014 Jun;20[6]:1037-48). His new validation study included 105 pediatric patients with Crohn’s disease, of whom 54 were newly diagnosed. Among the 51 previously diagnosed patients, 18 had inactive disease. The patients averaged 14 years of age, and all 105 underwent endoscopy to directly assess their Crohn’s disease activity.
The results showed clear and statistically significant correlations among the average CD64 levels in the patients and the blinded endoscopic evaluations that categorized the patients as having inactive Crohn’s disease, mild disease, or moderate to severe disease. The results also suggested that a useful dichotomous cut point for CD64 was an index of 1. Among patients with a level above 1, diagnostic sensitivity for mucosal damage was 64% and specificity was 100%, he reported. In these studies as well as their routine practice, Dr. Minar and his associates use a commercially available immunoassay for quantifying blood levels of CD64.
The second study he reported assessed the ability of CD64 levels to predict a patient’s status on infliximab (Remicade) maintenance treatment. This study enrolled 35 pediatric patients who averaged about 15 years of age, had been diagnosed with Crohn’s disease for an average of about 2 years, and were in remission after having received at least four serial infliximab doses. During 1 year of follow-up, 15 patients relapsed and 21 remained in remission.
The researchers measured CD64 levels at baseline and found that, during the next year, those who had a CD64 index of less than 1 at baseline had a relapse rate of less than 40% during follow-up, while those with a CD64 index of 1 or greater at baseline had a relapse rate of more than 70% during follow-up, a statistically significant difference between the two subgroups. The analysis also showed that lower CD64 levels linked with higher trough levels of infliximab.
A multivariate analysis showed that a CD64 index level of 1 or greater at baseline linked with a statistically significant, 4.5-fold increased risk for relapse, compared with patients with a baseline CD64 level below 1. This analysis identified three additional significant correlates of an elevated risk for relapse: nonwhite race, a baseline serum albumin level of less than 3.9 g/dL, and a baseline infliximab serum level of less than 5 mcg/mL.
The CD64 test that his group has been using typically has a turnaround time of about an hour during the work week and costs less than $100 per patient per test. Blood levels of CD64 are stable for 48 hours when refrigerated, so specimens can sit over a weekend without compromising results. The Cincinnati group plans to change soon to an in-house test that will cost about $10-$20 per patient per test, Dr. Minar said.
Dr. Minar had no relevant financial disclosures.
mzoler@frontlinemedcom.com
On Twitter @mitchelzoler
We are in desperate need of more reliable biomarkers of disease activity in patients with Crohn’s disease. Identifying effective noninvasive biomarkers has been a holy grail that we have pursued for many years because what we currently have is imperfect. CD64 appears to be a very reliable and specific biomarker of disease activity.
I think pediatric gastroenterologists will pay attention to Dr. Minar’s report. The entire community is very interested in this and will be watching the evolution of the science behind CD64 assessment.
John A. Barnard, MD, is chief of pediatrics at Nationwide Children’s Hospital and professor and chairman of pediatrics at Ohio State University, both in Columbus. He had no relevant disclosures. He made these comments in an interview.
AT WCPGHAN 2016
Key clinical point: Results from two studies further validated neutrophil CD64 as a highly specific biomarker for Crohn’s disease severity in children and adolescents and suggested that CD64 could serve as a treat-to-target guide for infliximab treatment.
Major finding: During infliximab maintenance, relapses occurred in fewer than 40% of pediatric Crohn’s disease patients with low CD64 and in more than 70% with high CD64.
Data source: A single-center study of 105 pediatric patients with Crohn’s disease to assess disease severity correlates, and 35 patients in remission on infliximab to assess predicted efficacy.
Disclosures: Dr. Minar had no relevant financial disclosures.
Beta-blockers curb death risk in patients with primary prevention ICD
ROME – Beta-blocker therapy reduces the risks of all-cause mortality as well as cardiac death in patients with a left ventricular ejection fraction below 35% who get an implantable cardioverter-defibrillator for primary prevention, Laurent Fauchier, MD, PhD, reported at the annual congress of the European Society of Cardiology.
Some physicians have recently urged reconsideration of current guidelines recommending routine use of beta-blockers for prevention of cardiovascular events in certain groups of patients with coronary artery disease, including those with chronic heart failure who have received an ICD for primary prevention of sudden death. And indeed it’s true that the now–relatively old randomized trials of ICDs for primary prevention in patients with chronic heart failure don’t provide any real evidence that beta-blockers reduce mortality in this setting. In fact, the guideline recommendation for beta-blockade has been based upon expert opinion. This was the impetus for Dr. Fauchier and coinvestigators to conduct a large retrospective observational study in a contemporary cohort of heart failure patients who received an ICD for primary prevention during a recent 10-year period at the 12 largest centers in France.
Fifteen percent of the 3,975 French ICD recipients did not receive a beta-blocker. They differed from those who did in that they were on average 2 years older, had an absolute 5% lower ejection fraction, and were more likely to also receive cardiac resynchronization therapy. Propensity score matching based on these and 19 other baseline characteristics enabled investigators to assemble a cohort of 541 closely matched patient pairs, explained Dr. Fauchier, professor of cardiology at Francois Rabelais University in Tours, France.
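For readers unfamiliar with the technique, the sketch below illustrates what 1:1 propensity-score matching generally involves: estimating each patient's probability of receiving the treatment from baseline characteristics, then pairing treated and untreated patients with similar probabilities. It is a minimal, generic illustration with hypothetical variable names and simulated data, not the investigators' implementation.

```python
# Generic sketch of 1:1 propensity-score matching (simulated data,
# hypothetical variable names; not the investigators' code).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "ejection_fraction": rng.normal(28, 5, n),
    "crt": rng.integers(0, 2, n),           # cardiac resynchronization therapy (0/1)
    "beta_blocker": rng.integers(0, 2, n),  # "treatment" indicator (0/1)
})
covariates = ["age", "ejection_fraction", "crt"]

# Step 1: estimate each patient's propensity score, i.e. the modeled
# probability of being on a beta-blocker given the baseline covariates.
model = LogisticRegression(max_iter=1000).fit(df[covariates], df["beta_blocker"])
df["pscore"] = model.predict_proba(df[covariates])[:, 1]

# Step 2: greedy 1:1 nearest-neighbor matching on the propensity score,
# without replacement, pairing each treated patient with the closest control.
treated = df[df["beta_blocker"] == 1]
controls = df[df["beta_blocker"] == 0].copy()
pairs = []
for idx, row in treated.iterrows():
    if controls.empty:
        break
    j = (controls["pscore"] - row["pscore"]).abs().idxmin()
    pairs.append((idx, j))
    controls = controls.drop(j)

print(f"Assembled {len(pairs)} matched treated/control pairs")
```

Outcomes such as all-cause mortality would then be compared within the matched pairs rather than across the full, unbalanced cohorts.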
During a mean follow-up of 3.2 years, the risk of all-cause mortality in ICD recipients not on a beta-blocker was 34% higher than in those who were. Moreover, their risk of cardiac death was 50% greater.
In contrast, beta-blocker therapy had no effect on the risks of sudden death or of appropriate or inappropriate shocks.
The finding that beta-blocker therapy doesn’t prevent sudden death in patients with an ICD for primary prevention has not previously been reported. However, it makes sense. The device prevents such events so effectively that a beta-blocker adds nothing further in that regard, according to Dr. Fauchier.
“Beta-blockers should continue to be used widely, as currently recommended, for heart failure in the specific setting of patients with prophylactic ICD implantation. You do not have the benefit for prevention of sudden death, but you still have all the benefit from preventing cardiac death,” the electrophysiologist concluded.
This study was supported by French governmental research grants. Dr. Fauchier reported serving as a consultant to Bayer, Pfizer, Boehringer Ingelheim, Medtronic, and Novartis.
AT THE ESC CONGRESS 2016
Key clinical point: Beta-blocker therapy reduces cardiac and all-cause mortality, though not sudden death, in heart failure patients who receive an ICD for primary prevention.
Major finding: Patients with heart failure with reduced ejection fraction who received an ICD for primary prevention and were not on a beta-blocker were at an adjusted 50% increased risk for cardiac death and 34% increased risk for all-cause mortality during 3.2 years of follow-up, but they were at no increased risk for sudden death.
Data source: A retrospective observational study of all of the nearly 4,000 patients who received a primary prevention ICD at the 12 largest French centers during a recent 10-year period.
Disclosures: This study was supported by French governmental research funds. The presenter reported serving as a consultant to Bayer, Pfizer, Boehringer Ingelheim, Medtronic, and Novartis.
High-risk deliveries much more likely to be C-sections
The cesarean section rate for deliveries with a medical indication for the procedure listed in the record is 4.7 times as high as the rate for low-risk deliveries, according to the Agency for Healthcare Research and Quality.
The C-section rate for non–low-risk deliveries (deliveries with a medical indication that excluded them from the low-risk category) was 76.1% in 2013, compared with 16.2% for low-risk deliveries. Conversely, 23.9% of non–low-risk deliveries that year were performed vaginally, compared with 83.8% of low-risk deliveries, the AHRQ reported.
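The 4.7-fold difference cited above follows directly from these two rates; as a quick arithmetic check (not an additional AHRQ figure):

$$\frac{76.1\%}{16.2\%} \approx 4.7$$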
The AHRQ analysis combined a new definition of low-risk delivery developed by the Society for Maternal-Fetal Medicine (Am J Obstet Gynecol. 2016;214[2]:153-63) with data from the State Inpatient Databases of 43 states and the District of Columbia. This approach allowed AHRQ researchers to apply the new definition to actual counts of deliveries from 2,719 hospitals – representing 95% of the population – instead of national estimates based on a much smaller sample.