How coffee and cigarettes can affect the response to psychopharmacotherapy
When a patient who smokes enters a tobacco-free medical facility and has access to caffeinated beverages, they might experience toxicity from many pharmaceuticals and from caffeine. Similarly, if a patient is discharged from a smoke-free environment with a newly adjusted medication regimen and resumes smoking or caffeine consumption, alterations in enzyme activity might reduce the therapeutic efficacy of prescribed medicines. These effects are the result of alterations in the hepatic cytochrome P450 (CYP) enzyme system.
Taking a careful history of tobacco and caffeine use, and knowing the effects that these substances will have on specific medications, will help guide treatment and management decisions.
The role of CYP enzymes
CYP hepatic enzymes detoxify a variety of environmental agents into water-soluble compounds that are excreted in urine. CYP1A2 metabolizes 20% of drugs handled by the CYP system and comprises 13% of all the CYP enzymes expressed in the liver. The wide interindividual variation in CYP1A2 enzyme activity is influenced by a combination of genetic, epigenetic, ethnic, and environmental variables.1
Influence of tobacco on CYP
The polycyclic aromatic hydrocarbons in tobacco smoke induce CYP1A2 and CYP2B6 hepatic enzymes.2 Smokers exhibit increased activity of these enzymes, which results in faster clearance of many drugs, lower concentrations in blood, and diminished clinical response. The Table lists psychotropic medicines that are metabolized by CYP1A2 and CYP2B6. Co-administration of these substrates could decrease the elimination rate of other drugs also metabolized by CYP1A2. Nicotine in tobacco or in nicotine replacement therapies does not play a role in inducing CYP enzymes.
Psychiatric patients smoke at a higher rate than the general population.2 One study found that approximately 70% of patients with schizophrenia and as many as 45% of those with bipolar disorder smoke enough cigarettes (7 to 20 a day) to induce CYP1A2 and CYP2B6 activity.2 Patients who smoke and are given clozapine, haloperidol, or olanzapine show a lower serum concentration than non-smokers; in fact, the clozapine level can be reduced as much as 2.4-fold.2-5 Consequently, patients can experience diminished clinical response to these 3 psychotropics.3
The turnover time for CYP1A2 is rapid (approximately 3 days), and a new CYP1A2 steady-state activity is reached after approximately 1 week,4 which is important to remember when managing inpatients in a smoke-free facility. During acute hospitalization, patients could experience drug toxicity if the outpatient dosage is maintained.5
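The decline of induced enzyme activity after smoking cessation can be sketched as simple first-order decay. The sketch below is illustrative only: the relative activity values are hypothetical, and treating the ~3-day turnover time as a half-life is our assumption, not a clinical parameter.

```python
import math

def cyp1a2_activity(t_days, induced=1.8, baseline=1.0, turnover_days=3.0):
    """Relative CYP1A2 activity t_days after smoking cessation,
    assuming first-order decay of the induced excess toward baseline.
    induced/baseline are hypothetical relative activities; the ~3-day
    turnover time from the text is treated as a half-life."""
    k = math.log(2) / turnover_days
    return baseline + (induced - baseline) * math.exp(-k * t_days)

for day in (0, 3, 7):
    print(day, round(cyp1a2_activity(day), 2))
```

Under these assumptions, activity falls from 1.8 to 1.4 by day 3 and to about 1.16 by day 7 — close to the nonsmoking baseline within approximately 1 week, consistent with the steady-state timing described above.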
When they resume smoking after being discharged on a stabilized dosage of any of the medications listed in the Table, previous enzyme activity rebounds and might reduce the drug level, potentially leading to inadequate clinical response.
Caffeine and other substances
Asking about the patient’s caffeine intake is necessary because coffee consumption is prevalent among smokers, and caffeine is metabolized by CYP1A2. Smokers need to consume as much as 4 times the amount of caffeine as non-smokers to achieve a similar caffeine serum concentration.2 Caffeine can form an insoluble precipitate with antipsychotic medication in the gut, which decreases absorption. Because smoking induces CYP1A2, forced smoking cessation during hospitalization slows caffeine clearance; with ongoing caffeine consumption, this could lead to caffeine toxicity.4,5
Other common inducers of CYP1A2 are insulin, cabbage, cauliflower, broccoli, and charcoal-grilled meat. Cumin and turmeric inhibit CYP1A2 activity, which might explain differences in drug tolerance across population groups. Additionally, certain genetic polymorphisms, with specific ethnic distributions, alter the potential for tobacco smoke to induce CYP1A2.6
Some of these polymorphisms can be genotyped for clinical application.3
Asking about a patient’s tobacco and caffeine use and understanding their interactions with specific medications provides guidance when prescribing antipsychotic medications and adjusting dosage for inpatients and during clinical follow-up care.
Disclosures
The authors report no financial relationships with any company whose products are mentioned in this article or with manufacturers of competing products.
1. Wang B, Zhou SF. Synthetic and natural compounds that interact with human cytochrome P450 1A2 and implications in drug development. Curr Med Chem. 2009;16(31):4066-4218.
2. Lucas C, Martin J. Smoking and drug interactions. Aust Prescr. 2013;36(3):102-104.
3. Eap CB, Bender S, Jaquenoud Sirot E, et al. Nonresponse to clozapine and ultrarapid CYP1A2 activity: clinical data and analysis of CYP1A2 gene. J Clin Psychopharmacol. 2004;24(2):214-219.
4. Faber MS, Fuhr U. Time response of cytochrome P450 1A2 activity on cessation of heavy smoking. Clin Pharmacol Ther. 2004;76(2):178-184.
5. Berk M, Ng F, Wang WV, et al. Going up in smoke: tobacco smoking is associated with worse treatment outcomes in mania. J Affect Disord. 2008;110(1-2):126-134.
6. Zhou SF, Yang LP, Zhou ZW, et al. Insights into the substrate specificity, inhibitors, regulation, and polymorphisms and the clinical impact of human cytochrome P450 1A2. AAPS J. 2009;11(3):481-494.
Brexpiprazole for schizophrenia and as adjunct for major depressive disorder
Brexpiprazole, FDA-approved in July 2015 to treat schizophrenia and as an adjunct for major depressive disorder (MDD) (Table 1), has shown efficacy in 2 phase-III acute trials for each indication.1-6 Although brexpiprazole is a dopamine D2 partial agonist, it differs from aripiprazole, the other available D2 partial agonist, because it is more potent at serotonin 5-HT1A and 5-HT2A receptors and displays less intrinsic activity at D2 receptors,7 which could mean better tolerability.
Clinical implications
Schizophrenia is heterogeneous, and individual response and tolerability to antipsychotics vary greatly8; therefore, new drug options are useful. For MDD, before the availability of brexpiprazole, only 3 antipsychotics were FDA-approved for adjunctive use with antidepressant therapy9; brexpiprazole represents another agent for patients whose depressive symptoms persist after standard antidepressant treatment.
Variables that limit the use of antipsychotics include extrapyramidal symptoms (EPS), akathisia, sedation/somnolence, weight gain, metabolic abnormalities, and hyperprolactinemia. If post-marketing studies and clinical experience confirm that brexpiprazole has an overall favorable side-effect profile regarding these tolerability obstacles, brexpiprazole would potentially have advantages over some other available agents, including aripiprazole.
How it works
In addition to a subnanomolar binding affinity (Ki < 1 nM) to dopamine D2 receptors as a partial agonist, brexpiprazole also exhibits similar binding affinities for serotonin 5-HT1A (partial agonist), 5-HT2A (antagonist), and adrenergic α1B (antagonist) and α2C (antagonist) receptors.7
Brexpiprazole also has high affinity (Ki < 5 nM) for dopamine D3 (partial agonist), serotonin 5-HT2B (antagonist), and 5-HT7 (antagonist) receptors, and for adrenergic α1A (antagonist) and α1D (antagonist) receptors. Brexpiprazole has moderate affinity for histamine H1 receptors (Ki = 19 nM, antagonist), and low affinity for muscarinic M1 receptors (Ki > 1000 nM, antagonist).
Brexpiprazole’s pharmacodynamic profile differs from other available antipsychotics, including aripiprazole. Whether this translates to meaningful differences in efficacy and tolerability will depend on the outcomes of specifically designed clinical trials as well as “real-world” experience. Animal models have suggested amelioration of schizophrenia-like behavior, depression-like behavior, and anxiety-like behavior with brexipiprazole.6
Pharmacokinetics
At 91 hours, brexpiprazole’s half-life is relatively long; a steady-state concentration therefore is attained in approximately 2 weeks.1 In the phase-III clinical trials, brexpiprazole was titrated to target dosages, and therefore the product label recommends the same. Brexpiprazole can be administered with or without food.
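The 2-week figure follows from the standard accumulation relationship for repeated dosing: the fraction of steady state reached after time t is 1 − 0.5^(t/t½). A quick check using only the 91-hour half-life stated above (the function name is ours):

```python
def fraction_of_steady_state(hours_elapsed, half_life_h=91.0):
    """Fraction of eventual steady-state exposure reached after
    hours_elapsed of repeated dosing, from 1 - 0.5**(t / t_half).
    half_life_h=91 is brexpiprazole's reported half-life; the
    formula itself is generic pharmacokinetics."""
    return 1 - 0.5 ** (hours_elapsed / half_life_h)

# Two weeks of dosing (336 h, ~3.7 half-lives) reaches ~92% of steady state
print(round(fraction_of_steady_state(14 * 24), 2))
```

Two weeks corresponds to roughly 3.7 half-lives, i.e., more than 90% of steady state, which is why steady-state concentration is described as attained in approximately 2 weeks.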
In a study of brexpiprazole excretion, after a single oral dose of [14C]-labeled brexpiprazole, approximately 25% and 46% of the administered radioactivity was recovered in urine and feces, respectively. Less than 1% of unchanged brexpiprazole was excreted in the urine, and approximately 14% of the oral dose was recovered unchanged in the feces.
Exposure, as measured by maximum concentration and area under the concentration curve, is dose proportional.
Metabolism of brexpiprazole is mediated principally by cytochrome P450 (CYP) 3A4 and CYP2D6. Based on in vitro data, brexpiprazole shows little or no inhibition of CYP450 isozymes.
Efficacy
FDA approval of brexpiprazole for schizophrenia and for adjunctive use in MDD was based on 4 phase-III pivotal acute clinical trials conducted in adults, 2 studies for each disorder.1-6 These studies are described in Table 2.2-5
Schizophrenia. The primary outcome measure for the acute schizophrenia trials was change on the Positive and Negative Syndrome Scale (PANSS) total scores from baseline to 6-week endpoint. Statistically significant reductions in PANSS total score were observed for brexpiprazole dosages of 2 mg/d and 4 mg/d in one study,2 and 4 mg/d in another study.3 Responder rates also were measured, with response defined as a reduction of ≥30% from baseline in PANSS total score or a Clinical Global Impressions-Improvement score of 1 (very much improved) or 2 (much improved).2,3 Pooling together the available data for the recommended target dosage of brexpiprazole for schizophrenia (2 to 4 mg/d) from the 2 phase-III studies, 45.5% of patients responded to the drug, compared with 31% for the pooled placebo groups, yielding a number needed to treat (NNT) of 7 (95% CI, 5-12).6
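The NNT arithmetic used here is the reciprocal of the absolute difference in response rates, rounded up to the next whole patient. A minimal sketch with the pooled rates quoted above (the function name is ours):

```python
import math

def nnt(rate_drug, rate_placebo):
    """Number needed to treat: reciprocal of the absolute risk
    difference, rounded up to the next whole patient."""
    return math.ceil(1 / (rate_drug - rate_placebo))

# Pooled schizophrenia response rates from the text: 45.5% vs 31%
print(nnt(0.455, 0.31))  # → 7
```

The same computation applies to the NNH figures later in the article, using adverse-event rates in place of response rates.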
Although not described in product labeling, a phase-III 52-week maintenance study demonstrated brexpiprazole’s efficacy in preventing exacerbation of psychotic symptoms and impending relapse in patients with schizophrenia.10 Time from randomization to exacerbation of psychotic symptoms or impending relapse showed a beneficial effect with brexpiprazole compared with placebo (log-rank test: hazard ratio = 0.292, P < .0001). Significantly fewer patients in the brexpiprazole group relapsed compared with placebo (13.5% vs 38.5%, P < .0001), resulting in an NNT of 4 (95% CI, 3-8).
Major depressive disorder. The primary outcome measure for the acute MDD studies was change in Montgomery-Åsberg Depression Rating Scale (MADRS) scores from baseline to 6-week endpoint of the randomized treatment phase. All patients were required to have a history of inadequate response to 1 to 3 treatment trials of standard antidepressants for their current depressive episode. In addition, patients entered the randomized phase only if they had an inadequate response to antidepressant therapy during an 8-week prospective treatment trial of standard antidepressant treatment plus single-blind placebo.
Participants who responded adequately to the antidepressant in the prospective single-blind phase were not randomized, but instead continued on antidepressant treatment plus single-blind placebo for 6 weeks.
The phase-III studies showed positive results for brexpiprazole, 2 mg/d and 3 mg/d, with change in MADRS from baseline to endpoint superior to that observed with placebo.4,5
When examining treatment response, defined as a reduction of ≥50% in MADRS total score from baseline, the NNT vs placebo for response was 12 at all dosages tested; however, the NNT vs placebo for remission (defined as MADRS total score ≤10 and ≥50% improvement from baseline) ranged from 17 to 31 and was not statistically significant.6 When the results for brexpiprazole, 1 mg/d, 2 mg/d, and 3 mg/d, from the 2 phase-III trials are pooled together, 23.2% of the patients receiving brexpiprazole were responders, vs 14.5% for placebo, yielding an NNT of 12 (95% CI, 8-26); 14.4% of the brexpiprazole-treated patients met remission criteria, vs 9.6% for placebo, resulting in an NNT of 21 (95% CI, 12-138).6
Tolerability
Overall tolerability can be evaluated by examining the percentage of patients who discontinued the clinical trials because of an adverse event (AE). In the acute schizophrenia double-blind trials for the recommended dosage range of 2 to 4 mg, the discontinuation rates were lower overall for patients receiving brexpiprazole compared with placebo.2,3 In the acute MDD trials, 32.6% of brexpiprazole-treated patients and 10.7% of placebo-treated patients discontinued because of AEs,4,5 yielding a number needed to harm (NNH) of 53 (95% CI, 30-235).6
The most commonly encountered AEs for MDD (incidence ≥5% and at least twice the rate for placebo) were akathisia (8.6% vs 1.7% for brexpiprazole vs placebo, and dose-related) and weight gain (6.7% vs 1.9%),1 with NNH values of 15 (95% CI, 11-23) and 22 (95% CI, 15-42), respectively.6 The most commonly encountered AE for schizophrenia (incidence ≥4% and at least twice the rate for placebo) was weight gain (4% vs 2%),1 with an NNH of 50 (95% CI, 26-1773).6
Of note, rates of akathisia in the schizophrenia trials were 5.5% for brexpiprazole and 4.6% for placebo,1 yielding an NNH of 112 that was not statistically significant.6 In a 6-week exploratory study,11 the incidence of EPS-related AEs, including akathisia, was lower for brexpiprazole-treated patients (14.1%) than for those receiving aripiprazole (30.3%), an NNT advantage of 7 for brexpiprazole (not statistically significant).
Short-term weight gain appears modest; however, outliers with an increase of ≥7% of body weight were evident in open-label long-term safety studies.1,6 Effects on glucose and lipids were small. Minimal effects on prolactin were observed, and no clinically relevant effects on the QT interval were evident.
Contraindications
The only absolute contraindication for brexpiprazole is known hypersensitivity to brexpiprazole or any of its components. Reactions have included rash, facial swelling, urticaria, and anaphylaxis.1
As with all antipsychotics, and all medications indicated for a depressive disorder:
• there is a bolded boxed warning in the product label regarding increased mortality in geriatric patients with dementia-related psychosis. Brexpiprazole is not approved for treating patients with dementia-related psychosis
• there is a bolded boxed warning in the product label about suicidal thoughts and behaviors in patients age ≤24. The safety and efficacy of brexpiprazole have not been established in pediatric patients.
Dosing
Schizophrenia. The recommended starting dosage for brexpiprazole for schizophrenia is 1 mg/d on Days 1 to 4. Brexpiprazole can be titrated to 2 mg/d on Day 5 through Day 7, then to 4 mg/d on Day 8 based on the patient’s response and ability to tolerate the medication. The recommended target dosage is 2 to 4 mg/d with a maximum recommended daily dosage of 4 mg.
Major depressive disorder. The recommended starting dosage for brexpiprazole as adjunctive treatment for MDD is 0.5 mg or 1 mg/d. Brexpiprazole can be titrated to 1 mg/d, then up to the target dosage of 2 mg/d, with dosage increases occurring at weekly intervals based on the patient’s clinical response and ability to tolerate the agent, with a maximum recommended dosage of 3 mg/d.
Other considerations. For patients with moderate to severe hepatic impairment, or moderate, severe, or end-stage renal impairment, the maximum recommended dosage is 3 mg/d for patients with schizophrenia, and 2 mg/d for patients with MDD.
In general, dosage adjustments are recommended in patients who are known CYP2D6 poor metabolizers and in those taking concomitant CYP3A4 inhibitors or CYP2D6 inhibitors or strong CYP3A4 inducers1:
• for strong CYP2D6 or CYP3A4 inhibitors, administer one-half the usual dosage
• for strong/moderate CYP2D6 with strong/moderate CYP3A4 inhibitors, administer one-quarter of the usual dosage
• for known CYP2D6 poor metabolizers taking strong/moderate CYP3A4 inhibitors, also administer one-quarter of the usual dosage
• for strong CYP3A4 inducers, double the usual dosage and further adjust based on clinical response.
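These adjustments reduce to a single dose multiplier. The sketch below is ours, not from the product label: the function and flag names are hypothetical, and only the factors (one-half, one-quarter, double) come from the rules above. It is illustrative, not a dosing tool; clinical decisions belong to the prescriber.

```python
def brexpiprazole_dose_factor(strong_2d6_or_3a4_inhibitor=False,
                              combined_2d6_and_3a4_inhibition=False,
                              poor_2d6_metabolizer_on_3a4_inhibitor=False,
                              strong_3a4_inducer=False):
    """Multiplier applied to the usual brexpiprazole dosage, encoding
    the label-based adjustments listed above. Illustrative only."""
    if combined_2d6_and_3a4_inhibition or poor_2d6_metabolizer_on_3a4_inhibitor:
        return 0.25   # one-quarter of the usual dosage
    if strong_2d6_or_3a4_inhibitor:
        return 0.5    # one-half of the usual dosage
    if strong_3a4_inducer:
        return 2.0    # double the usual dosage
    return 1.0

print(brexpiprazole_dose_factor(strong_2d6_or_3a4_inhibitor=True))  # → 0.5
```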
In clinical trials for MDD, brexpiprazole dosage was not adjusted for strong CYP2D6 inhibitors (eg, paroxetine, fluoxetine). Therefore, CYP considerations are already factored into general dosing recommendations and brexpiprazole could be administered without dosage adjustment in patients with MDD; however, under these circumstances, it would be prudent to start brexpiprazole at 0.5 mg, which, although “on-label,” represents a low starting dosage. (Whenever 2 drugs are co-administered and 1 agent can alter the metabolism of the other, using smaller increments to the target dosage and possibly waiting longer between dosage adjustments could help avoid potential drug–drug interactions.)
No dosage adjustment for brexpiprazole is required on the basis of sex, race or ethnicity, or smoking status. Although clinical studies did not include patients age ≥65, the product label recommends that in general, dose selection for a geriatric patient should be cautious, usually starting at the low end of the dosing range, reflecting the greater frequency of decreased hepatic, renal, and cardiac function, concomitant diseases, and other drug therapy.
Bottom Line
Brexpiprazole, an atypical antipsychotic, is FDA-approved for schizophrenia and as an adjunct to antidepressants in major depressive disorder. For both indications, brexpiprazole demonstrated positive results compared with placebo in phase-III trials. Brexpiprazole is more potent at serotonin 5-HT1A and 5-HT2A receptors and displays less intrinsic activity at D2 receptors than aripiprazole, which could mean better tolerability.
Related Resources
• Citrome L. Brexpiprazole: a new dopamine D2 receptor partial agonist for the treatment of schizophrenia and major depressive disorder. Drugs Today (Barc). 2015;51(7):397-414.
• Citrome L, Stensbøl TB, Maeda K. The preclinical profile of brexpiprazole: what is its clinical relevance for the treatment of psychiatric disorders? Expert Rev Neurother. In press.
Drug Brand Names
Aripiprazole • Abilify
Brexpiprazole • Rexulti
Fluoxetine • Prozac
Paroxetine • Paxil
Disclosure
Dr. Citrome is a consultant to Alexza Pharmaceuticals, Alkermes, Allergan, Boehringer Ingelheim, Bristol-Myers Squibb, Eli Lilly and Company, Forum Pharmaceuticals, Genentech, Janssen, Jazz Pharmaceuticals, Lundbeck, Merck, Medivation, Mylan, Novartis, Noven, Otsuka, Pfizer, Reckitt Benckiser, Reviva, Shire, Sunovion, Takeda, Teva, and Valeant Pharmaceuticals; and is a speaker for Allergan, AstraZeneca, Janssen, Jazz Pharmaceuticals, Lundbeck, Merck, Novartis, Otsuka, Pfizer, Shire, Sunovion, Takeda, and Teva.
1. Rexulti [package insert]. Rockville, MD: Otsuka; 2015.
2. Correll CU, Skuban A, Ouyang J, et al. Efficacy and safety of brexpiprazole for the treatment of acute schizophrenia: a 6-week randomized, double-blind, placebo-controlled trial. Am J Psychiatry. 2015;172(9):870-880.
3. Kane JM, Skuban A, Ouyang J, et al. A multicenter, randomized, double-blind, controlled phase 3 trial of fixed-dose brexpiprazole for the treatment of adults with acute schizophrenia. Schizophr Res. 2015;164(1-3):127-135.
4. Thase ME, Youakim JM, Skuban A, et al. Adjunctive brexpiprazole 1 and 3 mg for patients with major depressive disorder following inadequate response to antidepressants: a phase 3, randomized, double-blind study [published online August 4, 2015]. J Clin Psychiatry. doi:10.4088/JCP.14m09689.
5. Thase ME, Youakim JM, Skuban A, et al. Efficacy and safety of adjunctive brexpiprazole 2 mg in major depressive disorder: a phase 3, randomized, placebo-controlled study in patients with inadequate response to antidepressants [published online August 4, 2015]. J Clin Psychiatry. doi:10.4088/JCP.14m09688.
6. Citrome L. Brexpiprazole for schizophrenia and as adjunct for major depressive disorder: a systematic review of the efficacy and safety profile for this newly approved antipsychotic—what is the number needed to treat, number needed to harm and likelihood to be helped or harmed? Int J Clin Pract. 2015;69(9):978-997.
7. Maeda K, Sugino H, Akazawa H, et al. Brexpiprazole I: in vitro and in vivo characterization of a novel serotonin-dopamine activity modulator. J Pharmacol Exp Ther. 2014;350(3):589-604.
8. Volavka J, Citrome L. Oral antipsychotics for the treatment of schizophrenia: heterogeneity in efficacy and tolerability should drive decision-making. Expert Opin Pharmacother. 2009;10(12):1917-1928.
9. Citrome L. Adjunctive aripiprazole, olanzapine, or quetiapine for major depressive disorder: an analysis of number needed to treat, number needed to harm, and likelihood to be helped or harmed. Postgrad Med. 2010;122(4):39-48.
10. Hobart M, Ouyang J, Forbes A, et al. Efficacy and safety of brexpiprazole (OPC-34712) as maintenance treatment in adults with schizophrenia: a randomized, double-blind, placebo-controlled study. Poster presented at: the American Society of Clinical Psychopharmacology Annual Meeting; June 22 to 25, 2015; Miami, FL.
11. Citrome L, Ota A, Nagamizu K, Perry P, et al. The effect of brexpiprazole (OPC‐34712) versus aripiprazole in adult patients with acute schizophrenia: an exploratory study. Poster presented at: the Society of Biological Psychiatry Annual Scientific Meeting and Convention; May 15, 2015; Toronto, Ontario, Canada.
Brexpiprazole, FDA-approved in July 2015 to treat schizophrenia and as an adjunct for major depressive disorder (MDD) (Table 1), has shown efficacy in 2 phase-III acute trials for each indication.1-6 Although brexpiprazole is a dopamine D2 partial agonist, it differs from aripiprazole, the other available D2 partial agonist, because it is more potent at serotonin 5-HT1A and 5-HT2A receptors and displays less intrinsic activity at D2 receptors,7 which could mean better tolerability.
Clinical implications
Schizophrenia is heterogeneous, and individual response and tolerability to antipsychotics vary greatly8; therefore, new drug options are useful. For MDD, before the availability of brexpiprazole, only 3 antipsychotics were FDA-approved for adjunctive use with antidepressant therapy9; brexpiprazole represents another agent for patients whose depressive symptoms persist after standard antidepressant treatment.
Variables that limit the use of antipsychotics include extrapyramidal symptoms (EPS), akathisia, sedation/somnolence, weight gain, metabolic abnormalities, and hyperprolactinemia. If post-marketing studies and clinical experience confirm that brexpiprazole has an overall favorable side-effect profile regarding these tolerability obstacles, brexpiprazole would potentially have advantages over some other available agents, including aripiprazole.
How it works
In addition to a subnanomolar binding affinity (Ki < 1 nM) to dopamine D2 receptors as a partial agonist, brexpiprazole also exhibits similar binding affinities for serotonin 5-HT1A (partial agonist), 5-HT2A (antagonist), and adrenergic α1B (antagonist) and α2C (antagonist) receptors.7
Brexpiprazole also has high affinity (Ki < 5 nM) for dopamine D3 (partial agonist), serotonin 5-HT2B (antagonist), and 5-HT7 (antagonist) receptors, and for adrenergic α1A (antagonist) and α1D (antagonist) receptors. Brexpiprazole has moderate affinity for histamine H1 receptors (Ki = 19 nM, antagonist) and low affinity for muscarinic M1 receptors (Ki > 1000 nM, antagonist).
Brexpiprazole’s pharmacodynamic profile differs from that of other available antipsychotics, including aripiprazole. Whether this translates to meaningful differences in efficacy and tolerability will depend on the outcomes of specifically designed clinical trials as well as “real-world” experience. Animal models have suggested amelioration of schizophrenia-like behavior, depression-like behavior, and anxiety-like behavior with brexpiprazole.6
Pharmacokinetics
Brexpiprazole’s half-life is relatively long, at 91 hours; steady-state concentrations therefore are attained in approximately 2 weeks.1 In the phase-III clinical trials, brexpiprazole was titrated to target dosages, and the product label therefore recommends the same approach. Brexpiprazole can be administered with or without food.
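The approximate time to steady state follows from first-order accumulation: after n half-lives, a drug reaches 1 − 0.5^n of its steady-state level. A minimal sketch of that arithmetic (the function name is illustrative, not from any reference):

```python
def fraction_of_steady_state(t_hours: float, half_life_hours: float) -> float:
    """Fraction of the steady-state concentration reached after t hours of
    repeated dosing, assuming first-order (exponential) accumulation."""
    return 1 - 0.5 ** (t_hours / half_life_hours)

# With a 91-hour half-life, ~2 weeks (336 h) of daily dosing reaches ~92%
# of steady state, consistent with "approximately 2 weeks" in the label.
print(round(fraction_of_steady_state(336, 91.0), 2))  # 0.92
```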
In a study of brexpiprazole excretion, after a single oral dose of [14C]-labeled brexpiprazole, approximately 25% and 46% of the administered radioactivity was recovered in urine and feces, respectively. Less than 1% of unchanged brexpiprazole was excreted in the urine, and approximately 14% of the oral dose was recovered unchanged in the feces.
Exposure, as measured by maximum concentration and area under the concentration curve, is dose proportional.
Metabolism of brexpiprazole is mediated principally by cytochrome P450 (CYP) 3A4 and CYP2D6. Based on in vitro data, brexpiprazole shows little or no inhibition of CYP450 isozymes.
Efficacy
FDA approval for brexpiprazole for schizophrenia and for adjunctive use in MDD was based on 4 phase-III pivotal acute clinical trials conducted in adults, 2 studies each for each disorder.1-6 These studies are described in Table 2.2-5
Schizophrenia. The primary outcome measure for the acute schizophrenia trials was change on the Positive and Negative Syndrome Scale (PANSS) total scores from baseline to 6-week endpoint. Statistically significant reductions in PANSS total score were observed for brexpiprazole dosages of 2 mg/d and 4 mg/d in one study,2 and 4 mg/d in another study.3 Responder rates also were measured, with response defined as a reduction of ≥30% from baseline in PANSS total score or a Clinical Global Impressions-Improvement score of 1 (very much improved) or 2 (much improved).2,3 Pooling together the available data for the recommended target dosage of brexpiprazole for schizophrenia (2 to 4 mg/d) from the 2 phase-III studies, 45.5% of patients responded to the drug, compared with 31% for the pooled placebo groups, yielding a number needed to treat (NNT) of 7 (95% CI, 5-12).6
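The NNT figures cited here are the reciprocal of the absolute difference in response rates, conventionally rounded up to the next whole number. A small sketch using the pooled rates reported above (the confidence intervals require the underlying sample sizes and are not reproduced):

```python
import math

def nnt(rate_active: float, rate_placebo: float) -> int:
    """Number needed to treat: ceiling of 1 / absolute risk difference."""
    return math.ceil(1 / (rate_active - rate_placebo))

# Pooled phase-III response, brexpiprazole 2 to 4 mg/d vs placebo
print(nnt(0.455, 0.31))  # 7
```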
Although not described in product labeling, a phase-III 52-week maintenance study demonstrated brexpiprazole’s efficacy in preventing exacerbation of psychotic symptoms and impending relapse in patients with schizophrenia.10 Time from randomization to exacerbation of psychotic symptoms or impending relapse showed a beneficial effect with brexpiprazole compared with placebo (log-rank test: hazard ratio = 0.292, P < .0001). Significantly fewer patients in the brexpiprazole group relapsed compared with placebo (13.5% vs 38.5%, P < .0001), resulting in a NNT of 4 (95% CI, 3-8).
Major depressive disorder. The primary outcome measure for the acute MDD studies was change in Montgomery-Åsberg Depression Rating Scale (MADRS) scores from baseline to 6-week endpoint of the randomized treatment phase. All patients were required to have a history of inadequate response to 1 to 3 treatment trials of standard antidepressants for their current depressive episode. In addition, patients entered the randomized phase only if they had an inadequate response to antidepressant therapy during an 8-week prospective treatment trial of standard antidepressant treatment plus single-blind placebo.
Participants who responded adequately to the antidepressant in the prospective single-blind phase were not randomized, but instead continued on antidepressant treatment plus single-blind placebo for 6 weeks.
The phase-III studies showed positive results for brexpiprazole, 2 mg/d and 3 mg/d, with change in MADRS from baseline to endpoint superior to that observed with placebo.4,5
When examining treatment response, defined as a reduction of ≥50% in MADRS total score from baseline, the NNT vs placebo for response was 12 at all dosages tested; however, the NNT vs placebo for remission (defined as MADRS total score ≤10 and ≥50% improvement from baseline) ranged from 17 to 31 and was not statistically significant.6 When the results for brexpiprazole, 1 mg/d, 2 mg/d, and 3 mg/d, from the 2 phase-III trials are pooled, 23.2% of patients receiving brexpiprazole were responders, vs 14.5% for placebo, yielding a NNT of 12 (95% CI, 8-26); 14.4% of brexpiprazole-treated patients met remission criteria, vs 9.6% for placebo, resulting in a NNT of 21 (95% CI, 12-138).6
Tolerability
Overall tolerability can be evaluated by examining the percentage of patients who discontinued the clinical trials because of an adverse event (AE). In the acute schizophrenia double-blind trials for the recommended dosage range of 2 to 4 mg/d, the discontinuation rates were lower overall for patients receiving brexpiprazole compared with placebo.2,3 In the acute MDD trials, 32.6% of brexpiprazole-treated patients and 10.7% of placebo-treated patients discontinued because of AEs,4,5 yielding a number needed to harm (NNH) of 53 (95% CI, 30-235).6
The most commonly encountered AEs for MDD (incidence ≥5% and at least twice the rate for placebo) were akathisia (8.6% vs 1.7% for brexpiprazole vs placebo, and dose-related) and weight gain (6.7% vs 1.9%),1 with NNH values of 15 (95% CI, 11-23), and 22 (95% CI, 15-42), respectively.6 The most commonly encountered AE for schizophrenia (incidence ≥4% and at least twice the rate for placebo) was weight gain (4% vs 2%),1 with a NNH of 50 (95% CI, 26-1773).6
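NNH is computed with the same arithmetic as NNT, from the reciprocal of the absolute difference in adverse-event rates. Applying it to the akathisia rates quoted here (a sketch; rounding conventions can shift a result by one):

```python
import math

def nnh(rate_active: float, rate_placebo: float) -> int:
    """Number needed to harm: ceiling of 1 / absolute risk increase."""
    return math.ceil(1 / (rate_active - rate_placebo))

# Akathisia in the MDD trials: 8.6% vs 1.7%
print(nnh(0.086, 0.017))  # 15

# Akathisia in the schizophrenia trials: 5.5% vs 4.6%
print(nnh(0.055, 0.046))  # 112
```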
Of note, rates of akathisia in the schizophrenia trials were 5.5% for brexpiprazole and 4.6% for placebo,1 yielding a non-statistically significant NNH of 112.6 In a 6-week exploratory study,11 the incidence of EPS-related AEs including akathisia was lower for brexpiprazole-treated patients (14.1%) compared with those receiving aripiprazole (30.3%), for a NNT advantage for brexpiprazole of 7 (not statistically significant).
Short-term weight gain appears modest; however, outliers with an increase of ≥7% of body weight were evident in open-label long-term safety studies.1,6 Effects on glucose and lipids were small. Minimal effects on prolactin were observed, and no clinically relevant effects on the QT interval were evident.
Contraindications
The only absolute contraindication for brexpiprazole is known hypersensitivity to brexpiprazole or any of its components. Reactions have included rash, facial swelling, urticaria, and anaphylaxis.1
As with all antipsychotics, and with all agents indicated for a depressive disorder:
• there is a bolded boxed warning in the product label regarding increased mortality in geriatric patients with dementia-related psychosis. Brexpiprazole is not approved for treating patients with dementia-related psychosis
• there is a bolded boxed warning in the product label about suicidal thoughts and behaviors in patients age ≤24. The safety and efficacy of brexpiprazole have not been established in pediatric patients.
Dosing
Schizophrenia. The recommended starting dosage for brexpiprazole for schizophrenia is 1 mg/d on Days 1 to 4. Brexpiprazole can be titrated to 2 mg/d on Day 5 through Day 7, then to 4 mg/d on Day 8 based on the patient’s response and ability to tolerate the medication. The recommended target dosage is 2 to 4 mg/d with a maximum recommended daily dosage of 4 mg.
Major depressive disorder. The recommended starting dosage for brexpiprazole as adjunctive treatment for MDD is 0.5 mg or 1 mg/d. Brexpiprazole can be titrated to 1 mg/d, then up to the target dosage of 2 mg/d, with dosage increases occurring at weekly intervals based on the patient’s clinical response and ability to tolerate the agent, with a maximum recommended dosage of 3 mg/d.
Other considerations. For patients with moderate to severe hepatic impairment, or moderate, severe, or end-stage renal impairment, the maximum recommended dosage is 3 mg/d for patients with schizophrenia, and 2 mg/d for patients with MDD.
In general, dosage adjustments are recommended in patients who are known CYP2D6 poor metabolizers and in those taking concomitant CYP3A4 inhibitors or CYP2D6 inhibitors or strong CYP3A4 inducers1:
• for strong CYP2D6 or CYP3A4 inhibitors, administer one-half the usual dosage
• for strong/moderate CYP2D6 inhibitors combined with strong/moderate CYP3A4 inhibitors, administer one-quarter of the usual dosage
• for known CYP2D6 poor metabolizers taking strong/moderate CYP3A4 inhibitors, also administer one-quarter of the usual dosage
• for strong CYP3A4 inducers, double the usual dosage and further adjust based on clinical response.
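The bulleted adjustments reduce to a single dose multiplier. The function below is a hypothetical sketch encoding only the four rules listed here, not the full product label; the name, parameters, and defaults are illustrative:

```python
def brexpiprazole_dose_factor(
    poor_2d6_metabolizer: bool = False,
    strong_2d6_inhibitor: bool = False,
    moderate_2d6_inhibitor: bool = False,
    strong_3a4_inhibitor: bool = False,
    moderate_3a4_inhibitor: bool = False,
    strong_3a4_inducer: bool = False,
) -> float:
    """Multiplier applied to the usual brexpiprazole dosage per the four
    rules above. Scenarios the article does not list return 1.0."""
    reduced_2d6 = (poor_2d6_metabolizer or strong_2d6_inhibitor
                   or moderate_2d6_inhibitor)
    inhibited_3a4 = strong_3a4_inhibitor or moderate_3a4_inhibitor
    if reduced_2d6 and inhibited_3a4:
        return 0.25  # one-quarter of the usual dosage
    if strong_2d6_inhibitor or strong_3a4_inhibitor:
        return 0.5   # one-half of the usual dosage
    if strong_3a4_inducer:
        return 2.0   # double, then adjust to clinical response
    return 1.0

# A CYP2D6 poor metabolizer taking a moderate CYP3A4 inhibitor:
print(brexpiprazole_dose_factor(poor_2d6_metabolizer=True,
                                moderate_3a4_inhibitor=True))  # 0.25
```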
In clinical trials for MDD, brexpiprazole dosage was not adjusted for strong CYP2D6 inhibitors (eg, paroxetine, fluoxetine). Therefore, CYP considerations are already factored into general dosing recommendations and brexpiprazole could be administered without dosage adjustment in patients with MDD; however, under these circumstances, it would be prudent to start brexpiprazole at 0.5 mg, which, although “on-label,” represents a low starting dosage. (Whenever 2 drugs are co-administered and 1 agent has the ability to disturb the metabolism of the other, using smaller increments to the target dosage and possibly waiting longer between dosage adjustments could help avoid potential drug–drug interactions.)
No dosage adjustment for brexpiprazole is required on the basis of sex, race or ethnicity, or smoking status. Although clinical studies did not include patients age ≥65, the product label recommends that in general, dose selection for a geriatric patient should be cautious, usually starting at the low end of the dosing range, reflecting the greater frequency of decreased hepatic, renal, and cardiac function, concomitant diseases, and other drug therapy.
Bottom Line
Brexpiprazole, an atypical antipsychotic, is FDA-approved for schizophrenia and as an adjunct to antidepressants in major depressive disorder. For both indications, brexpiprazole demonstrated positive results compared with placebo in phase-III trials. Brexpiprazole is more potent at serotonin 5-HT1A and 5-HT2A receptors and displays less intrinsic activity at D2 receptors than aripiprazole, which could mean better tolerability.
Related Resources
• Citrome L. Brexpiprazole: a new dopamine D2 receptor partial agonist for the treatment of schizophrenia and major depressive disorder. Drugs Today (Barc). 2015;51(7):397-414.
• Citrome L, Stensbøl TB, Maeda K. The preclinical profile of brexpiprazole: what is its clinical relevance for the treatment of psychiatric disorders? Expert Rev Neurother. In press.
Drug Brand Names
Aripiprazole • Abilify
Brexpiprazole • Rexulti
Fluoxetine • Prozac
Paroxetine • Paxil
Disclosure
Dr. Citrome is a consultant to Alexza Pharmaceuticals, Alkermes, Allergan, Boehringer Ingelheim, Bristol-Myers Squibb, Eli Lilly and Company, Forum Pharmaceuticals, Genentech, Janssen, Jazz Pharmaceuticals, Lundbeck, Merck, Medivation, Mylan, Novartis, Noven, Otsuka, Pfizer, Reckitt Benckiser, Reviva, Shire, Sunovion, Takeda, Teva, and Valeant Pharmaceuticals; and is a speaker for Allergan, AstraZeneca, Janssen, Jazz Pharmaceuticals, Lundbeck, Merck, Novartis, Otsuka, Pfizer, Shire, Sunovion, Takeda, and Teva.
1. Rexulti [package insert]. Rockville, MD: Otsuka; 2015.
2. Correll CU, Skuban A, Ouyang J, et al. Efficacy and safety of brexpiprazole for the treatment of acute schizophrenia: a 6-week randomized, double-blind, placebo-controlled trial. Am J Psychiatry. 2015;172(9):870-880.
3. Kane JM, Skuban A, Ouyang J, et al. A multicenter, randomized, double-blind, controlled phase 3 trial of fixed-dose brexpiprazole for the treatment of adults with acute schizophrenia. Schizophr Res. 2015;164(1-3):127-135.
4. Thase ME, Youakim JM, Skuban A, et al. Adjunctive brexpiprazole 1 and 3 mg for patients with major depressive disorder following inadequate response to antidepressants: a phase 3, randomized, double-blind study [published online August 4, 2015]. J Clin Psychiatry. doi: 10.4088/JCP.14m09689.
5. Thase ME, Youakim JM, Skuban A, et al. Efficacy and safety of adjunctive brexpiprazole 2 mg in major depressive disorder: a phase 3, randomized, placebo-controlled study in patients with inadequate response to antidepressants [published online August 4, 2015]. J Clin Psychiatry. doi: 10.4088/JCP.14m09688.
6. Citrome L. Brexpiprazole for schizophrenia and as adjunct for major depressive disorder: a systematic review of the efficacy and safety profile for this newly approved antipsychotic—what is the number needed to treat, number needed to harm and likelihood to be helped or harmed? Int J Clin Pract. 2015;69(9):978-997.
7. Maeda K, Sugino H, Akazawa H, et al. Brexpiprazole I: in vitro and in vivo characterization of a novel serotonin-dopamine activity modulator. J Pharmacol Exp Ther. 2014;350(3):589-604.
8. Volavka J, Citrome L. Oral antipsychotics for the treatment of schizophrenia: heterogeneity in efficacy and tolerability should drive decision-making. Expert Opin Pharmacother. 2009;10(12):1917-1928.
9. Citrome L. Adjunctive aripiprazole, olanzapine, or quetiapine for major depressive disorder: an analysis of number needed to treat, number needed to harm, and likelihood to be helped or harmed. Postgrad Med. 2010;122(4):39-48.
10. Hobart M, Ouyang J, Forbes A, et al. Efficacy and safety of brexpiprazole (OPC-34712) as maintenance treatment in adults with schizophrenia: a randomized, double-blind, placebo-controlled study. Poster presented at: the American Society of Clinical Psychopharmacology Annual Meeting; June 22 to 25, 2015; Miami, FL.
11. Citrome L, Ota A, Nagamizu K, Perry P, et al. The effect of brexpiprazole (OPC‐34712) versus aripiprazole in adult patients with acute schizophrenia: an exploratory study. Poster presented at: the Society of Biological Psychiatry Annual Scientific Meeting and Convention; May 15, 2015; Toronto, Ontario, Canada.
What to do when your depressed patient develops mania
When a patient with an established diagnosis of depression newly develops signs of mania or hypomania, a cascade of diagnostic and therapeutic questions ensues: Does the event “automatically” signify the presence of bipolar disorder (BD), or could manic symptoms be secondary to another underlying medical problem, a prescribed antidepressant or non-psychotropic medication, or illicit substances?
Even more questions confront the clinician: If mania symptoms are nothing more than an adverse drug reaction, will they go away by stopping the presumed offending agent? Or do symptoms always indicate the unmasking of a bipolar diathesis? Should anti-manic medication be prescribed immediately? If so, which one(s) and for how long? How extensive a medical or neurologic workup is indicated?
And, how do you differentiate ambiguous hypomania symptoms (irritability, insomnia, agitation) from other phenomena, such as akathisia, anxiety, and overstimulation?
In this article, we present an overview of how to approach and answer these key questions, so that you can identify, comprehend, and manage manic symptoms that arise in the course of your patient’s treatment for depression (Box).
Does disease exist on a unipolar−bipolar continuum?
There has been a resurgence of interest in Kraepelin’s original notion of mania and depression as falling along a continuum, rather than being distinct categories of pathology. True bipolar mania has its own identifiable epidemiology, familiality, and treatment, but symptomatic shades of gray often pose a formidable diagnostic and therapeutic challenge.
For example, DSM-5 relaxed its definition of “mixed” episodes of BD to include subsyndromal mania features in unipolar depression. When a patient with unipolar depression develops a full, unequivocal manic episode, there usually isn’t much ambiguity or confusion about initial management: assure a safe environment, stop any antidepressants, rule out drug- or medically induced causes, and begin an acute anti-manic medication.
Next steps can, sometimes, be murkier:
• formulate a definitive, overarching diagnosis
• provide psycho-education
• forecast return to work or school
• discuss prognosis and likelihood of relapse
• address necessary lifestyle modifications (eg, sleep hygiene, elimination of alcohol and illicit drug use)
• determine whether indefinite maintenance pharmacotherapy is indicated— and, if so, with which medication(s).
CASE A diagnostic formulation isn’t always black and white
Ms. J, age 56, a medically healthy woman, has a 10-year history of depression and anxiety that has been treated effectively for most of that time with venlafaxine, 225 mg/d. The mother of 4 grown children, Ms. J has worked steadily for >20 years as a flight attendant for an international airline.
Today, Ms. J is brought by ambulance from work to the emergency department in a paranoid and agitated state. The admission follows her having e-blasted airline corporate executives with a voluminous manifesto that she worked on around the clock the preceding week, in which she explained her bold ideas to revolutionize the airline industry, under her leadership.
Ms. J’s family history is unremarkable for psychiatric illness.
How does one approach a case such as Ms. J’s?
Stark examples of classical mania, as depicted in this case vignette, are easy to recognize but not necessarily straightforward, nosologically. Consider the following not-so-straightforward elements of Ms. J’s case:
• a first-lifetime episode of mania or hypomania is rare after age 50
• Ms. J took a serotonin-norepinephrine reuptake inhibitor (SNRI) for many years without evidence of mood destabilization
• years of repetitive chronobiological stress (including probable frequent time zone changes with likely sleep disruption) apparently did not trigger mood destabilization
• none of Ms. J’s 4 pregnancies led to postpartum mood episodes
• at least on the surface, there are no obvious features that point to likely causes of a secondary mania (eg, drug-induced, toxic, metabolic, or medical)
• Ms. J has no known family history of BD or any other mood disorder.
A case such as Ms. J’s calls for a systematic strategy, best broken into 2 segments: (1) a period of acute initial assessment and treatment and (2) later efforts focused on broader diagnostic evaluation and longer-term relapse prevention.
Initial assessment and treatment
Immediate assessment and management hinge on initial triage and on forming a working diagnostic impression. Although full-blown mania usually is obvious (sometimes even without a formal interview), be alert to patients who might minimize or altogether disavow mania symptoms—often because of denial of illness, misidentification of symptoms, or impaired insight about changes in thinking, mood, or behavior.
Because florid mania, by definition, impairs psychosocial functioning, the context of an initial presentation often holds diagnostic relevance. Manic patients who display disruptive behaviors often are brought to treatment by a third party, whereas a less severely ill patient might be more inclined to seek treatment for herself (himself) when psychosis is absent and insight is less compromised or when the patient feels she (he) might be depressed.
It is not uncommon for a manic patient to report “depression” as the chief complaint or to omit elements related to psychomotor acceleration (such as racing thoughts or psychomotor agitation) in the description of symptoms. An accurate diagnosis often requires clinical probing and clarification of symptoms (eg, differentiating simple insomnia with consequent next-day fatigue from loss of the need for sleep with intact or even enhanced next-day energy) or discriminating racing thoughts from anxious ruminations that might be more intrusive than rapid.
Presentations of frank mania also can come to light as a consequence of symptoms, rather than as symptoms per se (eg, conflict in relationships, problems at work, financial reversals).
Particularly in patients who do not have a history of mania, avoid the temptation to begin or modify existing pharmacotherapy until you have performed a basic initial evaluation. Immediate considerations for initial assessment and management include the following:
Provide containment. Ensure a safe setting, level of care, and frequency of monitoring. Evaluate suicide risk (particularly when mixed features are present), and risk of withdrawal from any psychoactive substances.
Engage significant others. Close family members can provide essential history, particularly when a patient’s insight about her illness and need for treatment are impaired. Family members and significant others also often play important roles in helping to restrict access to finances, fostering medication adherence, preventing access to weapons in the home, and sharing information with providers about substance use or high-risk behavior.
Systematically assess for DSM-5 symptoms of mania and depression. DSM-5 modified criteria for mania/hypomania to necessitate increased energy, in addition to change in mood, to make a syndromal diagnosis. Useful during a clinical interview is the popular mnemonic DIGFAST to aid recognition of core mania symptomsa:
• Distractibility
• Indiscretion/impulsivity
• Grandiosity
• Flight of ideas
• Activity increase
• Sleep deficit
• Talkativeness.
aAlso see: “Mnemonics in a mnutshell: 32 aids to psychiatric diagnosis,” in the October 2008 issue of Current Psychiatry and in the archive at CurrentPsychiatry.com.
These symptoms should represent a departure from normal baseline characteristics; it often is helpful to ask a significant other or collateral historian how the present symptoms differ from the patient’s usual state.
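The symptom-count logic behind the DIGFAST screen can be sketched as a simple decision rule. DSM-5 requires at least 3 criterion-B symptoms (4 if the mood disturbance is irritability alone), alongside the mood and energy changes noted above. This is an illustrative sketch only, not a diagnostic instrument; the symptom labels are informal stand-ins for the mnemonic items, and the duration and impairment criteria are not modeled:

```python
# Illustrative sketch of the DSM-5 symptom-count rule for mania.
# Not a diagnostic instrument: duration, impairment, and mood/energy
# criteria are not modeled here, and clinical judgment is required.

DIGFAST = [
    "distractibility",
    "indiscretion",       # impulsive, high-risk activity
    "grandiosity",
    "flight_of_ideas",
    "activity_increase",
    "sleep_deficit",      # decreased need for sleep
    "talkativeness",
]

def meets_symptom_count(symptoms, mood_only_irritable):
    """Apply the DSM-5 criterion-B count: >=3 symptoms,
    or >=4 when the mood disturbance is irritability alone."""
    count = sum(1 for s in symptoms if s in DIGFAST)
    threshold = 4 if mood_only_irritable else 3
    return count >= threshold

# Elevated mood with 4 DIGFAST symptoms meets the count.
print(meets_symptom_count(
    ["grandiosity", "sleep_deficit", "talkativeness", "flight_of_ideas"],
    mood_only_irritable=False))  # True
```

The higher threshold for purely irritable mood reflects how nonspecific irritability is as a presenting symptom.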
Assess for unstable medical conditions or toxicity states. When evaluating an acute change in mental status, toxicology screening is relatively standard and the absence of illicit substances should seldom, if ever, be taken for granted—especially because occult substance use can lead to identification of false-positive BD “cases.”1
Stop any antidepressant. During a manic episode, continuing antidepressant medication serves no purpose other than to contribute to or exacerbate mania symptoms. Nonetheless, observational studies demonstrate that approximately 15% of syndromally manic patients continue to receive an antidepressant, often when a clinician perceives more severe depression during mania, multiple prior depressive episodes, current anxiety, or rapid cycling.2
Importantly, antidepressants have been shown to harm, rather than alleviate, presentations that involve a mixed state,3 and have no demonstrated value in preventing post-manic depression. Mere elimination of an antidepressant might ease symptoms during a manic or mixed episode.4
In some cases, it might be advisable to taper, not abruptly stop, a short half-life serotonergic antidepressant, even in the setting of mania, to minimize the potential for aggravating autonomic dysregulation that can result from antidepressant discontinuation effects.
Begin anti-manic pharmacotherapy. Initiation of an anti-manic mood stabilizer, such as lithium or divalproex, has long been standard in the treatment of acute mania.
In the 1990s, protocols for oral loading of divalproex (20 to 30 mg/kg/d) gained popularity for achieving more rapid symptom improvement than might occur with lithium. In the current era, atypical antipsychotics have all but replaced mood stabilizers as an initial intervention to contain mania symptoms quickly (and with less risk than first-generation antipsychotics for acute adverse motor effects from so-called rapid neuroleptization).
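The oral-loading arithmetic described above is straightforward weight-based dosing; a minimal sketch (illustrative arithmetic only, not dosing guidance, and the example weight is hypothetical):

```python
def divalproex_loading_range(weight_kg):
    """Return the (low, high) daily oral-loading range in mg,
    per the 20 to 30 mg/kg/d protocol described above.
    Illustrative arithmetic only; not dosing guidance."""
    return 20 * weight_kg, 30 * weight_kg

# Hypothetical 70-kg patient:
low, high = divalproex_loading_range(70)
print(f"{low}-{high} mg/d")  # 1400-2100 mg/d
```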
Because atypical antipsychotics often rapidly subdue mania, psychosis, and agitation, regardless of the underlying process, many practitioners might feel more comfortable initiating them than a mood stabilizer when the diagnosis is ambiguous or provisional, although their longer-term efficacy and safety, relative to traditional mood stabilizers, remains contested. Considerations for choosing from among feasible anti-manic pharmacotherapies are summarized in Table 1.5
Normalize the sleep-wake cycle. Chronobiological and circadian variables, such as irregular sleep patterns, are thought to contribute to the pathophysiology of affective switch in BD. Behavioral and pharmacotherapeutic efforts to impose a normal sleep−wake schedule are considered fundamental to stabilizing acute mania.
Facilitate next steps after acute stabilization. For inpatients, this might involve step-down to a partial hospitalization or intensive outpatient program, alongside taking steps to ensure continued treatment adherence and minimize relapse.
What medical and neurologic workup is appropriate?
Not every first lifetime presentation of mania requires extensive medical and neurologic workup, particularly among patients who have a history of depression and those whose presentation neatly fits the demographic and clinical profile of newly emergent BD. Basic assessment should determine whether any new medication has been started that could plausibly contribute to abnormal mental status (Table 2).
Nevertheless, evaluation of almost all first presentations of mania should include:
• urine toxicology screen
• complete blood count
• comprehensive metabolic panel
• thyroid-stimulating hormone assay
• serum vitamin B12 level assay
• serum folic acid level assay
• rapid plasma reagin test.
Clinical features that usually lead a clinician to pursue a more detailed medical and neurologic evaluation of first-episode mania include:
• onset age >40
• absence of a family history of mood disorder
• symptoms arising during a major medical illness
• multiple medications
• suspicion of a degenerative or hereditary neurologic disorder
• altered state of consciousness
• signs of cortical or diffuse subcortical dysfunction (eg, cognitive deficits, motor deficits, tremor)
• abnormal vital signs.
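The screening logic above amounts to a simple any-red-flag rule: the presence of any listed feature should prompt the more detailed evaluation. A minimal sketch, in which the flag names are informal labels for the bulleted features rather than a validated instrument:

```python
# Sketch of the red-flag screen for first-episode mania described above.
# Any single flag prompts a more detailed medical/neurologic workup.
# Flag names are illustrative labels, not a validated instrument.

RED_FLAGS = {
    "onset_after_40",
    "no_family_history_of_mood_disorder",
    "onset_during_major_medical_illness",
    "multiple_medications",
    "suspected_neurologic_disorder",
    "altered_consciousness",
    "cortical_or_subcortical_signs",
    "abnormal_vital_signs",
}

def needs_extended_workup(features):
    """Return True if any red-flag feature is present."""
    return bool(RED_FLAGS & set(features))

print(needs_extended_workup({"onset_after_40", "multiple_medications"}))  # True
print(needs_extended_workup(set()))  # False
```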
Depending on the presentation, additional testing might include:
• tests of HIV antibody, immune autoantibodies, and Lyme disease antibody
• heavy metal screening (when suggested by environmental exposure)
• lumbar puncture (eg, in a setting of manic delirium or suspected central nervous system infection or paraneoplastic syndrome)
• neuroimaging (note: MRI provides better visualization than CT of white matter pathology and small-vessel cerebrovascular disease)
• electroencephalography.
Making an overarching diagnosis: Is mania always bipolar disorder?
Mania is considered a manifestation of BD when symptoms cannot be attributed to another psychiatric condition, another underlying medical or neurologic condition, or a toxic-metabolic state (Table 3 and Table 46-9). Classification of mania that occurs soon after antidepressant exposure in patients without a known history of BD continues to be the subject of debate, varying in its conceptualization across editions of DSM.
The National Institute of Mental Health (NIMH) Systematic Treatment Enhancement Program for Bipolar Disorder, or STEP-BD, observed a fairly low (approximately 10%) incidence of switch from depression to mania when an antidepressant is added to a mood stabilizer; the study authors concluded that much of what is presumed to be antidepressant-induced mania might simply be the natural course of illness.10
Notably, several reports suggest that antidepressants might pose a greater risk of mood destabilization in people with BD I than with either BD II or other suspected variants on the bipolar spectrum.
DSM-5 advises that a diagnosis of substance-induced mood disorder appropriately describes symptoms that spontaneously dissipate once an antidepressant has been discontinued, whereas a diagnosis of BD can be made when manic or hypomanic symptoms persist at a syndromal level after an antidepressant has been stopped and its physiological effects are no longer present. With respect to time course, the International Society of Bipolar Disorders proposes that, beyond 12 to 16 weeks after an antidepressant has been started or the dosage has been increased, it is unlikely that new-onset mania/hypomania can reasonably be attributed to “triggering” by an antidepressant11 (although antidepressants should be stopped when symptoms of mania emerge).
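The ISBD time-course heuristic can be expressed as a simple window rule. A sketch under stated assumptions: the 16-week cutoff below is one choice within the 12-to-16-week range the proposal gives, and actual attribution always requires full clinical assessment:

```python
def antidepressant_attribution_plausible(weeks_since_change, window_weeks=16):
    """Per the ISBD heuristic above, new-onset mania/hypomania emerging
    more than ~12 to 16 weeks after an antidepressant was started (or its
    dosage increased) is unlikely to be attributable to the drug.
    `window_weeks` is an assumed cutoff within that range; this is a
    sketch, not a clinical rule engine."""
    return weeks_since_change <= window_weeks

print(antidepressant_attribution_plausible(6))   # True: within the window
print(antidepressant_attribution_plausible(26))  # False: likely natural course
```

Note that, as the text states, the antidepressant should be stopped when mania symptoms emerge regardless of where the onset falls in this window.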
Several clinical features have been linked in the literature with an increased susceptibility to BD after an initial depressive episode, including:
• early (pre-adolescent) age at onset of first mood disorder episode6
• family history of BD, highly recurrent depression, or psychosis12,13
• psychosis when depressed.7,14
A number of other characteristics of depressive illness—including seasonal depression, atypical depressive features, suicidality, irritability, anxiety or substance use comorbidity, postpartum mood episodes, and brief recurrent depressive episodes—have been described in the literature as potential correlates of a bipolar diathesis; none have proved to be robust or pathognomonic of a BD diagnosis, as opposed to a unipolar diagnosis.
Data from the NIMH Collaborative Depression Study suggest that recurrent mania/hypomania after an antidepressant-associated polarity switch is greater when a family history of BD is present; other clinical variables might hold less predictive value.15
In addition, although some practitioners consider a history of nonresponse to trials of multiple antidepressants suggestive of an underlying bipolar process, polarity is only one of many variables that must be considered in the differential diagnosis of antidepressant-resistant depression.b Likewise, molecular genetic studies do not support a link between antidepressant nonresponse and the likelihood of a diagnosis of BD.16
bSee “A practical approach to subtyping depression among your patients” in the April 2014 issue of Current Psychiatry or in the archive at CurrentPsychiatry.com.
Indefinite pharmacotherapy for bipolar disorder?
An important but nagging issue when diagnosing BD after a first manic (or hypomanic) episode is the implied need for indefinite pharmacotherapy to sustain remission and prevent relapse and recurrence.
The likelihood of subsequent depression or mania/hypomania remains high after an index manic/hypomanic episode, particularly for 6 to 8 months after recovery.8,17 Natural history data suggest that, during the year that follows a first lifetime mania, approximately 40% of patients experience a second manic episode.8 A second lifetime mania might be especially likely in patients whose index episode involved mood-congruent psychosis, low premorbid work functioning, an initial manic (as opposed to mixed) episode,17 or early age at onset.8
In the absence of randomized, placebo-controlled studies of maintenance pharmacotherapy after a first lifetime manic episode, clinical judgment often drives decisions about the duration of continuing pharmacotherapy after initial symptoms resolve. The Texas Medication Algorithm Project for BD advises that:
Similarly, in the most recent (2004) Expert Consensus Guideline Series for the Treatment of Bipolar Disorder,19 84% of practitioner−respondents favored indefinite mood stabilizer therapy after a second lifetime manic episode. No recommendation was made about the duration of maintenance pharmacotherapy after a first lifetime manic/hypomanic episode.
Avoid or reintroduce an antidepressant if depression recurs after a first mania?
Controversies surrounding antidepressant use in BD are extensive; detailed discussion is beyond the scope of this review (Goldberg and Ghaemi provided a broader discussion of risks and benefits of antidepressants in BD20). Although the main clinical concern regarding antidepressant use was, at one time, the potential to induce mania or accelerate the frequency of recurrent episodes, more recent, empirical studies suggest that the greater risk of using antidepressants for BD is lack of efficacy.10,21
If a careful longitudinal history and clinical evaluation reveal that an initial manic episode heralds the onset of BD, decisions about whether to avoid an antidepressant (as opposed to using other, more evidence-based interventions for bipolar depression) depend on a number of variables, including establishing whether the index episode was manic or hypomanic; ruling out current subthreshold mixed features; and clarifying how recently mania developed. Decisions about future antidepressant use (or avoidance) might be less clear if an index manic/hypomanic episode was brief and self-limited once the antidepressant was stopped.
Although some experts eschew antidepressant monotherapy after such occurrences, there is no body of literature to inform decisions about the safety or efficacy of undertaking a future antidepressant trial in such patients. That said, reasonable judgment probably includes several considerations:
• Re-exposure to the same antidepressant that was associated with an induction of mania is likely riskier than choosing a different antidepressant; in general, purely serotonergic antidepressants or bupropion are considered to pose less risk of mood destabilization than is seen with an SNRI or tricyclic antidepressant.
• After a manic episode, a subsequent antidepressant trial generally shouldn’t be attempted without concurrent anti-manic medication.
• Introducing any antidepressant is probably ill-advised in the recent (~2 months) aftermath of acute manic/hypomanic symptoms.22
• Patients and their significant others should be apprised of the risk of emerging symptoms of mania or hypomania, or mixed features, and should be familiar with key target symptoms to watch for. Prospective mood charting can be helpful.
• Patients should be monitored closely both for an exacerbation of depression and recurrence of mania/hypomania symptoms.
• Any antidepressant should be discontinued promptly at the first sign of psychomotor acceleration or the emergence of mixed features, as defined by DSM-5.
Psychoeducation and forecasting
Functional recovery from a manic episode can lag behind symptomatic recovery. Subsyndromal symptoms often persist after a full episode subsides.
Mania often is followed by a depressive episode, and questions inevitably arise about how to prevent and treat these episodes. Because the median duration of a manic episode is approximately 13 weeks,23 it is crucial for patients and their immediate family to recognize that recovery might be gradual, and that it will likely take time before the patient can resume full-time responsibilities at work or school or in the home.
Today, a patient who is hospitalized for severe acute mania (as Ms. J was, in the case vignette) seldom remains an inpatient long enough to achieve remission of symptoms; sometimes, she (he) might continue to manifest significant symptoms, even though decisions about the “medical necessity” of ongoing inpatient care tend to be governed mainly by issues of safety and imminent danger. (This web-exclusive Table20,24,25 provides considerations when making the transition from the acute phase to the continuation phase of treatment.)
To minimize risk of relapse, psycho-education should include discussion of:
• psychiatrically deleterious effects of alcohol and illicit drug use
• suicide risk, including what to do in an emergency
• protecting a regular sleep schedule and avoiding sleep deprivation
• the potential for poor medication adherence and management of side effects
• the need for periodic laboratory monitoring, as needed
• the role of adjunctive psychotherapy and effective stress management
• familiarity with symptoms that serve as warning signs, and how to monitor their onset.
Bottom Line
When a patient being treated for depression develops signs of mania or hypomania, stop any antidepressant and consider initiating a mood stabilizer, antipsychotic, or both, to contain and stabilize symptoms. Entertain medical and substance-related causes of mania symptoms, and evaluate and treat as suggested by the patient’s presentation. Long-term drug therapy to prevent recurrence of mania/hypomania, as well as risks and benefits of future exposure to antidepressants, should be decided case by case.
Related Resources
• Proudfoot J, Whitton A, Parker G, et al. Triggers of mania and depression in young adults with bipolar disorder. J Affect Disord. 2012;143(1-3):196-202.
• Stange JP, Sylvia LG, Magalhães PV, et al. Extreme attributions predict transition from depression to mania or hypomania in bipolar disorder. J Psychiatr Res. 2013;47(10):1329-1336.
Drug Brand Names
Albuterol • Proventil, Ventolin
Anastrozole • Arimidex
Aripiprazole • Abilify
Bupropion • Wellbutrin
Carbamazepine • Tegretol
Chloroquine • Aralen
Ciprofloxacin • Cipro
Clarithromycin • Biaxin
Clomiphene • Clomid
Digoxin • Digox, Lanoxin
Divalproex • Depakote
5-Fluorouracil • Carac, Efudex
Human chorionic gonadotropin • Novarel, Pregnyl
Ifosfamide • Ifex
Isoniazid • Nydrazid
Lamotrigine • Lamictal
Letrozole • Femara
Lithium • Eskalith, Lithobid
Lurasidone • Latuda
Mefloquine • Lariam
Olanzapine • Zyprexa
Olanzapine/fluoxetine combination • Symbyax
Pramipexole • Mirapex
Procarbazine • Matulane
Quetiapine • Seroquel
Ropinirole • Requip
Rotigotine • Neupro
Venlafaxine • Effexor
Zidovudine • Retrovir
Disclosures
Dr. Goldberg is a consultant to Merck & Co. and Sunovion. He is a member of the speakers’ bureau of AstraZeneca, Janssen, Merck & Co., Takeda and Lundbeck, and Sunovion.
Dr. Ernst reports no financial relationships with any company whose products are mentioned in this article or with manufacturers of competing products.
1. Goldberg JF, Garno JL, Callahan AM, et al. Overdiagnosis of bipolar disorder among substance use disorder inpatients with mood instability. J Clin Psychiatry. 2008;69(11):1751-1757.
2. Rosa AR, Cruz B, Franco C, et al. Why do clinicians maintain antidepressants in some patients with acute mania? Hints from the European Mania in Bipolar Longitudinal Evaluation of Medication (EMBLEM), a large naturalistic study. J Clin Psychiatry. 2010;71(8):1000-1006.
3. Goldberg JF, Perlis RH, Ghaemi SN, et al. Adjunctive antidepressant use and symptomatic recovery among bipolar depressed patients with concomitant manic symptoms: findings from the STEP-BD. Am J Psychiatry. 2007;164(9):1348-1355.
4. Bowers MB Jr, McKay BG, Mazure CM. Discontinuation of antidepressants in newly admitted psychotic patients. J Neuropsychiatr Clin Neurosci. 2003;15(2):227-230.
5. Perlis RH, Welge JA, Vornik LA, et al. Atypical antipsychotics in the treatment of mania: a meta-analysis of randomized, placebo-controlled trials. J Clin Psychiatry. 2006;67(4):509-516.
6. Geller B, Zimmerman B, Williams M, et al. Bipolar disorder at prospective follow-up of adults who had prepubertal major depressive disorder. Am J Psychiatry. 2001;158(1):125-127.
7. Goldberg JF, Harrow M, Whiteside JE. Risk for bipolar illness in patients initially hospitalized for unipolar depression. Am J Psychiatry. 2001;158(8):1265-1270.
8. Yatham LN, Kauer-Sant’Anna M, Bond DJ, et al. Course and outcome after the first manic episode in patients with bipolar disorder: prospective 12-month data from the Systematic Treatment Optimization Project for Early Mania project. Can J Psychiatry. 2009;54(2):105-112.
9. Chaudron LH, Pies RW. The relationship between postpartum psychosis and bipolar disorder: a review. J Clin Psychiatry. 2003;64(11):1284-1292.
10. Sachs GS, Nierenberg AA, Calabrese JR, et al. Effectiveness of adjunctive antidepressant treatment for bipolar depression. N Engl J Med. 2007;356(17):1711-1722.
11. Tohen M, Frank E, Bowden CL, et al. The International Society for Bipolar Disorders (ISBD) Task Force report on the nomenclature of course and outcome in bipolar disorders. Bipolar Disord. 2009;11(5):453-473.
12. Schulze TG, Hedeker D, Zandi P, et al. What is familial about familial bipolar disorder? Resemblance among relatives across a broad spectrum of phenotypic characteristics. Arch Gen Psychiatry. 2006;63(12):1368-1376.
13. Song J, Bergen SE, Kuja-Halkola R, et al. Bipolar disorder and its relation to major psychiatric disorders: a family-based study in the Swedish population. Bipolar Disord. 2015;17(2):184-193.
14. Goes FS, Sadler B, Toolan J, et al. Psychotic features in bipolar and unipolar depression. Bipolar Disord. 2007;9(8):901-906.
15. Fiedorowicz JG, Endicott J, Solomon DA, et al. Course of illness following prospectively observed mania or hypomania in individuals presenting with unipolar depression. Bipolar Disord. 2012;14(6):664-671.
16. Tansey KE, Guipponi M, Domenici E, et al. Genetic susceptibility for bipolar disorder and response to antidepressants in major depressive disorder. Am J Med Genetics B Neuropsychiatr Genet. 2014;165B(1):77-83.
17. Tohen M, Zarate CA Jr, Hennen J, et al. The McLean-Harvard First-Episode Mania Study: prediction of recovery and first recurrence. Am J Psychiatry. 2003;160(12):2099-2107.
18. Suppes T, Dennehy EB, Swann AC, et al. Report of the Texas Consensus Conference Panel on medication treatment of bipolar disorder 2000. J Clin Psychiatry. 2002;63(4):288-299.
19. Keck PE Jr, Perlis RH, Otto MW, et al. The Expert Consensus Guideline Series: treatment of bipolar disorder 2004. Postgrad Med Special Report. 2004:1-120.
20. Goldberg JF, Ghaemi SN. Benefits and limitations of antidepressants and traditional mood stabilizers for treatment of bipolar depression. Bipolar Disord. 2005;7(suppl 5):3-12.
21. Sidor MM, MacQueen GM. Antidepressants for the acute treatment of bipolar depression: a systematic review and meta-analysis. J Clin Psychiatry. 2011;72(2):156-167.
22. MacQueen GM, Trevor Young L, Marriott M, et al. Previous mood state predicts response and switch rates in patients with bipolar depression. Acta Psychiatr Scand. 2002;105(6):414-418.
23. Solomon DA, Leon AC, Coryell WH, et al. Longitudinal course of bipolar I disorder: duration of mood episodes. Arch Gen Psychiatry. 2010;67(4):339-347.
24. Tohen M, Chengappa KN, Suppes T, et al. Relapse prevention in bipolar I disorder: 18-month comparison of olanzapine plus mood stabiliser v. mood stabiliser alone. Br J Psychiatry. 2004;184:337-345.
25. Suppes T, Vieta E, Liu S, et al. Maintenance treatment for patients with bipolar I disorder: results from a North American study of quetiapine in combination with lithium or divalproex (trial 127). Am J Psychiatry. 2009;166(4):476-488.
When a known depressed patient newly develops signs of mania or hypomania, a cascade of diagnostic and therapeutic questions ensues: Does the event “automatically” signify the presence of bipolar disorder (BD), or could manic symptoms be secondary to another underlying medical problem, a prescribed antidepressant or non-psychotropic medication, or illicit substances?
Even more questions confront the clinician: If mania symptoms are nothing more than an adverse drug reaction, will they go away by stopping the presumed offending agent? Or do symptoms always indicate the unmasking of a bipolar diathesis? Should anti-manic medication be prescribed immediately? If so, which one(s) and for how long? How extensive a medical or neurologic workup is indicated?
And, how do you differentiate ambiguous hypomania symptoms (irritability, insomnia, agitation) from other phenomena, such as akathisia, anxiety, and overstimulation?
In this article, we present an overview of how to approach and answer these key questions, so that you can identify, comprehend, and manage manic symptoms that arise in the course of your patient’s treatment for depression (Box).
Does disease exist on a unipolar−bipolar continuum?
There has been a resurgence of interest in Kraepelin’s original notion of mania and depression as falling along a continuum, rather than being distinct categories of pathology. True bipolar mania has its own identifiable epidemiology, familiality, and treatment, but symptomatic shades of gray often pose a formidable diagnostic and therapeutic challenge.
For example, DSM-5 relaxed its definition of “mixed” episodes of BD to include subsyndromal mania features in unipolar depression. When a patient with unipolar depression develops a full, unequivocal manic episode, there usually isn’t much ambiguity or confusion about initial management: assure a safe environment, stop any antidepressants, rule out drug- or medically induced causes, and begin an acute anti-manic medication.
Next steps can, sometimes, be murkier:
• formulate a definitive, overarching diagnosis
• provide psycho-education
• forecast return to work or school
• discuss prognosis and likelihood of relapse
• address necessary lifestyle modifications (eg, sleep hygiene, elimination of alcohol and illicit drug use)
• determine whether indefinite maintenance pharmacotherapy is indicated— and, if so, with which medication(s).
CASE A diagnostic formulation isn’t always black and white
Ms. J, age 56, a medically healthy woman, has a 10-year history of depression and anxiety that has been treated effectively for most of that time with venlafaxine, 225 mg/d. The mother of 4 grown children, Ms. J has worked steadily for >20 years as a flight attendant for an international airline.
Today, Ms. J is brought by ambulance from work to the emergency department in a paranoid and agitated state. The admission follows her having e-blasted airline corporate executives with a voluminous manifesto that she worked on around the clock the preceding week, in which she explained her bold ideas to revolutionize the airline industry, under her leadership.
Ms. J’s family history is unremarkable for psychiatric illness.
How does one approach a case such as Ms. J’s?
Stark examples of classical mania, as depicted in this case vignette, are easy to recognize but not necessarily nosologically straightforward. Consider the following not-so-straightforward elements of Ms. J’s case:
• a first-lifetime episode of mania or hypomania is rare after age 50
• Ms. J took a serotonin-norepinephrine reuptake inhibitor (SNRI) for many years without evidence of mood destabilization
• years of repetitive chronobiological stress (including probable frequent time zone changes with likely sleep disruption) apparently did not trigger mood destabilization
• none of Ms. J’s 4 pregnancies led to postpartum mood episodes
• at least on the surface, there are no obvious features that point to likely causes of a secondary mania (eg, drug-induced, toxic, metabolic, or medical)
• Ms. J has no known family history of BD or any other mood disorder.
Approaching a case such as Ms. J’s must involve a systematic strategy that can best be broken into 2 segments: (1) a period of acute initial assessment and treatment and (2) later efforts focused on broader diagnostic evaluation and longer-term relapse prevention.
Initial assessment and treatment
Immediate assessment and management hinges on initial triage and forming a working diagnostic impression. Although full-blown mania usually is obvious (sometimes even without a formal interview), be alert to patients who might minimize or altogether disavow mania symptoms—often because of denial of illness, misidentification of symptoms, or impaired insight about changes in thinking, mood, or behavior.
Because florid mania, by definition, impairs psychosocial functioning, the context of an initial presentation often holds diagnostic relevance. Manic patients who display disruptive behaviors often are brought to treatment by a third party, whereas a less severely ill patient might be more inclined to seek treatment for herself (himself) when psychosis is absent and insight is less compromised or when the patient feels she (he) might be depressed.
It is not uncommon for a manic patient to report “depression” as the chief complaint or to omit elements related to psychomotor acceleration (such as racing thoughts or psychomotor agitation) in the description of symptoms. An accurate diagnosis often requires clinical probing and clarification of symptoms (eg, differentiating simple insomnia with consequent next-day fatigue from loss of the need for sleep with intact or even enhanced next-day energy) or discriminating racing thoughts from anxious ruminations that might be more intrusive than rapid.
Presentations of frank mania also can come to light as a consequence of symptoms, rather than as symptoms per se (eg, conflict in relationships, problems at work, financial reversals).
Particularly in patients who do not have a history of mania, avoid the temptation to begin or modify existing pharmacotherapy until you have performed a basic initial evaluation. Immediate considerations for initial assessment and management include the following:
Provide containment. Ensure a safe setting, level of care, and frequency of monitoring. Evaluate suicide risk (particularly when mixed features are present), and risk of withdrawal from any psychoactive substances.
Engage significant others. Close family members can provide essential history, particularly when a patient’s insight about her illness and need for treatment are impaired. Family members and significant others also often play important roles in helping to restrict access to finances, fostering medication adherence, preventing access to weapons in the home, and sharing information with providers about substance use or high-risk behavior.
Systematically assess for DSM-5 symptoms of mania and depression. DSM-5 modified criteria for mania/hypomania to necessitate increased energy, in addition to change in mood, to make a syndromal diagnosis. Useful during a clinical interview is the popular mnemonic DIGFAST to aid recognition of core mania symptomsa:
• Distractibility
• Indiscretion/impulsivity
• Grandiosity
• Flight of ideas
• Activity increase
• Sleep deficit
• Talkativeness.
aAlso see: “Mnemonics in a mnutshell: 32 aids to psychiatric diagnosis,” in the October 2008 issue of Current Psychiatry and in the archive at CurrentPsychiatry.com.
These symptoms should represent a departure from normal baseline characteristics; it often is helpful to ask a significant other or collateral historian how the present symptoms differ from the patient’s usual state.
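For illustration only, the DIGFAST tally and the DSM-5 associated-symptom threshold (at least 3 of these symptoms, or at least 4 when the mood disturbance is irritable only) can be sketched as a simple count. The item names and the helper function are hypothetical; this is a teaching aid, not a diagnostic instrument:

```python
# Hypothetical sketch of the DIGFAST tally described above.
# Item names and this helper are illustrative only.
DIGFAST = {
    "distractibility",
    "indiscretion",        # impulsivity / risk-taking
    "grandiosity",
    "flight_of_ideas",
    "activity_increase",
    "sleep_deficit",       # decreased need for sleep
    "talkativeness",
}

def meets_symptom_threshold(endorsed, irritable_only=False):
    """Apply the DSM-5 associated-symptom count: >=3 symptoms,
    or >=4 when the mood disturbance is irritable only."""
    count = len(DIGFAST & set(endorsed))
    return count >= (4 if irritable_only else 3)

print(meets_symptom_threshold({"grandiosity", "sleep_deficit", "talkativeness"}))  # True
```

A collateral historian’s report could feed such a tally, but the judgment about departure from the patient’s usual baseline, discussed above, cannot be reduced to a count.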
Assess for unstable medical conditions or toxicity states. When evaluating an acute change in mental status, toxicology screening is relatively standard and the absence of illicit substances should seldom, if ever, be taken for granted—especially because occult substance use can lead to identification of false-positive BD “cases.”1
Stop any antidepressant. During a manic episode, continuing antidepressant medication serves no purpose other than to contribute to or exacerbate mania symptoms. Nonetheless, observational studies demonstrate that approximately 15% of syndromally manic patients continue to receive an antidepressant, often when a clinician perceives more severe depression during mania, multiple prior depressive episodes, current anxiety, or rapid cycling.2
Importantly, antidepressants have been shown to harm, rather than alleviate, presentations that involve a mixed state,3 and have no demonstrated value in preventing post-manic depression. Mere elimination of an antidepressant might ease symptoms during a manic or mixed episode.4
In some cases, it might be advisable to taper, not abruptly stop, a short half-life serotonergic antidepressant, even in the setting of mania, to minimize the potential for aggravating autonomic dysregulation that can result from antidepressant discontinuation effects.
Begin anti-manic pharmacotherapy. Initiation of an anti-manic mood stabilizer, such as lithium or divalproex, has been standard in the treatment of acute mania.
In the 1990s, protocols for oral loading of divalproex (20 to 30 mg/kg/d) gained popularity for achieving more rapid symptom improvement than might occur with lithium. In the current era, atypical antipsychotics have all but replaced mood stabilizers as an initial intervention to contain mania symptoms quickly (and with less risk than first-generation antipsychotics for acute adverse motor effects from so-called rapid neuroleptization).
Because atypical antipsychotics often rapidly subdue mania, psychosis, and agitation, regardless of the underlying process, many practitioners might feel more comfortable initiating them than a mood stabilizer when the diagnosis is ambiguous or provisional, although their longer-term efficacy and safety, relative to traditional mood stabilizers, remains contested. Considerations for choosing from among feasible anti-manic pharmacotherapies are summarized in Table 1.5
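As a minimal arithmetic sketch of the oral-loading protocol mentioned above (20 to 30 mg/kg/d), with a hypothetical 70-kg patient as the worked example; this illustrates the published range and is not dosing guidance:

```python
def divalproex_loading_range_mg(weight_kg):
    """Daily oral-loading range per the 20-30 mg/kg/d protocol cited above."""
    return 20 * weight_kg, 30 * weight_kg

# Hypothetical 70-kg patient
low_mg, high_mg = divalproex_loading_range_mg(70)
print(low_mg, high_mg)  # 1400 2100
```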
Normalize the sleep-wake cycle. Chronobiological and circadian variables, such as irregular sleep patterns, are thought to contribute to the pathophysiology of affective switch in BD. Behavioral and pharmacotherapeutic efforts to impose a normal sleep-wake schedule are considered fundamental to stabilizing acute mania.
Facilitate next steps after acute stabilization. For inpatients, this might involve step-down to a partial hospitalization or intensive outpatient program, alongside taking steps to ensure continued treatment adherence and minimize relapse.
What medical and neurologic workup is appropriate?
Not every first lifetime presentation of mania requires extensive medical and neurologic workup, particularly among patients who have a history of depression and those whose presentation neatly fits the demographic and clinical profile of newly emergent BD. Basic assessment should determine whether any new medication has been started that could plausibly contribute to abnormal mental status (Table 2).
Nevertheless, evaluation of almost all first presentations of mania should include:
• urine toxicology screen
• complete blood count
• comprehensive metabolic panel
• thyroid-stimulating hormone assay
• serum vitamin B12 level assay
• serum folic acid level assay
• rapid plasma reagin test.
Clinical features that usually lead a clinician to pursue a more detailed medical and neurologic evaluation of first-episode mania include:
• onset age >40
• absence of a family history of mood disorder
• symptoms arising during a major medical illness
• multiple medications
• suspicion of a degenerative or hereditary neurologic disorder
• altered state of consciousness
• signs of cortical or diffuse subcortical dysfunction (eg, cognitive deficits, motor deficits, tremor)
• abnormal vital signs.
Depending on the presentation, additional testing might include:
• tests of HIV antibody, immune autoantibodies, and Lyme disease antibody
• heavy metal screening (when suggested by environmental exposure)
• lumbar puncture (eg, in a setting of manic delirium or suspected central nervous system infection or paraneoplastic syndrome)
• neuroimaging (note: MRI provides better visualization than CT of white matter pathology and small vessel cerebrovascular disease)
• electroencephalography.
Making an overarching diagnosis: Is mania always bipolar disorder?
Mania is considered a manifestation of BD when symptoms cannot be attributed to another psychiatric condition, another underlying medical or neurologic condition, or a toxic-metabolic state (Table 3 and Table 46-9). Classification of mania that occurs soon after antidepressant exposure in patients without a known history of BD continues to be the subject of debate, varying in its conceptualization across editions of DSM.
The National Institute of Mental Health (NIMH) Systematic Treatment Enhancement Program for Bipolar Disorder, or STEP-BD, observed a fairly low (approximately 10%) incidence of switch from depression to mania when an antidepressant is added to a mood stabilizer; the study authors concluded that much of what is presumed to be antidepressant-induced mania might simply be the natural course of illness.10
Notably, several reports suggest that antidepressants might pose a greater risk of mood destabilization in people with BD I than with either BD II or other suspected variants on the bipolar spectrum.
DSM-5 advises that a diagnosis of substance-induced mood disorder appropriately describes symptoms that spontaneously dissipate once an antidepressant has been discontinued, whereas a diagnosis of BD can be made when manic or hypomanic symptoms persist at a syndromal level after an antidepressant has been stopped and its physiological effects are no longer present. With respect to time course, the International Society of Bipolar Disorders proposes that, beyond 12 to 16 weeks after an antidepressant has been started or the dosage has been increased, it is unlikely that new-onset mania/hypomania can reasonably be attributed to “triggering” by an antidepressant11 (although antidepressants should be stopped when symptoms of mania emerge).
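The ISBD time-course proposal above can be read as a simple attribution heuristic. The function name and the default 16-week cutoff (the upper end of the 12-to-16-week window) are assumptions for illustration:

```python
def plausibly_antidepressant_triggered(weeks_since_start_or_dose_increase,
                                       window_weeks=16):
    """Per the ISBD proposal cited above: new-onset mania/hypomania emerging
    beyond ~12-16 weeks after an antidepressant start or dose increase is
    unlikely to be attributable to antidepressant "triggering."
    window_weeks defaults to the upper bound of that range (an assumption)."""
    return weeks_since_start_or_dose_increase <= window_weeks

print(plausibly_antidepressant_triggered(6))   # True
print(plausibly_antidepressant_triggered(26))  # False
```

Whatever the attribution, the antidepressant should still be stopped when symptoms of mania emerge.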
Several clinical features have been linked in the literature with an increased susceptibility to BD after an initial depressive episode, including:
• early (pre-adolescent) age at onset of first mood disorder episode6
• family history of BD, highly recurrent depression, or psychosis12,13
• psychosis when depressed.7,14
A number of other characteristics of depressive illness—including seasonal depression, atypical depressive features, suicidality, irritability, anxiety or substance use comorbidity, postpartum mood episodes, and brief recurrent depressive episodes—have been described in the literature as potential correlates of a bipolar diathesis; none have proved to be robust or pathognomonic of a BD diagnosis, as opposed to a unipolar diagnosis.
Data from the NIMH Collaborative Depression Study suggest that recurrent mania/hypomania after an antidepressant-associated polarity switch is greater when a family history of BD is present; other clinical variables might hold less predictive value.15
In addition, although some practitioners consider a history of nonresponse to trials of multiple antidepressants suggestive of an underlying bipolar process, polarity is only one of many variables that must be considered in the differential diagnosis of antidepressant-resistant depression.b Likewise, molecular genetic studies do not support a link between antidepressant nonresponse and the likelihood of a diagnosis of BD.16
bSee “A practical approach to subtyping depression among your patients” in the April 2014 issue of Current Psychiatry or in the archive at CurrentPsychiatry.com.
Indefinite pharmacotherapy for bipolar disorder?
An important but nagging issue when diagnosing BD after a first manic (or hypomanic) episode is the implied need for indefinite pharmacotherapy to sustain remission and prevent relapse and recurrence.
The likelihood of subsequent depression or mania/hypomania remains high after an index manic/hypomanic episode, particularly for 6 to 8 months after recovery.8,17 Natural history data suggest that, during the year that follows a first lifetime mania, approximately 40% of patients experience a second manic episode.8 A second lifetime mania might be especially likely in patients whose index episode involved mood-congruent psychosis, low premorbid work functioning, and an initial manic episode, as opposed to a mixed episode17 or early age at onset.8
In the absence of randomized, placebo-controlled studies of maintenance pharmacotherapy after a first lifetime manic episode, clinical judgment often drives decisions about the duration of continuing pharmacotherapy after initial symptoms resolve; the Texas Medication Algorithm Project for BD offers guidance on this question.
Similarly, in the most recent (2004) Expert Consensus Guideline Series for the Treatment of Bipolar Disorder,19 84% of practitioner-respondents favored indefinite mood stabilizer therapy after a second lifetime manic episode. No recommendation was made about the duration of maintenance pharmacotherapy after a first lifetime manic/hypomanic episode.
Avoid or reintroduce an antidepressant if depression recurs after a first mania?
Controversies surrounding antidepressant use in BD are extensive; detailed discussion is beyond the scope of this review (Goldberg and Ghaemi provided a broader discussion of risks and benefits of antidepressants in BD20). Although the main clinical concern regarding antidepressant use was, at one time, the potential to induce mania or accelerate the frequency of recurrent episodes, more recent, empirical studies suggest that the greater risk of using antidepressants for BD is lack of efficacy.10,21
If a careful longitudinal history and clinical evaluation reveal that an initial manic episode heralds the onset of BD, decisions about whether to avoid an antidepressant (as opposed to using other, more evidence-based interventions for bipolar depression) depend on a number of variables, including establishing whether the index episode was manic or hypomanic; ruling out current subthreshold mixed features; and clarifying how recently mania developed. Decisions about future antidepressant use (or avoidance) might be less clear if an index manic/hypomanic episode was brief and self-limited once the antidepressant was stopped.
Although some experts eschew antidepressant monotherapy after such occurrences, there is no body of literature to inform decisions about the safety or efficacy of undertaking a future antidepressant trial in such patients. That said, reasonable judgment probably includes several considerations:
• Re-exposure to the same antidepressant that was associated with an induction of mania is likely riskier than choosing a different antidepressant; in general, purely serotonergic antidepressants or bupropion are considered to pose less risk of mood destabilization than is seen with an SNRI or tricyclic antidepressant.
• After a manic episode, a subsequent antidepressant trial generally shouldn’t be attempted without concurrent anti-manic medication.
• Introducing any antidepressant is probably ill-advised in the recent (~2 months) aftermath of acute manic/hypomanic symptoms.22
• Patients and their significant others should be apprised of the risk of emerging symptoms of mania or hypomania, or mixed features, and should be familiar with key target symptoms to watch for. Prospective mood charting can be helpful.
• Patients should be monitored closely both for an exacerbation of depression and recurrence of mania/hypomania symptoms.
• Any antidepressant should be discontinued promptly at the first sign of psychomotor acceleration or the emergence of mixed features, as defined by DSM-5.
Psychoeducation and forecasting
Functional recovery from a manic episode can lag behind symptomatic recovery. Subsyndromal symptoms often persist after a full episode subsides.
Mania often is followed by a depressive episode, and questions inevitably arise about how to prevent and treat these episodes. Because the median duration of a manic episode is approximately 13 weeks,23 it is crucial for patients and their immediate family to recognize that recovery might be gradual, and that it will likely take time before the patient can resume full-time responsibilities at work or school or in the home.
Today, a patient who is hospitalized for severe acute mania (as Ms. J was, in the case vignette) seldom remains an inpatient long enough to achieve remission of symptoms; sometimes, she (he) might continue to manifest significant symptoms, even though decisions about the “medical necessity” of ongoing inpatient care tend to be governed mainly by issues of safety and imminent danger. (This web-exclusive Table20,24,25 provides considerations when making the transition from the acute phase to the continuation phase of treatment.)
To minimize risk of relapse, psychoeducation should include discussion of:
• psychiatrically deleterious effects of alcohol and illicit drug use
• suicide risk, including what to do in an emergency
• protecting a regular sleep schedule and avoiding sleep deprivation
• the potential for poor medication adherence and management of side effects
• the need for periodic laboratory monitoring, as needed
• the role of adjunctive psychotherapy and effective stress management
• familiarity with symptoms that serve as warning signs, and how to monitor their onset.
Bottom Line
When a patient being treated for depression develops signs of mania or hypomania, stop any antidepressant and consider initiating a mood stabilizer, antipsychotic, or both, to contain and stabilize symptoms. Entertain medical and substance-related causes of mania symptoms, and evaluate and treat as suggested by the patient’s presentation. Long-term drug therapy to prevent recurrence of mania/hypomania, as well as risks and benefits of future exposure to antidepressants, should be decided case by case.
Related Resources
• Proudfoot J, Whitton A, Parker G, et al. Triggers of mania and depression in young adults with bipolar disorder. J Affect Disord. 2012;143(1-3):196-202.
• Stange JP, Sylvia LG, Magalhães PV, et al. Extreme attributions predict transition from depression to mania or hypomania in bipolar disorder. J Psychiatr Res. 2013;47(10):1329-1336.
Drug Brand Names
Albuterol • Proventil, Ventolin
Anastrozole • Arimidex
Aripiprazole • Abilify
Bupropion • Wellbutrin
Carbamazepine • Tegretol
Chloroquine • Aralen
Ciprofloxacin • Cipro
Clarithromycin • Biaxin
Clomiphene • Clomid
Digoxin • Digox, Lanoxin
Divalproex • Depakote
5-Fluorouracil • Carac, Efudex
Human chorionic gonadotropin • Novarel, Pregnyl
Ifosfamide • Ifex
Isoniazid • Nydrazid
Lamotrigine • Lamictal
Letrozole • Femara
Lithium • Eskalith, Lithobid
Lurasidone • Latuda
Mefloquine • Lariam
Olanzapine • Zyprexa
Olanzapine/fluoxetine combination • Symbyax
Pramipexole • Mirapex
Procarbazine • Matulane
Quetiapine • Seroquel
Ropinirole • Requip
Rotigotine • Neupro
Venlafaxine • Effexor
Zidovudine • Retrovir
Disclosures
Dr. Goldberg is a consultant to Merck & Co. and Sunovion. He is a member of the speakers’ bureau of AstraZeneca, Janssen, Merck & Co., Takeda and Lundbeck, and Sunovion.
Dr. Ernst reports no financial relationships with any company whose products are mentioned in this article or with manufacturers of competing products.
When a known depressed patient newly develops signs of mania or hypomania, a cascade of diagnostic and therapeutic questions ensues: Does the event “automatically” signify the presence of bipolar disorder (BD), or could manic symptoms be secondary to another underlying medical problem, a prescribed antidepressant or non-psychotropic medication, or illicit substances?
Even more questions confront the clinician: If mania symptoms are nothing more than an adverse drug reaction, will they go away by stopping the presumed offending agent? Or do symptoms always indicate the unmasking of a bipolar diathesis? Should anti-manic medication be prescribed immediately? If so, which one(s) and for how long? How extensive a medical or neurologic workup is indicated?
And, how do you differentiate ambiguous hypomania symptoms (irritability, insomnia, agitation) from other phenomena, such as akathisia, anxiety, and overstimulation?
In this article, we present an overview of how to approach and answer these key questions, so that you can identify, comprehend, and manage manic symptoms that arise in the course of your patient’s treatment for depression (Box).
Does disease exist on a unipolar−bipolar continuum?
There has been a resurgence of interest in Kraepelin’s original notion of mania and depression as falling along a continuum, rather than being distinct categories of pathology. True bipolar mania has its own identifiable epidemiology, familiality, and treatment, but symptomatic shades of gray often pose a formidable diagnostic and therapeutic challenge.
For example, DSM-5 relaxed its definition of “mixed” episodes of BD to include subsyndromal mania features in unipolar depression. When a patient with unipolar depression develops a full, unequivocal manic episode, there usually isn’t much ambiguity or confusion about initial management: assure a safe environment, stop any antidepressants, rule out drug- or medically induced causes, and begin an acute anti-manic medication.
Next steps can, sometimes, be murkier:
• formulate a definitive, overarching diagnosis
• provide psycho-education
• forecast return to work or school
• discuss prognosis and likelihood of relapse
• address necessary lifestyle modifications (eg, sleep hygiene, elimination of alcohol and illicit drug use)
• determine whether indefinite maintenance pharmacotherapy is indicated— and, if so, with which medication(s).
CASE A diagnostic formulation isn’t always black and white
Ms. J, age 56, a medically healthy woman, has a 10-year history of depression and anxiety that has been treated effectively for most of that time with venlafaxine, 225 mg/d. The mother of 4 grown children, Ms. J has worked steadily for >20 years as a flight attendant for an international airline.
Today, Ms. J is brought by ambulance from work to the emergency department in a paranoid and agitated state. The admission follows her having e-blasted airline corporate executives with a voluminous manifesto that she worked on around the clock the preceding week, in which she explained her bold ideas to revolutionize the airline industry, under her leadership.
Ms. J’s family history is unremarkable for psychiatric illness.
How does one approach a case such as Ms. J’s?
Stark examples of classical mania, as depicted in this case vignette, are easy to recognize but not necessarily straightforward, nosologically. Consider the following not-so-straightforward elements of Ms. J’s case:
• a first-lifetime episode of mania or hypomania is rare after age 50
• Ms. J took a serotonin-norepinephrine reuptake inhibitor (SNRI) for many years without evidence of mood destabilization
• years of repetitive chronobiological stress (including probable frequent time zone changes with likely sleep disruption) apparently did not trigger mood destabilization
• none of Ms. J’s 4 pregnancies led to postpartum mood episodes
• at least on the surface, there are no obvious features that point to likely causes of a secondary mania (eg, drug-induced, toxic, metabolic, or medical)
• Ms. J has no known family history of BD or any other mood disorder.
Approaching a case such as Ms. J’s must involve a systematic strategy that can best be broken into 2 segments: (1) a period of acute initial assessment and treatment and (2) later efforts focused on broader diagnostic evaluation and longer-term relapse prevention.
Initial assessment and treatment
Immediate assessment and management hinges on initial triage and forming a working diagnostic impression. Although full-blown mania usually is obvious (sometimes even without a formal interview), be alert to patients who might minimize or altogether disavow mania symptoms—often because of denial of illness, misidentification of symptoms, or impaired insight about changes in thinking, mood, or behavior.
Because florid mania, by definition, impairs psychosocial functioning, the context of an initial presentation often holds diagnostic relevance. Manic patients who display disruptive behaviors often are brought to treatment by a third party, whereas a less severely ill patient might be more inclined to seek treatment for herself (himself) when psychosis is absent and insight is less compromised or when the patient feels she (he) might be depressed.
It is not uncommon for a manic patient to report “depression” as the chief complaint or to omit elements related to psychomotor acceleration (such as racing thoughts or psychomotor agitation) in the description of symptoms. An accurate diagnosis often requires clinical probing and clarification of symptoms (eg, differentiating simple insomnia with consequent next-day fatigue from loss of the need for sleep with intact or even enhanced next-day energy) or discriminating racing thoughts from anxious ruminations that might be more intrusive than rapid.
Presentations of frank mania also can come to light as a consequence of symptoms, rather than as symptoms per se (eg, conflict in relationships, problems at work, financial reversals).
Particularly in patients who do not have a history of mania, avoid the temptation to begin or modify existing pharmacotherapy until you have performed a basic initial evaluation. Immediate considerations for initial assessment and management include the following:
Provide containment. Ensure a safe setting, level of care, and frequency of monitoring. Evaluate suicide risk (particularly when mixed features are present), and risk of withdrawal from any psychoactive substances.
Engage significant others. Close family members can provide essential history, particularly when a patient’s insight about her illness and need for treatment are impaired. Family members and significant others also often play important roles in helping to restrict access to finances, fostering medication adherence, preventing access to weapons in the home, and sharing information with providers about substance use or high-risk behavior.
Systematically assess for DSM-5 symptoms of mania and depression. DSM-5 modified criteria for mania/hypomania to necessitate increased energy, in addition to change in mood, to make a syndromal diagnosis. Useful during a clinical interview is the popular mnemonic DIGFAST to aid recognition of core mania symptomsa:
• Distractibility
• Indiscretion/impulsivity
• Grandiosity
• Flight of ideas
• Activity increase
• Sleep deficit
• Talkativeness.
aAlso see: “Mnemonics in a mnutshell: 32 aids to psychiatric diagnosis,” in the October 2008 issue Current Psychiatry and in the archive at CurrentPsychiatry.com.
These symptoms should represent a departure from normal baseline characteristics; it often is helpful to ask a significant other or collateral historian how the present symptoms differ from the patient’s usual state.
Assess for unstable medical conditions or toxicity states. When evaluating an acute change in mental status, toxicology screening is relatively standard and the absence of illicit substances should seldom, if ever, be taken for granted—especially because occult substance use can lead to identification of false-positive BD “cases.”1
Stop any antidepressant. During a manic episode, continuing antidepressant medication serves no purpose other than to contribute to or exacerbate mania symptoms. Nonetheless, observational studies demonstrate that approximately 15% of syndromally manic patients continue to receive an antidepressant, often when a clinician perceives more severe depression during mania, multiple prior depressive episodes, current anxiety, or rapid cycling.2
Importantly, antidepressants have been shown to harm, rather than alleviate, presentations that involve a mixed state,3 and have no demonstrated value in preventing post-manic depression. Mere elimination of an antidepressant might ease symptoms during a manic or mixed episode.4
In some cases, it might be advisable to taper, not abruptly stop, a short half-life serotonergic antidepressant, even in the setting of mania, to minimize the potential for aggravating autonomic dysregulation that can result from antidepressant discontinuation effects.
Begin anti-manic pharmacotherapy. Initiation of an anti-manic mood stabilizer, such as lithium and divalproex, has been standard in the treatment of acute mania.
In the 1990s, protocols for oral loading of divalproex (20 to 30 mg/kg/d) gained popularity for achieving more rapid symptom improvement than might occur with lithium. In the current era, atypical antipsychotics have all but replaced mood stabilizers as an initial intervention to contain mania symptoms quickly (and with less risk than first-generation antipsychotics for acute adverse motor effects from so-called rapid neuroleptization).
Because atypical antipsychotics often rapidly subdue mania, psychosis, and agitation, regardless of the underlying process, many practitioners might feel more comfortable initiating them than a mood stabilizer when the diagnosis is ambiguous or provisional, although their longer-term efficacy and safety, relative to traditional mood stabilizers, remains contested. Considerations for choosing from among feasible anti-manic pharmacotherapies are summarized in Table 1.5
Normalize the sleep-wake cycle. Chronobiological and circadian variables, such as irregular sleep patterns, are thought to contribute to the pathophysiology of affective switch in BD. Behavioral and pharmacotherapeutic efforts to impose a normal sleep−wake schedule are considered fundamental to stabilizing acute mania.
Facilitate next steps after acute stabilization. For inpatients, this might involve step-down to a partial hospitalization or intensive outpatient program, alongside taking steps to ensure continued treatment adherence and minimize relapse.
What medical and neurologic workup is appropriate?
Not every first lifetime presentation of mania requires extensive medical and neurologic workup, particularly among patients who have a history of depression and those whose presentation neatly fits the demographic and clinical profile of newly emergent BD. Basic assessment should determine whether any new medication has been started that could plausibly contribute to abnormal mental status (Table 2).
Nevertheless, evaluation of almost all first presentations of mania should include:
• urine toxicology screen
• complete blood count
• comprehensive metabolic panel
• thyroid-stimulating hormone assay
• serum vitamin B12 level assay
• serum folic acid level assay
• rapid plasma reagin test.
Clinical features that usually lead a clinician to pursue a more detailed medical and neurologic evaluation of first-episode mania include:
• onset age >40
• absence of a family history of mood disorder
• symptoms arising during a major medical illness
• multiple medications
• suspicion of a degenerative or hereditary neurologic disorder
• altered state of consciousness
• signs of cortical or diffuse subcortical dysfunction (eg, cognitive deficits, motor deficits, tremor)
• abnormal vital signs.
Depending on the presentation, additional testing might include:
• tests of HIV antibody, immune autoantibodies, and Lyme disease antibody
• heavy metal screening (when suggested by environmental exposure)
• lumbar puncture (eg, in a setting of manic delirium or suspected central nervous system infection or paraneoplastic syndrome)
• neuroimaging (note: MRI provides better visualization than CT of white matter pathology and small-vessel cerebrovascular disease)
• electroencephalography.
Making an overarching diagnosis: Is mania always bipolar disorder?
Mania is considered a manifestation of BD when symptoms cannot be attributed to another psychiatric condition, another underlying medical or neurologic condition, or a toxic-metabolic state (Table 3 and Table 46-9). Classification of mania that occurs soon after antidepressant exposure in patients without a known history of BD continues to be the subject of debate, varying in its conceptualization across editions of DSM.
The National Institute of Mental Health (NIMH) Systematic Treatment Enhancement Program for Bipolar Disorder, or STEP-BD, observed a fairly low (approximately 10%) incidence of switch from depression to mania when an antidepressant is added to a mood stabilizer; the study authors concluded that much of what is presumed to be antidepressant-induced mania might simply be the natural course of illness.10
Notably, several reports suggest that antidepressants might pose a greater risk of mood destabilization in people with BD I than with either BD II or other suspected variants on the bipolar spectrum.
DSM-5 advises that a diagnosis of substance-induced mood disorder appropriately describes symptoms that spontaneously dissipate once an antidepressant has been discontinued, whereas a diagnosis of BD can be made when manic or hypomanic symptoms persist at a syndromal level after an antidepressant has been stopped and its physiological effects are no longer present. With respect to time course, the International Society of Bipolar Disorders proposes that, beyond 12 to 16 weeks after an antidepressant has been started or the dosage has been increased, it is unlikely that new-onset mania/hypomania can reasonably be attributed to “triggering” by an antidepressant11 (although antidepressants should be stopped when symptoms of mania emerge).
Several clinical features have been linked in the literature with an increased susceptibility to BD after an initial depressive episode, including:
• early (pre-adolescent) age at onset of first mood disorder episode6
• family history of BD, highly recurrent depression, or psychosis12,13
• psychosis when depressed.7,14
A number of other characteristics of depressive illness, including seasonal depression, atypical depressive features, suicidality, irritability, anxiety or substance use comorbidity, postpartum mood episodes, and brief recurrent depressive episodes, have been described in the literature as potential correlates of a bipolar diathesis; however, none has proved to be a robust or pathognomonic marker of a bipolar, as opposed to unipolar, diagnosis.
Data from the NIMH Collaborative Depression Study suggest that the risk of recurrent mania/hypomania after an antidepressant-associated polarity switch is greater when a family history of BD is present; other clinical variables might hold less predictive value.15
In addition, although some practitioners consider a history of nonresponse to trials of multiple antidepressants suggestive of an underlying bipolar process, polarity is only one of many variables that must be considered in the differential diagnosis of antidepressant-resistant depression.b Likewise, molecular genetic studies do not support a link between antidepressant nonresponse and the likelihood of a diagnosis of BD.16
bSee “A practical approach to subtyping depression among your patients” in the April 2014 issue of Current Psychiatry or in the archive at CurrentPsychiatry.com.
Indefinite pharmacotherapy for bipolar disorder?
An important but nagging issue when diagnosing BD after a first manic (or hypomanic) episode is the implied need for indefinite pharmacotherapy to sustain remission and prevent relapse and recurrence.
The likelihood of subsequent depression or mania/hypomania remains high after an index manic/hypomanic episode, particularly for 6 to 8 months after recovery.8,17 Natural history data suggest that, during the year that follows a first lifetime mania, approximately 40% of patients experience a second manic episode.8 A second lifetime mania might be especially likely in patients whose index episode involved mood-congruent psychosis, low premorbid work functioning, an initial manic (as opposed to mixed) episode,17 or early age at onset.8
In the absence of randomized, placebo-controlled studies of maintenance pharmacotherapy after a first lifetime manic episode, clinical judgment often drives decisions about the duration of continuing pharmacotherapy after initial symptoms resolve. The Texas Medication Algorithm Project for BD advises that:
Similarly, in the most recent (2004) Expert Consensus Guideline Series for the Treatment of Bipolar Disorder,19 84% of practitioner−respondents favored indefinite mood stabilizer therapy after a second lifetime manic episode. No recommendation was made about the duration of maintenance pharmacotherapy after a first lifetime manic/hypomanic episode.
Avoid or reintroduce an antidepressant if depression recurs after a first mania?
Controversies surrounding antidepressant use in BD are extensive; detailed discussion is beyond the scope of this review (Goldberg and Ghaemi provide a broader discussion of the risks and benefits of antidepressants in BD20). Although the main clinical concern regarding antidepressant use was, at one time, the potential to induce mania or accelerate the frequency of recurrent episodes, more recent empirical studies suggest that the greater risk of using antidepressants for BD is lack of efficacy.10,21
If a careful longitudinal history and clinical evaluation reveal that an initial manic episode heralds the onset of BD, decisions about whether to avoid an antidepressant (as opposed to using other, more evidence-based interventions for bipolar depression) depend on a number of variables, including establishing whether the index episode was manic or hypomanic; ruling out current subthreshold mixed features; and clarifying how recently mania developed. Decisions about future antidepressant use (or avoidance) might be less clear if an index manic/hypomanic episode was brief and self-limited once the antidepressant was stopped.
Although some experts eschew antidepressant monotherapy after such occurrences, there is no body of literature to inform decisions about the safety or efficacy of undertaking a future antidepressant trial in such patients. That said, reasonable judgment probably includes several considerations:
• Re-exposure to the same antidepressant that was associated with an induction of mania is likely riskier than choosing a different antidepressant; in general, purely serotonergic antidepressants or bupropion are considered to pose less risk of mood destabilization than is seen with an SNRI or tricyclic antidepressant.
• After a manic episode, a subsequent antidepressant trial generally shouldn’t be attempted without concurrent anti-manic medication.
• Introducing any antidepressant is probably ill-advised in the recent (~2 months) aftermath of acute manic/hypomanic symptoms.22
• Patients and their significant others should be apprised of the risk of emerging symptoms of mania or hypomania, or mixed features, and should be familiar with key target symptoms to watch for. Prospective mood charting can be helpful.
• Patients should be monitored closely both for an exacerbation of depression and recurrence of mania/hypomania symptoms.
• Any antidepressant should be discontinued promptly at the first sign of psychomotor acceleration or the emergence of mixed features, as defined by DSM-5.
Psychoeducation and forecasting
Functional recovery from a manic episode can lag behind symptomatic recovery. Subsyndromal symptoms often persist after a full episode subsides.
Mania often is followed by a depressive episode, and questions inevitably arise about how to prevent and treat these episodes. Because the median duration of a manic episode is approximately 13 weeks,23 it is crucial for patients and their immediate families to recognize that recovery might be gradual, and that it will likely take time before the patient can resume full-time responsibilities at work or school or in the home.
Today, a patient who is hospitalized for severe acute mania (as Ms. J was, in the case vignette) seldom remains an inpatient long enough to achieve remission of symptoms; sometimes, she (he) might continue to manifest significant symptoms, even though decisions about the “medical necessity” of ongoing inpatient care tend to be governed mainly by issues of safety and imminent danger. (This web exclusive Table20,24,25 provides considerations when making the transition from the acute phase to the continuation phase of treatment.)
To minimize risk of relapse, psycho-education should include discussion of:
• psychiatrically deleterious effects of alcohol and illicit drug use
• suicide risk, including what to do in an emergency
• protecting a regular sleep schedule and avoiding sleep deprivation
• the potential for poor medication adherence and management of side effects
• the need for periodic laboratory monitoring, as needed
• the role of adjunctive psychotherapy and effective stress management
• familiarity with symptoms that serve as warning signs, and how to monitor their onset.
Bottom Line
When a patient being treated for depression develops signs of mania or hypomania, stop any antidepressant and consider initiating a mood stabilizer, antipsychotic, or both, to contain and stabilize symptoms. Entertain medical and substance-related causes of mania symptoms, and evaluate and treat as suggested by the patient’s presentation. Long-term drug therapy to prevent recurrence of mania/hypomania, as well as risks and benefits of future exposure to antidepressants, should be decided case by case.
Related Resources
• Proudfoot J, Whitton A, Parker G, et al. Triggers of mania and depression in young adults with bipolar disorder. J Affect Disord. 2012;143(1-3):196-202.
• Stange JP, Sylvia LG, Magalhães PV, et al. Extreme attributions predict transition from depression to mania or hypomania in bipolar disorder. J Psychiatr Res. 2013;47(10):1329-1336.
Drug Brand Names
Albuterol • Proventil, Ventolin
Anastrozole • Arimidex
Aripiprazole • Abilify
Bupropion • Wellbutrin
Carbamazepine • Tegretol
Chloroquine • Aralen
Ciprofloxacin • Cipro
Clarithromycin • Biaxin
Clomiphene • Clomid
Digoxin • Digox, Lanoxin
Divalproex • Depakote
5-Fluorouracil • Carac, Efudex
Human chorionic gonadotropin • Novarel, Pregnyl
Ifosfamide • Ifex
Isoniazid • Nydrazid
Lamotrigine • Lamictal
Letrozole • Femara
Lithium • Eskalith, Lithobid
Lurasidone • Latuda
Mefloquine • Lariam
Olanzapine • Zyprexa
Olanzapine/fluoxetine combination • Symbyax
Pramipexole • Mirapex
Procarbazine • Matulane
Quetiapine • Seroquel
Ropinirole • Requip
Rotigotine • Neupro
Venlafaxine • Effexor
Zidovudine • Retrovir
Disclosures
Dr. Goldberg is a consultant to Merck & Co. and Sunovion. He is a member of the speakers’ bureau of AstraZeneca, Janssen, Merck & Co., Takeda and Lundbeck, and Sunovion.
Dr. Ernst reports no financial relationships with any company whose products are mentioned in this article or with manufacturers of competing products.
1. Goldberg JF, Garno JL, Callahan AM, et al. Overdiagnosis of bipolar disorder among substance use disorder inpatients with mood instability. J Clin Psychiatry. 2008;69(11):1751-1757.
2. Rosa AR, Cruz B, Franco C, et al. Why do clinicians maintain antidepressants in some patients with acute mania? Hints from the European Mania in Bipolar Longitudinal Evaluation of Medication (EMBLEM), a large naturalistic study. J Clin Psychiatry. 2010;71(8):1000-1006.
3. Goldberg JF, Perlis RH, Ghaemi SN, et al. Adjunctive antidepressant use and symptomatic recovery among bipolar depressed patients with concomitant manic symptoms: findings from the STEP-BD. Am J Psychiatry. 2007;164(9):1348-1355.
4. Bowers MB Jr, McKay BG, Mazure CM. Discontinuation of antidepressants in newly admitted psychotic patients. J Neuropsychiatr Clin Neurosci. 2003;15(2):227-230.
5. Perlis RH, Welge JA, Vornik LA, et al. Atypical antipsychotics in the treatment of mania: a meta-analysis of randomized, placebo-controlled trials. J Clin Psychiatry. 2006;67(4):509-516.
6. Geller B, Zimmerman B, Williams M, et al. Bipolar disorder at prospective follow-up of adults who had prepubertal major depressive disorder. Am J Psychiatry. 2001;158(1):125-127.
7. Goldberg JF, Harrow M, Whiteside JE. Risk for bipolar illness in patients initially hospitalized for unipolar depression. Am J Psychiatry. 2001;158(8):1265-1270.
8. Yatham LN, Kauer-Sant’Anna M, Bond DJ, et al. Course and outcome after the first manic episode in patients with bipolar disorder: prospective 12-month data from the Systematic Treatment Optimization Project for Early Mania project. Can J Psychiatry. 2009;54(2):105-112.
9. Chaudron LH, Pies RW. The relationship between postpartum psychosis and bipolar disorder: a review. J Clin Psychiatry. 2003;64(11):1284-1292.
10. Sachs GS, Nierenberg AA, Calabrese JR, et al. Effectiveness of adjunctive antidepressant treatment for bipolar depression. N Engl J Med. 2007;356(17):1711-1722.
11. Tohen M, Frank E, Bowden CL, et al. The International Society for Bipolar Disorders (ISBD) Task Force report on the nomenclature of course and outcome in bipolar disorders. Bipolar Disord. 2009;11(5):453-473.
12. Schulze TG, Hedeker D, Zandi P, et al. What is familial about familial bipolar disorder? Resemblance among relatives across a broad spectrum of phenotypic characteristics. Arch Gen Psychiatry. 2006;63(12):1368-1376.
13. Song J, Bergen SE, Kuja-Halkola R, et al. Bipolar disorder and its relation to major psychiatric disorders: a family-based study in the Swedish population. Bipolar Disord. 2015;17(2):184-193.
14. Goes FS, Sadler B, Toolan J, et al. Psychotic features in bipolar and unipolar depression. Bipolar Disord. 2007;9(8):901-906.
15. Fiedorowicz JG, Endicott J, Solomon DA, et al. Course of illness following prospectively observed mania or hypomania in individuals presenting with unipolar depression. Bipolar Disord. 2012;14(6):664-671.
16. Tansey KE, Guipponi M, Domenici E, et al. Genetic susceptibility for bipolar disorder and response to antidepressants in major depressive disorder. Am J Med Genet B Neuropsychiatr Genet. 2014;165B(1):77-83.
17. Tohen M, Zarate CA Jr, Hennen J, et al. The McLean-Harvard First-Episode Mania Study: prediction of recovery and first recurrence. Am J Psychiatry. 2003;160(12):2099-2107.
18. Suppes T, Dennehy EB, Swann AC, et al. Report of the Texas Consensus Conference Panel on medication treatment of bipolar disorder 2000. J Clin Psychiatry. 2002;63(4):288-299.
19. Keck PE Jr, Perlis RH, Otto MW, et al. The Expert Consensus Guideline Series: treatment of bipolar disorder 2004. Postgrad Med Special Report. 2004:1-120.
20. Goldberg JF, Ghaemi SN. Benefits and limitations of antidepressants and traditional mood stabilizers for treatment of bipolar depression. Bipolar Disord. 2005;7(suppl 5):3-12.
21. Sidor MM, MacQueen GM. Antidepressants for the acute treatment of bipolar depression: a systematic review and meta-analysis. J Clin Psychiatry. 2011;72(2):156-167.
22. MacQueen GM, Trevor Young L, Marriott M, et al. Previous mood state predicts response and switch rates in patients with bipolar depression. Acta Psychiatr Scand. 2002;105(6):414-418.
23. Solomon DA, Leon AC, Coryell WH, et al. Longitudinal course of bipolar I disorder: duration of mood episodes. Arch Gen Psychiatry. 2010;67(4):339-347.
24. Tohen M, Chengappa KN, Suppes T, et al. Relapse prevention in bipolar I disorder: 18-month comparison of olanzapine plus mood stabiliser v. mood stabiliser alone. Br J Psychiatry. 2004;184:337-345.
25. Suppes T, Vieta E, Liu S, et al. Maintenance treatment for patients with bipolar I disorder: results from a North American study of quetiapine in combination with lithium or divalproex (trial 127). Am J Psychiatry. 2009;166(4):476-488.
Striving for Optimal Care
Hospitalists have a professional obligation to provide the highest-quality care for patients, and increasingly, hospitalists lead programs to improve quality, value, and patient experience.[1, 2, 3]
The federal government introduced the hospital Value-Based Purchasing (VBP) program in 2012, initially with 1% of Medicare hospital payments tied to quality indicators. This percentage continues to grow, and the VBP program has expanded to include metrics related to quality, safety, cost-effectiveness, and patient satisfaction.[4] Hospitals now face significant financial penalties if they do not achieve these benchmarks; thus, remaining up-to-date with the literature and the most promising interventions in these arenas is vital for hospitalists.
The goal of this update is to summarize and critique recently published research that has the greatest potential to impact clinical practice in quality, value, and patient experience in hospital medicine. We reviewed articles published between January 2014 and February 2015. To identify articles, we hand‐searched leading journals, continuing medical education collaborative journal reviews (including New England Journal of Medicine Journal Watch and the American College of Physicians Journal Club), the Agency for Healthcare Research and Quality's Patient Safety network, and PubMed. We evaluated articles based on their scientific rigor (peer review, study methodology, site number, and sample size) and applicability to hospital medicine. In this review, we summarize 9 articles that were felt by the authors to have the highest potential for impact on the clinical practice of hospital medicine, as directly related to quality, value, or patient experience. We present each topic with a current quality question that the accompanying article(s) will help address. We summarize each article and its findings and note cautions and implications for practice. The selected articles cover aspects related to patient safety, readmissions, patient satisfaction, and resource utilization, with each of these topics related to specific metrics included in VBP. We presented this update at the 2015 Society of Hospital Medicine national meeting.
IS THERE ANYTHING WE CAN DO TO MAKE HANDOFFS SAFER?
Starmer AJ, Spector ND, Srivastava R, et al. Changes in medical errors after implementation of a handoff program. N Engl J Med. 2014;371(19):1803-1812.
Background
With recent changes in resident duty hours and staffing models, the number of clinical handoffs during a patient's hospital stay has been increasing.[5] The omission of critical information and the transfer of erroneous information during handoffs are common and contribute to preventable medical errors.[6]
Findings
This prospective intervention study of a resident handoff program in 9 hospitals sought to improve communication between healthcare providers and to decrease medical errors. The I-PASS mnemonic, which stands for illness severity, patient summary, action list, situation awareness and contingency planning, and synthesis by receiver, was introduced to standardize oral and written handoffs. The program also included a 2-hour workshop, a 1-hour role-playing and simulation session, a computer module, a faculty development program, direct observation tools, and a culture change campaign. Medical errors decreased by 23% following the intervention, compared to the preintervention baseline (24.5 vs 18.8 per 100 admissions, P < 0.001), and the rate of preventable adverse events dropped by 30% (4.7 vs 3.3 events per 100 admissions, P < 0.001), whereas nonpreventable adverse events did not change. Process measures of handoff quality uniformly improved with the intervention. The duration of oral handoffs was approximately 2.5 minutes per patient both before and during the intervention period.
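The percent reductions quoted above follow directly from the reported pre- and postintervention rates. A minimal arithmetic sketch (illustrative only, not part of the study analysis):

```python
# Relative reduction implied by the reported I-PASS event rates
# (rates are per 100 admissions, taken from the summary above).

def relative_reduction(before: float, after: float) -> float:
    """Percent reduction of the post-intervention rate relative to baseline."""
    return (before - after) / before * 100

# Medical errors: 24.5 -> 18.8 per 100 admissions
errors = relative_reduction(24.5, 18.8)

# Preventable adverse events: 4.7 -> 3.3 per 100 admissions
adverse = relative_reduction(4.7, 3.3)

print(f"medical errors: {errors:.0f}% reduction")              # prints 23%
print(f"preventable adverse events: {adverse:.0f}% reduction")  # prints 30%
```

Both figures round to the 23% and 30% reductions reported in the trial.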
Cautions
Not all of the sites in the study saw significant reductions in medical errors; 3 of the programs did not have significantly improved medical error rates following implementation of the I‐PASS handoff bundle. The study design was not a randomized controlled trial, and thus the pre‐ versus postimplementation analyses cannot draw definitive causal links between the intervention and the observed improvements in safety outcomes. Furthermore, this study was done with pediatric residents, and one cannot assume that the results will translate to practicing hospitalists, who may not benefit as much from a scripted sign‐out.
Implications
A comprehensive handoff program that included the I-PASS mnemonic along with extensive training, faculty development, and a culture-change campaign was associated with impressive improvements in patient safety outcomes, without negatively affecting workflow.
WHAT ARE THE COMMON FEATURES OF INTERVENTIONS THAT HAVE SUCCESSFULLY REDUCED READMISSIONS?
Leppin AL, Gionfriddo MR, Kessler M, et al. Preventing 30-day hospital readmissions: a systematic review and meta-analysis of randomized trials. JAMA Intern Med. 2014;174(7):1095-1107.
Background
Hospital readmissions are common, costly, and potentially represent a failure to adequately prepare patients for hospital discharge, but efforts to prevent 30-day readmissions have had mixed results.[7] The investigators in this study offer a novel framework, the cumulative complexity model, as a way to conceptualize postdischarge outcomes such as readmission. The model depicts the balance between the patient's workload of managing their illness, including the demands of monitoring treatment and self-care, and the patient's capacity to handle that work: functionality, financial/social resources, literacy, and empowerment. Workload-capacity imbalances (when workload outstrips capacity) may lead to progressively increasing illness and complexity, which contribute to poor patient outcomes such as readmission. Decreasing a patient's workload or increasing their capacity may therefore be effective in reducing readmissions.
Findings
Investigators sought to identify factors associated with successful interventions to reduce 30‐day readmissions, including how the interventions fit into the cumulative complexity model. After performing a comprehensive search of randomized trials of interventions to reduce readmissions, the investigators identified 42 randomized trials with the primary outcome of 30‐day readmission rates. In addition to reviewing intervention characteristics, blinded raters scored interventions based on their effects on reducing or increasing patient workload and reducing or increasing patient capacity for self‐care. Interventions that had several components (eg, pharmacy education, postdischarge phone calls, visiting nurses, health coaches, close primary care follow‐up) were more likely to be successful (1.4 times as likely; P = 0.001), as were interventions that involved 2 or more individuals (1.3 times as likely; P = 0.05). Interventions that were published prior to 2002 were 1.6 times more likely to have reduced readmissions (P = 0.01). When applied to the cumulative complexity model, interventions that sought to augment patient capacity for self‐care were 1.3 times as likely to be successful (P = 0.04), whereas no relationship was found between an intervention's effect on patient workload and readmission.
Cautions
The authors evaluated each intervention based on the degree to which it was likely to affect patient workload and patient capacity. Because a multifaceted intervention may have had components that increased patient workload (eg, more self-monitoring, more appointments) as well as components that decreased it (home visits, visiting nurses), the true effect of patient workload on readmissions may not have been optimally analyzed in this study. Additionally, this element of the study relied on a value judgment original to this work. Interventions that are burdensome to some patients may be beneficial to those with the capacity and resources to access the care.
Implications
The body of studies reviewed suggests that interventions to reduce 30-day readmissions are, on the whole, successful. The findings are in keeping with past studies demonstrating that the more successful interventions are resource-intensive and multifaceted. Finding successful interventions that are also cost-effective may be challenging. This article adds the cumulative complexity framework to what we already know about readmissions, highlighting patient capacity to manage the burden of illness as a new factor for success. Efforts to deliver patient-centered education, explore barriers to adherence, and provide health coaching may be more successful than interventions that unwittingly add to the burden of disease treatment (multiple follow-up appointments, complex medication schedules, and posthospital surveys and patient self-assessments).
DOES PATIENT ACTIVATION CORRELATE WITH DECREASED RESOURCE USE OR READMISSIONS?
Mitchell SE, Gardiner PM, Sadikova E, et al. Patient activation and 30-day post-discharge hospital utilization. J Gen Intern Med. 2014;29(2):349-355.
Background
Patient activation is widely recognized as the knowledge, skills, and confidence a person has in managing their own health or healthcare. Higher patient activation has been associated with improved health outcomes, but the relationship between patient activation and readmission to the hospital within 30 days is unknown.[8]
Findings
Using data from Project RED-LIT (Re-Engineered Discharge for patients with low health literacy), a randomized controlled trial conducted at an urban safety-net hospital, investigators examined the relationship between all unplanned utilization of hospital services within 30 days of discharge and patient activation, as measured by an abbreviated 8-item version of the validated Patient Activation Measure (PAM). The PAM measures activation through agreement with statements about a patient's sense of responsibility for his or her own health, confidence in seeking care and following through with medical treatments, and confidence in managing new problems. The 695 participants were divided into quartiles based on their PAM score, and the investigators examined the rate of unplanned utilization events in each group. After adjusting for potential confounders such as gender, age, Charlson Comorbidity Index, insurance, marital status, and education, a significant association remained between PAM score and 30-day hospital reutilization. Compared with those who scored in the highest quartile of activation, those in the lowest quartile had 1.75 times the rate of 30-day reutilization (P < 0.001). Those in the second and third highest quartiles had 1.3 times (P = 0.03) and 1.5 times (P < 0.001) the rate of reutilization, respectively, demonstrating a dose-response relationship between lower activation and greater reutilization.
Cautions
It is as yet unclear how best to apply these results and whether activation is a modifiable risk factor. Can more education and coaching during the hospital stay make a patient more activated? Can close follow-up and home services make a person more confident to manage their own illness? Although early identification of patients with low activation using the PAM is being done at many hospitals, no study has shown that targeting these patients can reduce readmissions.
Implications
A low level of patient activation appears to be a risk factor for unplanned hospital utilization within 30 days of discharge. Given the increasing financial penalties, many hospitals across the country are using the PAM to determine how much support and which services to provide after discharge. Identifying these patients early in their hospitalization could allow providers to spend more time and attention on preparing them to manage their own illness after discharge. As above, however, the effects of such targeting on readmissions are as yet unclear.
IS THERE A RELATIONSHIP BETWEEN PATIENT SATISFACTION AND UNDERSTANDING OF THE PLAN OF CARE?
Kebede S, Shihab HM, Berger ZD, et al. Patients' understanding of their hospitalizations and association with satisfaction. JAMA Intern Med. 2014;174(10):1698–1700.
Background
Effective patient‐physician communication is associated with improved patient satisfaction, care quality, and clinical outcomes.[9] Whether a shared understanding of the plan of care between patients and clinicians affects satisfaction is unknown.
Findings
One hundred seventy-seven patients who had 2 or more medical conditions, 2 or more medical procedures, and 2 or more days in the hospital were interviewed on the day of discharge. Patients were questioned about their overall understanding of their hospitalization and about specific aspects of their care. They were also asked to provide objective data to measure their understanding of their hospital course by (1) listing their medical diagnoses, (2) identifying indications for medications on discharge paperwork, and (3) listing tests or procedures they underwent from a standard list. Patients were then asked to rate their satisfaction with their hospitalization. Patients' self-reported understanding was an average of 4.0 (very good) on a 5-point scale. Their measured understanding scores for medical diagnoses, indications for medications, and tests and procedures were 48.9%, 56.2%, and 59.4%, respectively. Factors associated with poor understanding of the hospital course were increasing age, less education, lower household income, black race, and longer length of stay. Patients reported a mean satisfaction of 4.0 (very satisfied). Higher self-reported understanding was associated with higher patient satisfaction, irrespective of actual understanding.
Cautions
Despite their suboptimal measured understanding of their hospital course, the average patient rated their understanding as very good. This suggests that patients are either poor judges of effective communication or have low expectations for understanding. It also calls into question the relationship between quality of communication and patient satisfaction, because despite their satisfaction, patients' actual understanding was low. There was, however, a clear and positive relationship between patients' perceived understanding and their satisfaction, suggesting that shared understanding remains integral to patient satisfaction.
Implications
Patient satisfaction appears to be tied to patients' perceived understanding of their care, but when tested, actual understanding was suboptimal. Further efforts to improve patient satisfaction should focus not only on the quality of our communication, but also on the resulting understanding of our patients.
WHAT ARE UNIVERSAL STRATEGIES TO IMPROVE SATISFACTION AND PATIENT OUTCOMES?
Detsky AS, Krumholz HM. Reducing the trauma of hospitalization. JAMA. 2014;311(21):2169–2170.
Background
Although high readmission rates are a national problem, a minority of patients treated for common conditions like pneumonia, heart failure, and chronic obstructive pulmonary disease are readmitted for the same problem.[10] This suggests that readmissions may stem not from poor disease management, but from patient vulnerability to illness in the period following hospitalization.
Findings
In this viewpoint opinion article, the authors suggest that the depersonalizing and stressful hospital atmosphere contributes to a transient vulnerability in the period following hospitalization that makes it challenging for patients to care for themselves and their illness. They offer specific strategies for changing the nature of hospital care to promote healing and decrease patient stress. The authors suggest promoting personalization by accommodating family members and allowing personal clothing and personal décor in patients' rooms. Physicians and consultants should make appointments so that patients and families know when to expect important visits. The authors also focus on the provision of rest and nourishment through reducing nighttime disruption and eliminating unnecessarily restrictive diets. They argue that the hospital is a place of stressful disruptions and surprises, which could be ameliorated by helping patients understand the members of their team and their roles, and by providing a clear schedule for the day. Healthcare providers should not enter a room unannounced, and patients should be given private rooms as much as possible. Last, the authors focus on eliminating unnecessary tests and procedures, such as blood draws, telemetry, and urine cultures, and on encouraging activity by providing opportunities for patients to gather together outside their rooms.
Cautions
If these changes seem simple, they may not be. Many involve a significant shift in our thinking about how we provide care: from a focus on disease and provider convenience to a true consideration for the health and peace of mind of our patients. Starting with small steps, such as reducing phlebotomy and nighttime vital signs checks for the most stable patients and ensuring accommodations for families, may make this long list seem less daunting.
Implications
By promoting factors that affect a patient's well-being (rest, nutrition, peace of mind), we may be discharging patients who are better equipped to manage their illness after their hospitalization.
DO HOSPITALISTS OVERTEST, AND IF SO, WHY?
Kachalia A, Berg A, Fagerlin A, et al. Overuse of testing in preoperative evaluation and syncope: a survey of hospitalists. Ann Intern Med. 2015;162(2):100–108.
Background
National efforts, such as the Choosing Wisely campaign, seek to decrease overuse of low‐value services.[11] The extent of the problem of overtesting among hospitalists and the underlying drivers for unnecessary testing in this group have not been clearly defined.
Findings
Practicing adult medicine hospitalists across the country were given a questionnaire that included clinical vignettes for common inpatient scenarios: a preoperative evaluation and a syncope workup. Respondents were randomly provided 1 of 4 versions of each vignette, which contained the same clinical information but varied by a family member's request for further testing and by disclosure of the occupation of the family member. For example, in the preoperative evaluation, the vignettes either: (1) provided no details about the patient's son; (2) identified the son as a physician; (3) mentioned the son's request for testing, but did not identify the son as a physician; or (4) identified the son as a physician who requested testing. The syncope vignette versions were structured similarly, except the family member was the patient's wife and she was an attorney. The authors collected 1020 responses from an initial pool of 1500, for a 68% response rate. Hospitalists commonly reported overuse of testing, with 52% to 65% of respondents requesting unnecessary testing in the preoperative evaluation scenario, and 82% to 85% in the syncope scenario. The majority of physicians reported that they knew the testing was not clinically indicated based on evidence or guidelines, but ordered the test out of a desire to reassure the patients or themselves.
Cautions
Responses to clinical vignettes in a survey may not represent actual practice. In addition, all hospitalists surveyed in this study were members of the Society of Hospital Medicine, and so may not be representative of all practicing hospitalists.
Implications
Overuse of testing is very common among hospitalists. Although roughly one-third of respondents incorrectly thought that testing in the given scenarios was supported by evidence or guidelines, the majority knew that testing was not clinically indicated and reported ordering tests to help reassure their patients or themselves. This suggests that evidence-based medicine approaches to overuse, such as the Choosing Wisely campaign and the emergence of appropriateness criteria, are likely necessary but insufficient to change physician practice patterns. Efforts to decrease overuse will need to engage clinicians and patients in ways that help overcome the attitude that more testing is required to provide reassurance.
DO UNREALISTIC PATIENT EXPECTATIONS ABOUT INTERVENTIONS INFLUENCE DECISION MAKING AND CONTRIBUTE TO OVERUSE?
Hoffmann TC, Del Mar C. Patient expectations of the benefits and harms of treatments, screening, and tests: a systematic review. JAMA Intern Med. 2015;175(2):274–286.
Background
Patient expectations have been implicated as a contributor to overuse of medical interventions. Studies that have measured patients' understanding of the potential benefits and harms of medical treatments and tests have been scattered across the literature.
Findings
This systematic review aggregated all studies that have quantitatively assessed patients' expectations of the benefits and/or harms of any treatment or test. Of more than 15,000 records screened, only 36 articles met the inclusion criteria of describing a study in which participants were asked to provide a quantitative estimate of the expected benefits and/or harms of a treatment, test, or screen. Fourteen of the studies (40%) focused on screening, 15 (43%) on treatment, 3 (9%) on a test, and 3 (9%) on both treatment and screening. Topics included cancer, medications, surgery, cardiovascular disease, and fetal‐maternal medicine. The majority of patients overestimated intervention benefit and underestimated harm, regardless of whether the intervention was a test or a treatment. For example, more than half of participants overestimated benefit for 22 of the 34 outcomes (65%) for which overestimation data were provided, and a majority of participants underestimated harm for 10 of the 15 outcomes (67%) with underestimation data available.
Cautions
This systematic review included a limited number of studies with varying levels of quality and substantial heterogeneity, making it difficult to reach clear aggregate conclusions.
Implications
Patients are often overly optimistic about medical interventions and downplay their potential risks, making it more difficult to effectively discourage overuse. Clinicians should clearly understand, and communicate to patients and the public at large, realistic expectations for the potential benefits and risks of screening, testing, and medical treatments.
HOW BIG OF A PROBLEM IS ANTIBIOTIC OVERUSE IN HOSPITALS AND CAN WE DO BETTER?
Fridkin S, Baggs J, Fagan R, et al. Vital signs: improving antibiotic use among hospitalized patients. MMWR Morb Mortal Wkly Rep. 2014;63(9):194–200.
Background
Antibiotics are life‐saving therapies, but when used in inappropriate scenarios they can pose many risks.
Findings
This large national database study used the MarketScan Hospital Drug Database and the Centers for Disease Control and Prevention's (CDC) Emerging Infections Program data to explore antibiotic prescribing in hospitalized patients. More than half of all hospitalized patients (55.7%) received antibiotics during their stay. Half of all treatment antibiotics were prescribed for lower respiratory infections, urinary tract infections, or presumed gram-positive infections. Wide variation was seen in antibiotic usage across hospital wards. Objective criteria for potential improvement in antimicrobial use were developed and applied at a subset of 36 hospitals. Antibiotic prescribing could have been improved in 37.2% of the most common prescription scenarios reviewed, including patients receiving vancomycin or those being treated for a urinary tract infection. The impact of reducing inpatient antibiotic exposure on the incidence of Clostridium difficile colitis was modeled using data from 2 hospitals, revealing that decreasing hospitalized patients' exposure to broad-spectrum antibiotics by 30% would lead to a 26% reduction in C difficile infections (interquartile range, 15%–38%).
Cautions
Some of the estimates in this study are based on a convenience sample of claims and hospital-based data, and thus may not be representative, particularly when extrapolating to all US hospitals.
Implications
Antibiotic overuse is a rampant problem in hospitals, with many severe downstream effects such as C difficile infections and antimicrobial resistance. All hospital units should have an antibiotic stewardship program and should monitor antibiotic usage.
Lee TC, Frenette C, Jayaraman D, Green L, Pilote L. Antibiotic self-stewardship: trainee-led structured antibiotic time-outs to improve antimicrobial use. Ann Intern Med. 2014;161(10 suppl):S53–S58.
Background
The CDC and other groups have called for stewardship programs to address antibiotic overuse.[12] Few interventions have been shown to successfully engage medical trainees in efforts to improve their own antibiotic prescribing practices.
Findings
An antibiotic self-stewardship program was developed and led by internal medicine residents at Montreal General Hospital. The intervention included a monthly resident education lecture on antimicrobial stewardship and twice-weekly time-out audits using a structured electronic checklist. Adherence with auditing was 80%. Total costs for antibiotics decreased from $149,743 CAD to $80,319 CAD, mostly due to an observed reduction in carbapenems. Moxifloxacin use decreased by 1.9 defined daily doses per 1000 patient-days per month (P = 0.048). Rates of Clostridium difficile colitis declined from 24.2 to 19.6 per 10,000 patient-days, although this trend did not reach statistical significance (incidence rate ratio, 0.8 [confidence interval, 0.5–1.3]).
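As a rough check, the reported incidence rate ratio can be recomputed directly from the pre- and postintervention rates. This is a back-of-the-envelope sketch only; the published estimate and its confidence interval come from the study's time-series analysis.

```python
# C. difficile rates per 10,000 patient-days, as reported in the study
pre_rate = 24.2   # before the antibiotic time-out program
post_rate = 19.6  # after

# Crude incidence rate ratio: postintervention rate / preintervention rate
irr = post_rate / pre_rate
print(f"crude IRR = {irr:.1f}")  # consistent with the reported 0.8
```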
Cautions
Although the use of some broader spectrum antibiotics decreased, there was no measurable change in overall antibiotic use, suggesting that physicians may have narrowed antibiotics but did not often completely discontinue them. The time‐series analyses in this study cannot provide causal conclusions between the intervention and outcomes. In fact, carbapenem usage appears to have significantly decreased prior to the implementation of the program, for unclear reasons. The feasibility of this educational intervention outside of a residency program is unclear.
Implications
A combination of education, oversight, and frontline clinician engagement in structured time-outs may be effective, at least in narrowing antibiotic usage. The structured audit checklist developed by these authors is freely available in the supplementary materials of the Annals of Internal Medicine article.
Disclosures: Dr. Moriates has received grant funding from the ABIM Foundation, and royalties from McGraw‐Hill for the textbook Understanding Value‐Based Healthcare. The authors report no conflicts of interest.
1. The role of the hospitalist in quality improvement: systems for improving the care of patients with acute coronary syndrome. J Hosp Med. 2010;5(suppl 4):S1–S7.
2. Impact of hospitalist communication-skills training on patient-satisfaction scores. J Hosp Med. 2013;8(6):315–320.
3. Development of a hospital-based program focused on improving healthcare value. J Hosp Med. 2014;9(10):671–677.
4. Value-driven health care: implications for hospitals and hospitalists. J Hosp Med. 2009;4(8):507–511.
5. Effect of the 2011 vs 2003 duty hour regulation-compliant models on sleep duration, trainee education, and continuity of patient care among internal medicine house staff: a randomized trial. JAMA Intern Med. 2013;173(8):649–655.
6. Does housestaff discontinuity of care increase the risk for preventable adverse events? Ann Intern Med. 1994;121(11):866–872.
7. Interventions to reduce 30-day rehospitalization: a systematic review. Ann Intern Med. 2011;155:520–528.
8. Participatory decision making, patient activation, medication adherence, and intermediate clinical outcomes in type 2 diabetes: a STARNet study. Ann Fam Med. 2010;8(5):410–417.
9. Effective physician-patient communication and health outcomes: a review. CMAJ. 2007;152(9):1423–1433.
10. Diagnoses and timing of 30-day readmissions after hospitalization for heart failure, acute myocardial infarction, or pneumonia. JAMA. 2013;309(4):355–363.
11. Choosing wisely in adult hospital medicine: five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):486–492.
12. Vital signs: improving antibiotic use among hospitalized patients. MMWR Morb Mortal Wkly Rep. 2014;63(9):194–200.
Hospitalists have a professional obligation to provide the highest quality care for patients, and increasingly, hospitalists lead programs to improve quality, value, and patient experience.[1, 2, 3]
The federal government introduced the hospital Value‐Based Purchasing (VBP) program in 2012, initially with 1% of Medicare hospital payments tied to quality indicators. This percentage will continue to grow and the VBP program has expanded to include metrics related to quality, safety, cost‐effectiveness, and patient satisfaction.[4] Hospitals now face significant financial penalties if they do not achieve these benchmarks; thus, remaining up‐to‐date with the literature and the most promising interventions in these arenas is vital for hospitalists.
The goal of this update is to summarize and critique recently published research that has the greatest potential to impact clinical practice in quality, value, and patient experience in hospital medicine. We reviewed articles published between January 2014 and February 2015. To identify articles, we hand-searched leading journals, continuing medical education collaborative journal reviews (including New England Journal of Medicine Journal Watch and the American College of Physicians Journal Club), the Agency for Healthcare Research and Quality's Patient Safety network, and PubMed. We evaluated articles based on their scientific rigor (peer review, study methodology, site number, and sample size) and applicability to hospital medicine. In this review, we summarize 9 articles that the authors felt had the highest potential for impact on the clinical practice of hospital medicine, as directly related to quality, value, or patient experience. We present each topic with a current quality question that the accompanying article(s) will help address. We summarize each article and its findings and note cautions and implications for practice. The selected articles cover aspects related to patient safety, readmissions, patient satisfaction, and resource utilization, with each of these topics related to specific metrics included in VBP. We presented this update at the 2015 Society of Hospital Medicine national meeting.
IS THERE ANYTHING WE CAN DO TO MAKE HANDOFFS SAFER?
Starmer AJ, Spector ND, Srivastava R, et al. Changes in medical errors after implementation of a handoff program. N Engl J Med. 2014;371(19):1803–1812.
Background
With recent changes in resident duty hours and staffing models, the number of clinical handoffs during a patient's hospital stay has been increasing.[5] The omission of critical information and the transfer of erroneous information during handoffs is common, which contributes to preventable medical errors.[6]
Findings
This prospective intervention study of a resident handoff program in 9 hospitals sought to improve communication between healthcare providers and to decrease medical errors. The I‐PASS mnemonic, which stands for illness severity, patient summary, action list, situation awareness, and synthesis by receiver, was introduced to standardize oral and written handoffs. The program also included a 2‐hour workshop, a 1‐hour role‐playing and simulation session, a computer module, a faculty development program, direct observation tools, and a culture change campaign. Medical errors decreased by 23% following the intervention, compared to the preintervention baseline (24.5 vs 18.8 per 100 admissions, P < 0.001), and the rate of preventable adverse events dropped by 30% (4.7 vs 3.3 events per 100 admissions, P < 0.001), whereas nonpreventable adverse events did not change. Process measures of handoff quality uniformly improved with the intervention. The duration of oral handoffs was approximately 2.5 minutes per patient both before and during the intervention period.
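The percentage reductions quoted above follow directly from the per-100-admission rates; as a quick arithmetic check (not part of the study's analysis):

```python
def relative_reduction(before: float, after: float) -> float:
    """Fractional decrease relative to the preintervention baseline."""
    return (before - after) / before

# Rates per 100 admissions, preintervention vs intervention period
errors = relative_reduction(24.5, 18.8)  # medical errors
events = relative_reduction(4.7, 3.3)    # preventable adverse events
print(f"errors: {errors:.0%}, adverse events: {events:.0%}")  # 23% and 30%
```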
Cautions
Not all of the sites in the study saw significant reductions in medical errors; 3 of the programs did not have significantly improved medical error rates following implementation of the I‐PASS handoff bundle. The study design was not a randomized controlled trial, and thus the pre‐ versus postimplementation analyses cannot draw definitive causal links between the intervention and the observed improvements in safety outcomes. Furthermore, this study was done with pediatric residents, and one cannot assume that the results will translate to practicing hospitalists, who may not benefit as much from a scripted sign‐out.
Implications
A comprehensive handoff program that included the I-PASS mnemonic along with extensive training, faculty development, and a culture-change campaign was associated with impressive improvements in patient safety outcomes, without negatively affecting workflow.
WHAT ARE THE COMMON FEATURES OF INTERVENTIONS THAT HAVE SUCCESSFULLY REDUCED READMISSIONS?
Leppin AL, Gionfriddo MR, Kessler M, et al. Preventing 30-day hospital readmissions: a systematic review and meta-analysis of randomized trials. JAMA Intern Med. 2014;174(7):1095–1107.
Background
Hospital readmissions are common, costly, and potentially represent a failure to adequately prepare patients for hospital discharge, but efforts to prevent 30-day readmissions have had mixed results.[7] The investigators in this study offer a novel framework, the cumulative complexity model, as a way to conceptualize postdischarge outcomes such as readmission. The model depicts the balance between the patient's workload of managing their illness, including the demands of monitoring treatment and self-care, and the patient's capacity to handle that work: functionality, financial/social resources, literacy, and empowerment. Workload-capacity imbalances (when workload outstrips capacity) may lead to progressively increasing illness and complexity, which contribute to poor patient outcomes like readmissions. Decreasing a patient's workload or increasing their capacity may be effective in reducing readmissions.
Findings
Investigators sought to identify factors associated with successful interventions to reduce 30‐day readmissions, including how the interventions fit into the cumulative complexity model. After performing a comprehensive search of randomized trials of interventions to reduce readmissions, the investigators identified 42 randomized trials with the primary outcome of 30‐day readmission rates. In addition to reviewing intervention characteristics, blinded raters scored interventions based on their effects on reducing or increasing patient workload and reducing or increasing patient capacity for self‐care. Interventions that had several components (eg, pharmacy education, postdischarge phone calls, visiting nurses, health coaches, close primary care follow‐up) were more likely to be successful (1.4 times as likely; P = 0.001), as were interventions that involved 2 or more individuals (1.3 times as likely; P = 0.05). Interventions that were published prior to 2002 were 1.6 times more likely to have reduced readmissions (P = 0.01). When applied to the cumulative complexity model, interventions that sought to augment patient capacity for self‐care were 1.3 times as likely to be successful (P = 0.04), whereas no relationship was found between an intervention's effect on patient workload and readmission.
Cautions
The authors evaluated each intervention based on the degree to which it was likely to affect patient workload and patient capacity. Because a multifaceted intervention may have had components that increased patient workload (eg, more self-monitoring, appointments) and components that decreased it (home visits, visiting nurses), the true effect of patient workload on readmissions may not have been optimally analyzed in this study. Additionally, this element of the study relied on a value judgment original to this work. Interventions that are burdensome to some may be beneficial to those with the capacity and resources to access the care.
Cautions
If these changes seem simple, they may not be. Many involve a significant shift in our thinking on how we provide carefrom a focus on disease and provider convenience to a true consideration for the health and peace of mind of our patients. Starting with small steps, such as reductions in phlebotomy and nighttime vital signs checks for the most stable patients and ensuring accommodations for families, may make this long list seem less daunting.
Implications
By promoting factors that affect a patient's well beingrest, nutrition, peace of mindwe may be discharging patients who are better equipped to manage their illness after their hospitalization.
DO HOSPITALISTS OVERTEST, AND IF SO, WHY?
Kachalia A, Berg A, Fagerlin A, et al. Overuse of testing in preoperative evaluation and syncope: a survey of hospitalists. Ann Intern Med. 2015;162(2):100108.
Background
National efforts, such as the Choosing Wisely campaign, seek to decrease overuse of low‐value services.[11] The extent of the problem of overtesting among hospitalists and the underlying drivers for unnecessary testing in this group have not been clearly defined.
Findings
Practicing adult medicine hospitalists across the country were given a questionnaire that included clinical vignettes for common inpatient scenarios: a preoperative evaluation and a syncope workup. Respondents were randomly provided 1 of 4 versions of each vignette, which contained the same clinical information but varied by a family member's request for further testing and by disclosure of the occupation of the family member. For example, in the preoperative evaluation, the vignettes either: (1) provided no details about the patient's son; (2) identified the son as a physician; (3) mentioned the son's request for testing, but did not identify the son as a physician; or (4) identified the son as a physician who requested testing. The syncope vignette versions were structured similarly, except the family member was the patient's wife and she was an attorney. The authors collected 1020 responses from an initial pool of 1500, for a decent 68% response rate. Hospitalists commonly reported overuse of testing, with 52% to 65% of respondents requesting unnecessary testing in the preoperative evaluation scenario, and 82% to 85% in the syncope scenario. The majority of physicians reported that they knew the testing was not clinically indicated based on evidence or guidelines, but were ordering the test due to a desire to reassure the patients or themselves.
Cautions
Responses to clinical vignettes in a survey may not represent actually practices. In addition, all hospitalists surveyed in this study were members of the Society of Hospital Medicine, so may not accurately exemplify all practicing hospitalists.
Implications
Overuse of testing is very common among hospitalists. Although roughly one‐third of respondents incorrectly thought that testing in the given scenarios was supported by the evidence or guidelines, the majority knew that testing was not clinically indicated and reported ordering tests to help reassure their patients or themselves. This suggests evidence‐based medicine approaches to overuse, such as the Choosing Wisely campaign and the emergence of appropriateness criteria, are likely necessary but insufficient to change physician practice patterns. Efforts to decrease overuse will need to engage clinicians and patients in ways that help overcome the attitude that more testing is required to provide reassurance.
DO UNREALISTIC PATIENT EXPECTATIONS ABOUT INTERVENTIONS INFLUENCE DECISION MAKING AND CONTRIBUTE TO OVERUSE?
Hoffmann TC, Del Mar C. Patient expectations of the benefits and harms of treatments, screening, and tests: a systematic review. JAMA Intern Med. 2015;175(2):274286.
Background
Patient expectations have been implicated as a contributor to overuse of medical interventions. Studies that have measured patients' understanding of the potential benefits and harms of medical treatments and tests have been scattered across the literature.
Findings
This systematic review aggregated all studies that have quantitatively assessed patients' expectations of the benefits and/or harms of any treatment or test. Of more than 15,000 records screened, only 36 articles met the inclusion criteria of describing a study in which participants were asked to provide a quantitative estimate of the expected benefits and/or harms of a treatment, test, or screen. Fourteen of the studies (40%) focused on screening, 15 (43%) on treatment, 3 (9%) on a test, and 3 (9%) on both treatment and screening. Topics included cancer, medications, surgery, cardiovascular disease, and fetal‐maternal medicine. The majority of patients overestimated intervention benefit and underestimated harm, regardless of whether the intervention was a test or a treatment. For example, more than half of participants overestimated benefit for 22 of the 34 outcomes (65%) for which overestimation data were provided, and a majority of participants underestimated harm for 10 of the 15 outcomes (67%) with underestimation data available.
Cautions
This systematic review included a limited number of studies, with varying levels of quality and a lot of heterogeneity, making it difficult to reach clear aggregate conclusions.
Implications
Patients are often overly optimistic about medical interventions and they downplay potential risks, making it more difficult to effectively discourage overuse. Clinicians should clearly understand and communicate realistic expectations for the potential benefits and risks of screening, testing, and medical treatments with patients and the public at large.
HOW BIG OF A PROBLEM IS ANTIBIOTIC OVERUSE IN HOSPITALS AND CAN WE DO BETTER?
Fridkin S, Baggs J, Fagan R, et al. Vital signs: improving antibiotic use among hospitalized patients. MMWR Morb Mortal Wkly Rep. 2014;63(9):194–200.
Background
Antibiotics are life‐saving therapies, but when used inappropriately they expose patients to unnecessary risks.
Findings
This large national database study used the MarketScan Hospital Drug Database and the Centers for Disease Control and Prevention's (CDC) Emerging Infections Program data to explore antibiotic prescribing in hospitalized patients. More than half of all hospitalized patients (55.7%) received antibiotics during their stay. Half of all treatment antibiotics were prescribed for lower respiratory infections, urinary tract infections, or presumed gram‐positive infections. Wide variation in antibiotic usage was seen across hospital wards. Objective criteria for potential improvement in antimicrobial use were developed and applied at a subset of 36 hospitals. Antibiotic prescribing could have been improved in 37.2% of the most common prescription scenarios reviewed, including patients receiving vancomycin or those being treated for a urinary tract infection. The impact of reducing inpatient antibiotic exposure on the incidence of Clostridium difficile colitis was modeled using data from 2 hospitals, revealing that decreasing hospitalized patients' exposure to broad‐spectrum antibiotics by 30% would lead to a 26% reduction in C difficile infections (interquartile range = 15%–38%).
Cautions
Some of the estimates in this study are based on a convenience sample of claims and hospital‐based data, and thus may not be an accurate representation, particularly when extrapolated to all US hospitals.
Implications
Antibiotic overuse is a rampant problem in hospitals, with many severe downstream effects such as C difficile infections and antimicrobial resistance. All hospital units should have an antibiotic stewardship program and should monitor antibiotic usage.
Lee TC, Frenette C, Jayaraman D, Green L, Pilote L. Antibiotic self‐stewardship: trainee‐led structured antibiotic time‐outs to improve antimicrobial use. Ann Intern Med. 2014;161(10 suppl):S53–S58.
Background
The CDC and other groups have called for stewardship programs to address antibiotic overuse.[12] Few interventions have been shown to successfully engage medical trainees in efforts to improve their own antibiotic prescribing practices.
Findings
An antibiotic self‐stewardship program was developed and led by internal medicine residents at Montreal General Hospital. The intervention included a monthly resident education lecture on antimicrobial stewardship and twice‐weekly time‐out audits using a structured electronic checklist. Adherence with auditing was 80%. Total antibiotic costs decreased from $149,743 CAD to $80,319 CAD, mostly due to an observed reduction in carbapenems. Moxifloxacin use decreased by 1.9 defined daily doses per 1000 patient‐days per month (P = 0.048). Rates of C difficile colitis declined from 24.2 to 19.6 per 10,000 patient‐days, although this trend did not reach statistical significance (incidence rate ratio, 0.8 [confidence interval, 0.5‐1.3]).
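The headline figures above follow from simple arithmetic; the sketch below reproduces them. The helper functions are illustrative only (not part of the study), and the input numbers are taken from the summary above.

```python
# Back-of-the-envelope check of two figures from the Lee et al. summary.

def incidence_rate_ratio(rate_after: float, rate_before: float) -> float:
    """Ratio of the post-intervention rate to the baseline rate."""
    return rate_after / rate_before

def percent_change(before: float, after: float) -> float:
    """Signed percent change from a baseline value."""
    return (after - before) / before * 100

# C difficile colitis: 24.2 -> 19.6 per 10,000 patient-days
irr = incidence_rate_ratio(19.6, 24.2)       # ~0.81, reported as 0.8
# Total antibiotic costs: $149,743 -> $80,319 CAD
cost_drop = percent_change(149_743, 80_319)  # roughly a 46% decrease
```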
Cautions
Although the use of some broader‐spectrum antibiotics decreased, there was no measurable change in overall antibiotic use, suggesting that physicians may have narrowed antibiotics but did not often completely discontinue them. The time‐series analyses in this study cannot establish a causal link between the intervention and the outcomes. In fact, carbapenem usage appears to have decreased significantly prior to the implementation of the program, for unclear reasons. The feasibility of this educational intervention outside of a residency program is unclear.
Implications
A combination of education, oversight, and frontline clinician engagement in structured time‐outs may be effective, at least in narrowing antibiotic usage. The structured audit checklist developed by these authors is available for free in the supplementary materials of the Annals of Internal Medicine article.
Disclosures: Dr. Moriates has received grant funding from the ABIM Foundation, and royalties from McGraw‐Hill for the textbook Understanding Value‐Based Healthcare. The authors report no conflicts of interest.
Hospitalists have a professional obligation to provide the highest quality care for patients, and increasingly hospitalists lead programs to improve quality, value, and patient experience.[1, 2, 3]
The federal government introduced the hospital Value‐Based Purchasing (VBP) program in 2012, initially with 1% of Medicare hospital payments tied to quality indicators. This percentage will continue to grow and the VBP program has expanded to include metrics related to quality, safety, cost‐effectiveness, and patient satisfaction.[4] Hospitals now face significant financial penalties if they do not achieve these benchmarks; thus, remaining up‐to‐date with the literature and the most promising interventions in these arenas is vital for hospitalists.
The goal of this update is to summarize and critique recently published research that has the greatest potential to impact clinical practice in quality, value, and patient experience in hospital medicine. We reviewed articles published between January 2014 and February 2015. To identify articles, we hand‐searched leading journals, continuing medical education collaborative journal reviews (including New England Journal of Medicine Journal Watch and the American College of Physicians Journal Club), the Agency for Healthcare Research and Quality's Patient Safety network, and PubMed. We evaluated articles based on their scientific rigor (peer review, study methodology, site number, and sample size) and applicability to hospital medicine. In this review, we summarize 9 articles that were felt by the authors to have the highest potential for impact on the clinical practice of hospital medicine, as directly related to quality, value, or patient experience. We present each topic with a current quality question that the accompanying article(s) will help address. We summarize each article and its findings and note cautions and implications for practice. The selected articles cover aspects related to patient safety, readmissions, patient satisfaction, and resource utilization, with each of these topics related to specific metrics included in VBP. We presented this update at the 2015 Society of Hospital Medicine national meeting.
IS THERE ANYTHING WE CAN DO TO MAKE HANDOFFS SAFER?
Starmer AJ, Spector ND, Srivastava R, et al. Changes in medical errors after implementation of a handoff program. N Engl J Med. 2014;371(19):1803–1812.
Background
With recent changes in resident duty hours and staffing models, the number of clinical handoffs during a patient's hospital stay has been increasing.[5] The omission of critical information and the transfer of erroneous information during handoffs are common, contributing to preventable medical errors.[6]
Findings
This prospective intervention study of a resident handoff program in 9 hospitals sought to improve communication between healthcare providers and to decrease medical errors. The I‐PASS mnemonic, which stands for illness severity, patient summary, action list, situation awareness, and synthesis by receiver, was introduced to standardize oral and written handoffs. The program also included a 2‐hour workshop, a 1‐hour role‐playing and simulation session, a computer module, a faculty development program, direct observation tools, and a culture change campaign. Medical errors decreased by 23% following the intervention, compared to the preintervention baseline (24.5 vs 18.8 per 100 admissions, P < 0.001), and the rate of preventable adverse events dropped by 30% (4.7 vs 3.3 events per 100 admissions, P < 0.001), whereas nonpreventable adverse events did not change. Process measures of handoff quality uniformly improved with the intervention. The duration of oral handoffs was approximately 2.5 minutes per patient both before and during the intervention period.
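The relative reductions reported above follow directly from the per‐100‐admission rates; a minimal sketch (numbers from the summary above, helper function ours):

```python
# Check of the relative reductions reported in the I-PASS study summary.

def relative_reduction(before: float, after: float) -> float:
    """Percent reduction from a baseline rate to a post-intervention rate."""
    return (before - after) / before * 100

# Medical errors: 24.5 -> 18.8 per 100 admissions (reported as 23%)
errors = relative_reduction(24.5, 18.8)
# Preventable adverse events: 4.7 -> 3.3 per 100 admissions (reported as 30%)
events = relative_reduction(4.7, 3.3)
```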
Cautions
Not all of the sites in the study saw significant reductions in medical errors; 3 of the programs did not have significantly improved medical error rates following implementation of the I‐PASS handoff bundle. The study design was not a randomized controlled trial, and thus the pre‐ versus postimplementation analyses cannot draw definitive causal links between the intervention and the observed improvements in safety outcomes. Furthermore, this study was done with pediatric residents, and one cannot assume that the results will translate to practicing hospitalists, who may not benefit as much from a scripted sign‐out.
Implications
A comprehensive handoff program that included the I‐PASS mnemonic along with extensive training, faculty development, and a culture‐change campaign was associated with impressive improvements in patient safety outcomes, without negatively affecting workflow.
WHAT ARE THE COMMON FEATURES OF INTERVENTIONS THAT HAVE SUCCESSFULLY REDUCED READMISSIONS?
Leppin AL, Gionfriddo MR, Kessler M, et al. Preventing 30‐day hospital readmissions: a systematic review and meta‐analysis of randomized trials. JAMA Intern Med. 2014;174(7):1095–1107.
Background
Hospital readmissions are common, costly, and potentially represent a failure to adequately prepare patients for hospital discharge, but efforts to prevent 30‐day readmissions have had mixed results.[7] The investigators in this study offer a novel framework, the cumulative complexity model, as a way to conceptualize postdischarge outcomes such as readmission. The model depicts the balance between the patient's workload of managing their illness, including the demands of monitoring treatment and self‐care, and the patient's capacity to handle that work: functionality, financial and social resources, literacy, and empowerment. Workload‐capacity imbalances (when workload outstrips capacity) may lead to progressively increasing illness and complexity, which contribute to poor patient outcomes such as readmission. Decreasing a patient's workload or increasing their capacity may therefore be effective in reducing readmissions.
Findings
Investigators sought to identify factors associated with successful interventions to reduce 30‐day readmissions, including how the interventions fit into the cumulative complexity model. After performing a comprehensive search of randomized trials of interventions to reduce readmissions, the investigators identified 42 randomized trials with the primary outcome of 30‐day readmission rates. In addition to reviewing intervention characteristics, blinded raters scored interventions based on their effects on reducing or increasing patient workload and reducing or increasing patient capacity for self‐care. Interventions that had several components (eg, pharmacy education, postdischarge phone calls, visiting nurses, health coaches, close primary care follow‐up) were more likely to be successful (1.4 times as likely; P = 0.001), as were interventions that involved 2 or more individuals (1.3 times as likely; P = 0.05). Interventions that were published prior to 2002 were 1.6 times more likely to have reduced readmissions (P = 0.01). When applied to the cumulative complexity model, interventions that sought to augment patient capacity for self‐care were 1.3 times as likely to be successful (P = 0.04), whereas no relationship was found between an intervention's effect on patient workload and readmission.
Cautions
The authors evaluated each intervention based on the degree to which it was likely to affect patient workload and patient capacity. Because a multifaceted intervention may have had components that increased patient workload (eg, more self‐monitoring, appointments) and components that decreased it (home visits, visiting nurses), the true effect of patient workload on readmissions may not have been optimally analyzed in this study. Additionally, this element of the study relied on a value judgment original to this work. Interventions that are burdensome to some may be beneficial to those with the capacity and resources to access the care.
Implications
The body of studies reviewed suggests that interventions to reduce 30‐day readmissions are, on the whole, successful. The findings are in keeping with past studies demonstrating that resource‐intensive, multifaceted interventions are more likely to succeed. Finding successful interventions that are also cost‐effective may be challenging. This article adds the cumulative complexity framework to what we already know about readmissions, highlighting patient capacity to manage the burden of their illness as a new factor for success. Efforts to deliver patient‐centered education, explore barriers to adherence, and provide health coaching may be more successful than interventions that unwittingly add to the burden of disease treatment (multiple follow‐up appointments, complex medication schedules, and posthospital surveys and patient self‐assessments).
DOES PATIENT ACTIVATION CORRELATE WITH DECREASED RESOURCE USE OR READMISSIONS?
Mitchell SE, Gardiner PM, Sadikova E, et al. Patient activation and 30‐day post‐discharge hospital utilization. J Gen Intern Med. 2014;29(2):349–355.
Background
Patient activation is widely recognized as the knowledge, skills, and confidence a person has in managing their own health or healthcare. Higher patient activation has been associated with improved health outcomes, but the relationship between patient activation and readmission to the hospital within 30 days is unknown.[8]
Findings
Using data from Project RED‐LIT (Re‐Engineered Discharge for patients with low health literacy), a randomized controlled trial conducted at an urban safety‐net hospital, investigators examined the relationship between all unplanned utilization events of hospital services within 30 days of discharge and patient activation, as measured by an abbreviated 8‐item version of the validated Patient Activation Measure (PAM). The PAM uses agreement with statements about a patient's sense of responsibility for his or her own health, confidence in seeking care and following through with medical treatments, and confidence in managing new problems to measure activation. The 695 participants were divided into quartiles based on their PAM score, and the investigators compared the rates of unplanned utilization events across groups. After adjusting for potential confounders such as gender, age, Charlson Comorbidity Index, insurance, marital status, and education, a significant association between PAM score and 30‐day hospital reutilization remained. Compared with those who scored in the highest quartile of activation, those in the lowest quartile had 1.75 times the rate of 30‐day reutilization (P < 0.001). Those in the second highest and third highest quartiles had 1.3 times (P = 0.03) and 1.5 times (P < 0.001) the rate of reutilization, respectively, demonstrating a dose‐response relationship between higher activation and lower reutilization.
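As a rough sketch of the quartile split described above: the PAM scores below are invented for illustration, and the study itself used the abbreviated 8‐item PAM with adjusted rate ratios, not this simple grouping.

```python
import statistics

# Hypothetical PAM scores; the real study divided 695 participants
# into quartiles of the abbreviated 8-item PAM.
scores = [38, 45, 47, 52, 55, 58, 61, 63, 66, 70, 74, 80]

# statistics.quantiles returns the 3 cut points dividing the data into 4 groups
q1, q2, q3 = statistics.quantiles(scores, n=4)

def quartile(score: float) -> int:
    """Quartile 1 (lowest activation) through 4 (highest activation)."""
    if score <= q1:
        return 1
    if score <= q2:
        return 2
    if score <= q3:
        return 3
    return 4
```

With the groups assigned this way, one would then compare 30‐day reutilization rates across the four groups, as the investigators did.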
Cautions
It is as yet unclear how best to apply these results and whether activation is a modifiable risk factor. Can a patient become more activated by providing more education and coaching during their hospital stay? Can providing close follow‐up and home services make a person more confident to manage their own illness? Although early identification of patients with low activation using the PAM is being done at many hospitals, no study has shown that targeting these patients reduces readmissions.
Implications
A low level of patient activation appears to be a risk factor for unplanned hospital utilization within 30 days of discharge. Given the increasing financial penalties, many hospitals across the country are using the PAM to determine how much support and which services they provide after discharge. Identifying these patients early in their hospitalization could allow providers to spend more time and attention on preparing them to manage their own illness after discharge. As noted above, however, the effects of such interventions on readmissions are as yet unclear.
IS THERE A RELATIONSHIP BETWEEN PATIENT SATISFACTION AND UNDERSTANDING OF THE PLAN OF CARE?
Kebede S, Shihab HM, Berger ZD, et al. Patients' understanding of their hospitalizations and association with satisfaction. JAMA Intern Med. 2014;174(10):1698–1700.
Background
Effective patient‐physician communication is associated with improved patient satisfaction, care quality, and clinical outcomes.[9] Whether a shared understanding of the plan of care between patients and clinicians affects satisfaction is unknown.
Findings
One hundred seventy‐seven patients who had 2 or more medical conditions, 2 or more medical procedures, and 2 or more days in the hospital were interviewed on the day of discharge. Patients were questioned about their overall understanding of their hospitalization and about specific aspects of their care. They were also asked to provide objective data to measure their understanding of their hospital course by (1) listing their medical diagnoses, (2) identifying indications for medication on discharge paperwork, and (3) listing tests or procedures they underwent from a standard list. Patients were then asked to rate their satisfaction with their hospitalization. Patients' self‐reported understanding was an average of 4.0 (very good) on a 5‐point scale. Their measured understanding scores for medical diagnoses, indications for medications, and tests and procedures were 48.9%, 56.2%, and 59.4%, respectively. Factors associated with poor understanding of their hospital course were increasing age, less education, lower household income, black race, and longer length of stay. Patients reported a mean satisfaction of 4.0 (very satisfied). Higher self‐reported understanding was associated with higher patient satisfaction, irrespective of actual understanding.
Cautions
Despite their suboptimal measured understanding of their hospital course, the average patient rated their understanding as very good. This suggests that patients are either poor judges of effective communication or have low expectations for understanding. It also calls into question the relationship between quality of communication and patient satisfaction, because despite their satisfaction, patients' actual understanding was low. There was, however, a clear and positive relationship between patients' perceived understanding and their satisfaction, suggesting that shared understanding remains integral to patient satisfaction.
Implications
Patient satisfaction appears to be tied to patients' perceived understanding of their care, but when tested actual understanding was suboptimal. Further efforts in patient satisfaction should not only focus on the quality of our communication, but on the resulting understanding of our patients.
WHAT ARE UNIVERSAL STRATEGIES TO IMPROVE SATISFACTION AND PATIENT OUTCOMES?
Detsky AS, Krumholz HM. Reducing the trauma of hospitalization. JAMA. 2014;311(21):2169–2170.
Background
Although high readmission rates are a national problem, a minority of patients treated for common conditions like pneumonia, heart failure, and chronic obstructive pulmonary disease are readmitted for the same problem.[10] This suggests that readmissions may stem not from poor disease management, but from patient vulnerability to illness in the period following hospitalization.
Findings
In this Viewpoint article, the authors suggest that the depersonalizing and stressful hospital atmosphere contributes to a transient vulnerability in the period following hospitalization that makes it challenging for patients to care for themselves and their illness. They offer specific strategies for changing the nature of hospital care to promote healing and decrease patient stress. The authors suggest promoting personalization by accommodating family members and allowing personal clothing and personal décor in patients' rooms. Physicians and consultants should make appointments so that patients and families know when to expect important visits. The authors also focus on the provision of rest and nourishment by reducing nighttime disruptions and eliminating unnecessary restrictive diets. They argue that the hospital is a place of stressful disruptions and surprises, which could be ameliorated by helping patients understand the members of their team and their roles, and by providing a clear schedule for the day. Healthcare providers should not enter a room unannounced, and patients should be given private rooms whenever possible. Last, the authors focus on eliminating unnecessary tests and procedures, such as blood draws, telemetry, and urine cultures, and on encouraging activity by providing opportunities for patients to gather together outside their rooms.
Cautions
If these changes seem simple, they may not be. Many involve a significant shift in our thinking about how we provide care: from a focus on disease and provider convenience to a true consideration of the health and peace of mind of our patients. Starting with small steps, such as reducing phlebotomy and nighttime vital sign checks for the most stable patients and ensuring accommodations for families, may make this long list seem less daunting.
Implications
By promoting factors that affect a patient's well‐being (rest, nutrition, peace of mind), we may be discharging patients who are better equipped to manage their illness after hospitalization.
DO HOSPITALISTS OVERTEST, AND IF SO, WHY?
Kachalia A, Berg A, Fagerlin A, et al. Overuse of testing in preoperative evaluation and syncope: a survey of hospitalists. Ann Intern Med. 2015;162(2):100–108.
Background
National efforts, such as the Choosing Wisely campaign, seek to decrease overuse of low‐value services.[11] The extent of the problem of overtesting among hospitalists and the underlying drivers for unnecessary testing in this group have not been clearly defined.
Findings
Practicing adult medicine hospitalists across the country were given a questionnaire that included clinical vignettes for common inpatient scenarios: a preoperative evaluation and a syncope workup. Respondents were randomly provided 1 of 4 versions of each vignette, which contained the same clinical information but varied by a family member's request for further testing and by disclosure of the occupation of the family member. For example, in the preoperative evaluation, the vignettes either: (1) provided no details about the patient's son; (2) identified the son as a physician; (3) mentioned the son's request for testing, but did not identify the son as a physician; or (4) identified the son as a physician who requested testing. The syncope vignette versions were structured similarly, except the family member was the patient's wife and she was an attorney. The authors collected 1020 responses from an initial pool of 1500, a 68% response rate. Hospitalists commonly reported overuse of testing, with 52% to 65% of respondents requesting unnecessary testing in the preoperative evaluation scenario, and 82% to 85% in the syncope scenario. The majority of physicians reported that they knew the testing was not clinically indicated based on evidence or guidelines, but ordered it out of a desire to reassure the patients or themselves.
Cautions
Responses to clinical vignettes in a survey may not reflect actual practice. In addition, all hospitalists surveyed in this study were members of the Society of Hospital Medicine, and so may not be representative of all practicing hospitalists.
Implications
Overuse of testing is very common among hospitalists. Although roughly one‐third of respondents incorrectly thought that testing in the given scenarios was supported by the evidence or guidelines, the majority knew that testing was not clinically indicated and reported ordering tests to help reassure their patients or themselves. This suggests that evidence‐based medicine approaches to overuse, such as the Choosing Wisely campaign and the emergence of appropriateness criteria, are likely necessary but insufficient to change physician practice patterns. Efforts to decrease overuse will need to engage clinicians and patients in ways that help overcome the attitude that more testing is required to provide reassurance.
DO UNREALISTIC PATIENT EXPECTATIONS ABOUT INTERVENTIONS INFLUENCE DECISION MAKING AND CONTRIBUTE TO OVERUSE?
Hoffmann TC, Del Mar C. Patient expectations of the benefits and harms of treatments, screening, and tests: a systematic review. JAMA Intern Med. 2015;175(2):274–286.
Background
Patient expectations have been implicated as a contributor to overuse of medical interventions. Studies that have measured patients' understanding of the potential benefits and harms of medical treatments and tests have been scattered across the literature.
Findings
This systematic review aggregated all studies that have quantitatively assessed patients' expectations of the benefits and/or harms of any treatment or test. Of more than 15,000 records screened, only 36 articles met the inclusion criteria of describing a study in which participants were asked to provide a quantitative estimate of the expected benefits and/or harms of a treatment, test, or screen. Fourteen of the studies (40%) focused on screening, 15 (43%) on treatment, 3 (9%) on a test, and 3 (9%) on both treatment and screening. Topics included cancer, medications, surgery, cardiovascular disease, and fetal‐maternal medicine. The majority of patients overestimated intervention benefit and underestimated harm, regardless of whether the intervention was a test or a treatment. For example, more than half of participants overestimated benefit for 22 of the 34 outcomes (65%) for which overestimation data were provided, and a majority of participants underestimated harm for 10 of the 15 outcomes (67%) with underestimation data available.
Cautions
This systematic review included a limited number of studies, with varying levels of quality and a lot of heterogeneity, making it difficult to reach clear aggregate conclusions.
Implications
Patients are often overly optimistic about medical interventions and they downplay potential risks, making it more difficult to effectively discourage overuse. Clinicians should clearly understand and communicate realistic expectations for the potential benefits and risks of screening, testing, and medical treatments with patients and the public at large.
HOW BIG OF A PROBLEM IS ANTIBIOTIC OVERUSE IN HOSPITALS AND CAN WE DO BETTER?
Fridkin S, Baggs J, Fagan R, et al. Vital signs: improving antibiotic use among hospitalized patients. MMWR Morb Mortal Wkly Rep. 2014;63(9):194–200.
Background
Antibiotics are life‐saving therapies, but when used in inappropriate scenarios they can pose many risks.
Findings
This large national database study used the MarketScan Hospital Drug Database and the Centers for Disease Control and Prevention's (CDC) Emerging Infections Program data to explore antibiotic prescribing in hospital patients. More than half of all hospitalized patients (55.7%) received antibiotics during their stay. Half of all treatment antibiotics were prescribed for the treatment of lower respiratory infections, urinary tract infections, or presumed gram‐positive infections. There was wide variation seen in antibiotic usage across hospital wards. Objective criteria for potential improvement in antimicrobial use were developed and applied at a subset of 36 hospitals. Antibiotic prescribing could be improved in 37.2% of the most common prescription scenarios reviewed, including patients receiving vancomycin or those being treated for a urinary tract infection. The impact of reducing inpatient antibiotic exposure on the incidence of Clostridium difficile colitis was modeled using data from 2 hospitals, revealing that decreasing hospitalized patients' exposure to broad‐spectrum antibiotics by 30% would lead to a 26% reduction in C difficile infections (interquartile range = 15%–38%).
Cautions
Some of the estimates in this study are based on a convenience sample of claims and hospital‐based data and thus may not be accurately representative, particularly when extrapolated to all US hospitals.
Implications
Antibiotic overuse is a rampant problem in hospitals, with many severe downstream effects such as C difficile infections and antimicrobial resistance. All hospital units should have an antibiotic stewardship program and should monitor antibiotic usage.
Lee TC, Frenette C, Jayaraman D, Green L, Pilote L. Antibiotic self‐stewardship: trainee‐led structured antibiotic time‐outs to improve antimicrobial use. Ann Intern Med. 2014;161(10 suppl):S53–S58.
Background
The CDC and other groups have called for stewardship programs to address antibiotic overuse.[12] Few interventions have been shown to successfully engage medical trainees in efforts to improve their own antibiotic prescribing practices.
Findings
An antibiotic self‐stewardship program was developed and led by internal medicine residents at Montreal General Hospital. The intervention included a monthly resident education lecture on antimicrobial stewardship and twice‐weekly time‐out audits using a structured electronic checklist. Adherence with auditing was 80%. Total costs for antibiotics decreased from $149,743 CAD to $80,319 CAD, mostly due to an observed reduction in carbapenems. Moxifloxacin use decreased by 1.9 defined daily doses per 1000 patient‐days per month (P = 0.048). Rates of Clostridium difficile colitis declined from 24.2 to 19.6 per 10,000 patient‐days, although this trend did not meet statistical significance (incidence rate ratio, 0.8 [confidence interval, 0.5‐1.3]).
Cautions
Although the use of some broader spectrum antibiotics decreased, there was no measurable change in overall antibiotic use, suggesting that physicians may have narrowed antibiotics but did not often completely discontinue them. The time‐series analyses in this study cannot provide causal conclusions between the intervention and outcomes. In fact, carbapenem usage appears to have significantly decreased prior to the implementation of the program, for unclear reasons. The feasibility of this educational intervention outside of a residency program is unclear.
Implications
A combination of education, oversight, and frontline clinician engagement in structured time‐outs may be effective, at least in narrowing antibiotic usage. The structured audit checklist developed by these authors is available for free in the supplementary materials of the Annals of Internal Medicine article.
Disclosures: Dr. Moriates has received grant funding from the ABIM Foundation, and royalties from McGraw‐Hill for the textbook Understanding Value‐Based Healthcare. The authors report no conflicts of interest.
- The role of the hospitalist in quality improvement: systems for improving the care of patients with acute coronary syndrome. J Hosp Med. 2010;5(suppl 4):S1–S7.
- Impact of hospitalist communication‐skills training on patient‐satisfaction scores. J Hosp Med. 2013;8(6):315–320.
- Development of a hospital‐based program focused on improving healthcare value. J Hosp Med. 2014;9(10):671–677.
- Value‐driven health care: implications for hospitals and hospitalists. J Hosp Med. 2009;4(8):507–511.
- Effect of the 2011 vs 2003 duty hour regulation‐compliant models on sleep duration, trainee education, and continuity of patient care among internal medicine house staff: a randomized trial. JAMA Intern Med. 2013;173(8):649–655.
- Does housestaff discontinuity of care increase the risk for preventable adverse events? Ann Intern Med. 1994;121(11):866–872.
- Interventions to reduce 30‐day rehospitalization: a systematic review. Ann Intern Med. 2011;155:520–528.
- Participatory decision making, patient activation, medication adherence, and intermediate clinical outcomes in type 2 diabetes: a STARNet study. Ann Fam Med. 2010;8(5):410–417.
- Effective physician‐patient communication and health outcomes: a review. CMAJ. 2007;152(9):1423–1433.
- Diagnoses and timing of 30‐day readmissions after hospitalization for heart failure, acute myocardial infarction, or pneumonia. JAMA. 2013;309(4):355–363.
- Choosing wisely in adult hospital medicine: five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):486–492.
- Vital signs: improving antibiotic use among hospitalized patients. MMWR Morb Mortal Wkly Rep. 2014;63(9):194–200.
Individualizing treatment of menopausal symptoms
Menopause experts Andrew M. Kaunitz, MD, and JoAnn E. Manson, MD, DrPH, provide a comprehensive review of various treatments for menopausal symptoms in an article recently published ahead of print in Obstetrics and Gynecology.1 They discuss hormonal and nonhormonal options to treat vasomotor symptoms, genitourinary syndrome of menopause (GSM), and considerations for the use of hormone therapy in special populations: women with early menopause, women with a history of breast cancer and those who carry the BRCA gene mutation, and women with a history of venous thrombosis.1
The authors write that, “given the lower rates of adverse events on HT among women close to menopause onset and at lower baseline risk of cardiovascular disease, risk stratification and personalized risk assessment appear to represent a sound strategy for optimizing the benefit–risk profile and safety of HT.”1 They suggest that instead of stopping systemic HT at age 65 years, the length of treatment be individualized based on a woman’s risk profile and preferences. The authors encourage gynecologists and other clinicians to use benefit–risk profile tools for both hormonal and nonhormonal options to help women make sound decisions on treating menopausal symptoms.1
Read the full Clinical Expert Series here.
Reference
- Kaunitz AM, Manson JE. Management of menopausal symptoms [published online ahead of print September 3, 2015]. Obstet Gynecol. doi: 10.1097/AOG.0000000000001058. Accessed September 18, 2015.
Updates in Perioperative Medicine
Given the rapid expansion of the field of perioperative medicine, clinicians need to remain apprised of the current evidence to ensure optimization of patient care. In this update, we review 10 key articles from the perioperative literature, with the goal of summarizing the most clinically important evidence over the past year. This summary of recent literature in perioperative medicine is derived from the Update in Perioperative Medicine sessions presented at the 10th Annual Perioperative Medicine Summit and the Society of General Internal Medicine 38th Annual Meeting. A systematic search strategy was used to identify pertinent articles, and the following were selected by the authors based on their relevance to the clinical practice of perioperative medicine.
PERIOPERATIVE CARDIOVASCULAR CARE
Fleisher LA, Fleischmann KE, Auerbach AD, et al. 2014 ACC/AHA guideline on perioperative cardiovascular evaluation and management of patients undergoing noncardiac surgery: a report of the American College of Cardiology/American Heart Association Task Force on practice guidelines. Circulation. 2014;130:e278–e333.
Background
The American College of Cardiology/American Heart Association (ACC/AHA) perioperative guideline provides recommendations for the evaluation and management of cardiovascular disease in patients undergoing noncardiac surgery.
Findings
The new guideline combines the evaluation of surgery‐ and patient‐specific risk in the algorithm for preoperative cardiovascular evaluation into a single step and recommends the use of 1 of 3 tools: the Revised Cardiac Risk Index (RCRI),[1] National Surgical Quality Improvement Program (NSQIP) Surgical Risk Calculator,[2] or the NSQIP‐derived myocardial infarction and cardiac arrest calculator.[3] Estimation of risk is also simplified by stratification into only 2 groups: low risk (risk of major adverse cardiac event <1%) and elevated risk (≥1% risk). Coronary evaluation can be considered for patients with elevated cardiac risk and poor functional capacity, but is advised only if the results would alter perioperative management. For example, a patient with very high risk who has evidence of ischemia on stress testing may choose to forego surgery. Preoperative coronary revascularization is only indicated for patients meeting criteria in the nonsurgical setting.
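As a concrete illustration, the RCRI component of this risk stratification is a simple count of factors. The sketch below uses the six factors from the original Lee et al. derivation as best recalled here; the function and key names are illustrative, not from the guideline, and any clinical use should be verified against the primary source.

```python
# Hedged sketch of the Revised Cardiac Risk Index (RCRI) as a factor count.
# Factor list per the original Lee et al. derivation; names are illustrative.
RCRI_FACTORS = [
    "high_risk_surgery",        # intraperitoneal, intrathoracic, or suprainguinal vascular
    "ischemic_heart_disease",
    "history_of_heart_failure",
    "cerebrovascular_disease",
    "insulin_treated_diabetes",
    "creatinine_over_2_mg_dl",  # preoperative creatinine >2 mg/dL
]

def rcri_score(patient):
    """Count how many RCRI factors are present; `patient` is a dict of booleans."""
    return sum(bool(patient.get(factor, False)) for factor in RCRI_FACTORS)
```

For example, a patient with ischemic heart disease and an elevated creatinine scores 2. Mapping a factor count to the guideline's low versus elevated (<1% vs ≥1%) dichotomy requires the published event rates and is deliberately not hard-coded here.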
For patients with previous percutaneous coronary intervention, the ACC/AHA retains its recommendation to delay surgery, when possible, for at least 30 days after bare‐metal stenting and at least 1 year after drug‐eluting stent (DES) placement. However, in patients with a DES placed 6 to 12 months previously, surgery can be performed if the risks of surgical delay outweigh the risks of DES thrombosis. After any type of coronary stenting, dual antiplatelet therapy should be continued uninterrupted through the first 4 to 6 weeks and beyond whenever feasible. If that is not possible, aspirin therapy should be maintained through surgery unless the bleeding risk is too high.
The guideline recommends perioperative continuation of β‐blockers in patients taking them chronically. Preoperative initiation of β‐blocker therapy may be considered for patients with myocardial ischemia on stress testing or ≥3 RCRI factors and should be started far enough in advance to allow determination of the patient's tolerance prior to surgery.
Cautions
Many recommendations are based on data from nonrandomized trials or expert opinion, and the data in areas such as perioperative β‐blockade continue to evolve.
Implications
The ACC/AHA guideline continues to be a critically valuable resource for hospitalists providing perioperative care to noncardiac surgery patients.
Wijeysundera DN, Duncan D, Nkonde‐Price C, et al. Perioperative beta blockade in noncardiac surgery: a systematic review for the 2014 ACC/AHA guideline on perioperative cardiovascular evaluation and management of patients undergoing noncardiac surgery: a report of the American College of Cardiology/American Heart Association Task Force on practice guidelines.
J Am Coll Cardiol. 2014;64(22):2406–2425.
Background
Various clinical trials have reported conflicting results regarding the efficacy and safety of perioperative β‐blockers, leading guideline committees to change their recommendations. Because of questions raised regarding the scientific integrity of the DECREASE (Dutch Echocardiographic Cardiac Risk Evaluation Applying Stress Echocardiography)‐I[4] and DECREASE‐IV[5] trials as well as the dosing of β‐blockers in the POISE (PeriOperative Ischemic Evaluation) study,[6] this systematic review was performed in conjunction with the ACC/AHA guideline update[7] to evaluate the data with and without these trials.
Findings
Sixteen randomized controlled trials (RCTs) (n=12,043) and 1 cohort study (n=348) were included in the analysis. Perioperative β‐blockers were associated with a reduction in nonfatal myocardial infarction (MI) (relative risk [RR]: 0.69; 95% confidence interval [CI]: 0.58‐0.82; P<0.001) but an increase in bradycardia (RR: 2.61; 95% CI: 2.18‐3.12), hypotension (RR: 1.47; 95% CI: 1.34‐1.6), and nonfatal strokes (RR: 1.76; 95% CI: 1.07‐2.91; P=0.02). The POISE trial was the only one demonstrating a statistically significant increase in stroke.
The major discrepancy between the DECREASE trials and the other RCTs was related to mortality: a reduction in both cardiovascular and all‐cause death in DECREASE, but an increased risk of all‐cause death in the other trials.
Cautions
Because of its size, the POISE trial heavily influences the results, particularly for mortality and stroke. Including the DECREASE trials reduces the otherwise increased risk for death to a null effect. Exclusion of the POISE and DECREASE trials leaves few data from which to draw conclusions about the safety and efficacy of perioperative β‐blockade. Several cohort studies have found metoprolol to be associated with worse outcomes than atenolol or bisoprolol (which were preferred by the European Society of Cardiology guidelines).[8]
Implications
Perioperative β‐blockade started within 1 day of noncardiac surgery was associated with fewer nonfatal MIs, but at the cost of an increase in hypotension and bradycardia and a possible increase in stroke and death. Long‐term β‐blockade should be continued perioperatively, whereas the decision to initiate a β‐blocker should be individualized. If a β‐blocker is started perioperatively, it should be done at least 2 days before surgery.
Botto F, Alonso‐Coello P, Chan MT, et al.; on behalf of The Vascular events In noncardiac Surgery patIents cOhort evaluatioN (VISION) Investigators. Myocardial injury after noncardiac surgery: a large, international, prospective cohort study establishing diagnostic criteria, characteristics, predictors, and 30‐day outcomes. Anesthesiology. 2014;120(3):564–578.
Background
Many patients sustain myocardial injury in the perioperative period as evidenced by troponin elevations, but most do not meet diagnostic criteria for MI. Myocardial injury after noncardiac surgery (MINS) is defined as prognostically relevant myocardial injury due to ischemia that occurs within 30 days after noncardiac surgery. This international, prospective cohort study of 15,065 patients ≥45 years old who underwent inpatient noncardiac surgery determined diagnostic criteria, characteristics, predictors, and 30‐day outcomes of MINS.
Findings
The diagnostic criterion for MINS was a peak troponin T level ≥0.03 ng/mL judged to be due to an ischemic etiology. Twelve independent predictors of MINS were identified, including age ≥75 years, known cardiovascular disease or risk factors, and surgical factors. MINS was an independent predictor of 30‐day mortality (adjusted hazard ratio [HR]: 3.87; 95% CI: 2.96‐5.08). Age ≥75 years, ST elevation or new left bundle branch block, and anterior ischemic findings were independent predictors of 30‐day mortality among patients with MINS.
Cautions
Although screening high‐risk surgical patients without signs or symptoms of ischemia with postoperative troponins will increase the frequency of diagnosing MINS, evidence for an effective treatment has not yet been established. The ACC/AHA guidelines state that routine screening is of uncertain benefit for this reason.
Implications
Because MINS is common and carries a poor 30‐day prognosis, clinical trials are needed to determine when to obtain postoperative troponins and how to prevent and treat this complication.[9] Some observational data from POISE suggest that aspirin and statins can reduce the risk of 30‐day mortality in patients with postoperative MIs.
Devereaux PJ, Mrkobrada M, Sessler DI, et al. for the POISE‐2 Investigators. Aspirin in patients undergoing noncardiac surgery. N Engl J Med. 2014;370(16):1494–1503.
Devereaux PJ, Sessler DI, Leslie K, et al. for the POISE‐2 Investigators. Clonidine in patients undergoing noncardiac surgery. N Engl J Med. 2014;370(16):1504–1513.
Background
Medical risk reduction with aspirin and other agents in perioperative patients remains controversial. The POISE‐2 trial is a blinded RCT examining the effects of aspirin and clonidine on outcomes in >10,000 noncardiac surgery patients at risk of cardiovascular complications. In the aspirin arm, patients were stratified by whether they were newly initiating aspirin or continuing chronic therapy, and randomized to aspirin or placebo. Patients in the clonidine portion of the trial received 0.2 mg of clonidine or placebo daily for the same time periods.
Findings
The primary outcome was a composite of death or nonfatal MI within 30 days of surgery. Outcomes were similar in patients initiated or continued on aspirin. No difference was seen between aspirin and placebo in the primary outcome (7.0% vs 7.1%; HR: 0.99; 95% CI: 0.86‐1.15; P=0.92). There were no differences in rates of MI, venous thromboembolism, or stroke. Major bleeding rates were higher in aspirin‐ versus placebo‐treated patients (4.6% vs 3.8%; HR: 1.23; 95% CI: 1.01‐1.49; P=0.04).
Clonidine did not alter the composite outcome of death or nonfatal MI (7.3% vs 6.8%; HR: 1.08; 95% CI: 0.93‐1.26; P=0.29). Clinically significant hypotension, bradycardia, and nonfatal cardiac arrest were more common in clonidine‐treated patients, although no difference was detected in stroke rates.
Cautions
Although patients in the trial had cardiovascular risk factors, <24% of patients had known coronary artery disease, and <5% had coronary stents. Conclusions based on this trial regarding perioperative management of antiplatelet therapy should not be extended to patients with coronary artery stents.
Implications
Aspirin started before surgery and continued perioperatively did not decrease the rate of death or nonfatal MI but increased the risk of major bleeding. Perioperative management of aspirin needs to be undertaken in the context of cardiac and bleeding risks. Clonidine also did not improve outcomes and increased the risk of bradycardia and hypotension. Current guidelines recommend against using alpha‐2 agonists for prevention of perioperative cardiac events.[7] However, patients already on alpha‐2 agonists should not stop them abruptly.
PERIOPERATIVE PULMONARY CARE
Mutter TC, Chateau D, Moffatt M, et al. A matched cohort study of postoperative outcomes in obstructive sleep apnea: could preoperative diagnosis and treatment prevent complications? Anesthesiology. 2014;121(4):707–718.
Background
An increasing body of literature associates obstructive sleep apnea (OSA) with an increased risk of postoperative complications. Despite evidence of risk, potential benefits of preoperative diagnosis and treatment of OSA remain unclear.
Findings
Using databases to identify patients prescribed continuous positive airway pressure (CPAP) therapy, the study compared postoperative outcomes of patients who underwent surgery any time after polysomnography (PSG) and CPAP prescription (diagnosed OSA [DOSA]) and those who had surgery during the 5 years preceding their PSG (undiagnosed OSA [UOSA]). These patients were matched with patients who underwent the same procedure for the same indication and had no insurance claims for PSG or diagnosis of sleep‐disordered breathing.
After multivariate analysis, OSA of any type was associated with increased pulmonary complications (odds ratio [OR]: 2.08; 95% CI: 1.35‐2.19). However, no significant differences in respiratory outcomes were noted between DOSA patients (N=2640) and those with UOSA (N=1571). DOSA patients did have fewer cardiovascular complications than UOSA patients (OR: 0.34; 95% CI: 0.15‐0.77). Only severe OSA (apnea‐hypopnea index >30) was associated with increased pulmonary and cardiovascular complications.
Cautions
Although this study suggests an association between preoperative diagnosis and treatment of OSA and reduced cardiovascular complications, the results are not definitive due to the inability to control for all confounding variables in a retrospective study utilizing an administrative database.
Implications
OSA is an important risk factor for postoperative complications, and this study suggests that preoperative treatment with CPAP is associated with reduced risk of cardiovascular complications, particularly in patients with severe OSA. Future controlled trials should focus on the risk‐reduction potential of preoperative diagnosis and treatment of OSA.
Mazo V, Sabat S, Canet J, et al. Prospective external validation of a predictive score for postoperative pulmonary complications. Anesthesiology. 2014;121:219–231.
Background
In 2010, Canet et al. published a novel risk index, the Assess Respiratory Risk in Surgical Patients in Catalonia (ARISCAT) index, to provide a quantitative estimate of the risk of postoperative pulmonary complications (PPCs).[10]
In the current report, Mazo and colleagues studied the ARISCAT index in a broader sample to characterize its accuracy in predicting PPC risk. The ARISCAT index is derived from clinical risk factors: (1) age, (2) preoperative oxygen saturation, (3) respiratory infection in the prior month, (4) anemia, (5) surgical site, (6) duration of surgery, and (7) emergency surgery, with varying weights based on the strength of the association in a multivariable analysis. The score is calculated by adding these weighted risk factors, with a score ≥45 indicating high risk for PPC.
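The additive scoring just described can be sketched in Python. The weights and risk cutoffs below are those reported in the original 2010 Canet et al. derivation as best recalled here; treat them as illustrative assumptions and verify against the primary publication before any use.

```python
def ariscat_score(age, spo2, resp_infection_last_month, hb_g_dl,
                  incision, duration_h, emergency):
    """Additive ARISCAT score; weights per Canet et al. 2010 (verify before use)."""
    score = 0
    if age > 80:                         # age 51-80 vs >80 carry different weights
        score += 16
    elif age > 50:
        score += 3
    if spo2 <= 90:                       # preoperative SpO2 on room air
        score += 24
    elif spo2 <= 95:
        score += 8
    if resp_infection_last_month:        # respiratory infection in prior month
        score += 17
    if hb_g_dl <= 10:                    # preoperative anemia (Hb <=10 g/dL)
        score += 11
    if incision == "intrathoracic":      # surgical site
        score += 24
    elif incision == "upper_abdominal":
        score += 15
    if duration_h > 3:                   # duration of surgery
        score += 23
    elif duration_h > 2:
        score += 16
    if emergency:
        score += 8
    return score

def ariscat_risk(score):
    """Map a score to the three ARISCAT risk strata."""
    if score >= 45:
        return "high"
    if score >= 26:
        return "intermediate"
    return "low"
```

For example, an 82-year-old with normal saturation undergoing a 2.5-hour elective upper abdominal procedure would accumulate 16 + 15 + 16 = 47 points, placing the patient in the high-risk stratum.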
Findings
The authors examined 5099 patients from 63 European hospitals; their definition of PPC included respiratory failure, pulmonary infection, pleural effusion, atelectasis, pneumothorax, bronchospasm, and aspiration pneumonitis. PPC rates were as follows: low risk (3.39%), intermediate risk (12.98%), and high risk (38.01%). The positive likelihood ratio for PPC among the highest risk group was 7.12. The C statistic for fit was 0.80. Observed PPC rates were higher than predicted for the low (3.39% vs 0.87%) and intermediate (12.98% vs 7.82%) risk groups.
Cautions
The calibration slopes were less than ideal in all subsamples, with the Western European sample performing better than the other geographic areas, suggesting that the coefficients of the ARISCAT index may benefit from recalibration to match specific populations.
Implications
This is the first major pulmonary risk index that has been externally validated. Its use of readily available clinical information, simplicity, and accuracy in estimating PPC risk make it an important addition to the toolkit during a preoperative evaluation.
PERIOPERATIVE ATRIAL FIBRILLATION/ANTICOAGULATION
Gialdini G, Nearing K, Bhave P, et al. Perioperative atrial fibrillation and the long term risk of ischemic stroke. JAMA. 2014;312(6):616–622.
Background
New‐onset atrial fibrillation (AF) is the most common perioperative arrhythmia.[11] However, little is known regarding the long‐term risks of ischemic stroke in patients who develop perioperative AF. This retrospective cohort study examined adults with no preexisting history of AF, hospitalized for surgery, and discharged free of cerebrovascular disease between 2007 and 2011 (n=1,729,360).
Findings
Of the eligible patients, 1.43% (95% CI: 1.41%‐1.45%) developed perioperative AF, and 0.81% (95% CI: 0.79%‐0.82%) had a stroke up to 1 year after discharge. Perioperative AF was associated with subsequent stroke after both cardiac (HR: 1.3; 95% CI: 1.1‐1.6) and noncardiac surgery (HR: 2; 95% CI: 1.7‐2.3). The association with stroke was stronger for perioperative AF after noncardiac versus cardiac surgery (P<0.001 for interaction).
Cautions
This is a retrospective cohort study, using claims data to identify AF and stroke. Data on duration of the perioperative AF episodes or use of antithrombotic therapies were not available.
Implications
The association found between perioperative AF and long‐term risk of ischemic stroke may suggest that perioperative AF, especially after noncardiac surgery, should be treated aggressively in terms of thromboembolic risk; however, further data will be required to validate this association.
Van Diepen S, Youngson E, Ezekowitz J, McAlister F. Which risk score best predicts perioperative outcomes in nonvalvular atrial fibrillation patients undergoing noncardiac surgery? Am Heart J. 2014;168(1):60–67.
Background
Patients with nonvalvular AF (NVAF) are at increased risk for adverse perioperative outcomes after noncardiac surgery.[12] The RCRI is commonly used to predict perioperative cardiovascular events for all patients, including those with NVAF, though AF is not part of this risk assessment. The goal of this retrospective cohort study was to examine the prognostic utility of already existing NVAF risk indices, including the CHADS2 (Congestive heart failure, Hypertension, Age ≥75 years, Diabetes mellitus, prior stroke or transient ischemic attack), CHA2DS2‐VASc (Congestive heart failure; Hypertension; Age ≥75 years; Diabetes mellitus; Stroke, TIA, or thromboembolism [TE]; Vascular disease; Age 65 to 74 years; Sex category [female]), and R2CHADS2 (Renal dysfunction, Congestive heart failure, Hypertension, Age, Diabetes, Stroke/TIA) for perioperative outcomes in patients undergoing noncardiac surgery.
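The point assignments these acronyms encode can be made explicit in a short sketch. The weights below follow the standard published definitions of CHADS2 and CHA2DS2-VASc; the function and argument names are illustrative, not taken from the study.

```python
def chads2(chf, htn, age, dm, stroke_tia):
    """CHADS2: 1 point each for CHF, hypertension, age >=75, and diabetes;
    2 points for prior stroke/TIA (maximum score 6)."""
    return int(chf) + int(htn) + int(age >= 75) + int(dm) + 2 * int(stroke_tia)

def cha2ds2_vasc(chf, htn, age, dm, stroke_tia_te, vascular, female):
    """CHA2DS2-VASc adds vascular disease, age 65-74 (1 point), and female sex;
    age >=75 scores 2 points (maximum score 9)."""
    score = (int(chf) + int(htn) + int(dm) + int(vascular) + int(female)
             + 2 * int(stroke_tia_te))
    if age >= 75:
        score += 2
    elif age >= 65:
        score += 1
    return score
```

For example, an 80-year-old with heart failure and hypertension scores CHADS2 = 3, while a 70-year-old woman with no other risk factors scores CHA2DS2-VASc = 2 (1 for age, 1 for sex).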
Findings
A population dataset of NVAF patients (n=32,160) who underwent noncardiac surgery was examined, with outcome measures including 30‐day mortality, stroke, TIA, or systemic embolism. The incidence of the 30‐day composite outcome was 4.2% and the C indices were 0.65 for the RCRI, 0.67 for CHADS2, 0.67 for CHA2DS2‐VASc, and 0.68 for R2CHADS2. The Net Reclassification Index (NRI), a measure evaluating the improvement in prediction performance gained by adding a marker to a set of baseline predictors, was calculated. All NVAF scores performed better than the RCRI for predicting mortality risk (NRI: 12.3%, 8.4%, and 13.3% respectively, all P<0.01).
Cautions
Patients in the highest risk category by RCRI appear to have an unadjusted higher 30‐day mortality risk (8%) than that predicted by the other 3 scores (5%, 5.6%, and 5%), indicating that these risk scores should not completely supplant the RCRI for risk stratification in this population. In addition, the overall improvement in predictive capacity of the CHADS2, CHA2DS2‐VASc, and R2CHADS2, although superior to the RCRI, is modest.
Implications
These findings indicate that the preoperative risk stratification for patients with NVAF can be improved by utilizing the CHADS2, CHA2DS2‐VASc, or R2CHADS2 scores when undergoing noncardiac surgery. For patients with NVAF identified as high risk for adverse outcomes, this assessment can be integrated into the preoperative discussion on the risks/benefits of surgery.
Steinberg BA, Peterson ED, Kim S, et al. Use and outcomes associated with bridging during anticoagulation interruptions in patients with atrial fibrillation: findings from the Outcomes Registry for Better Informed Treatment of Atrial Fibrillation (ORBIT‐AF). Circulation. 2015;131:488494
Background
Oral anticoagulation (OAC) significantly reduces the risk of stroke in patients with AF. Many AF patients on long‐term anticoagulation undergo procedures requiring temporary interruption of OAC. Although guidelines have been published on when and how to initiate bridging therapy, they are based on observational data. Thus, it remains unclear which patients should receive bridging anticoagulation.
Findings
This is a US registry of outpatients with AF with temporary interruptions of OAC for a procedure. Of 7372 patients treated with OAC, 2803 overall interruption events occurred in 2200 patients (30%). Bridging anticoagulants were used in 24% (n=665). Bleeding events were more common in bridged than nonbridged patients (5.0% vs 1.3%; adjusted OR: 3.84; P<0.0001). The overall composite end point of myocardial infarction, stroke or systemic embolism, major bleeding, hospitalization, or death within 30 days was significantly higher in patients receiving bridging (13% vs 6.3%; adjusted OR: 1.94; P=0.0001). This statistically significant increase in the composite outcome, which includes cardiovascular events, is most likely in part secondary to inclusion of bleeding events. The recently published BRIDGE (Bridging Anticoagulation in Patients who Require Temporary Interruption of Warfarin Therapy for an Elective Invasive Procedure or Surgery) trial did not find a statistically significant difference in cardiovascular events between bridged and nonbridged patients.[13]
Cautions
Although patients who were bridged appear to have had more comorbidities and a higher mean CHADS2 score than patients who were not bridged, it is difficult to determine which population of patients may be high risk enough to warrant bridging, as indicated by current American College of Chest Physicians guidelines, as this was not evaluated in this study
Implications
The use of bridging anticoagulation was significantly associated with higher overall bleeding and adverse event rates. The BRIDGE trial also found that forgoing bridging anticoagulation decreased the risk of major bleeding in patients with AF and was noninferior to bridging for the prevention of arterial TE.[13]
REFERENCES
1. Derivation and prospective evaluation of a simple index for prediction of cardiac risk of major noncardiac surgery. Circulation. 1999;100:1043–1049.
2. Development and evaluation of the universal ACS NSQIP surgical risk calculator: a decision aid and informed consent tool for patients and surgeons. J Am Coll Surg. 2013;217(5):833–842.
3. Development and validation of a risk calculator for prediction of cardiac risk after surgery. Circulation. 2011;124:381–387.
4. The effect of bisoprolol on perioperative mortality and myocardial infarction in high-risk patients undergoing vascular surgery. Dutch Echocardiographic Cardiac Risk Evaluation Applying Stress Echocardiography Study Group. N Engl J Med. 1999;341(24):1789–1794.
5. Dutch Echocardiographic Cardiac Risk Evaluation Applying Stress Echocardiography Study Group. Bisoprolol and fluvastatin for the reduction of perioperative cardiac mortality and myocardial infarction in intermediate-risk patients undergoing noncardiovascular surgery: a randomized controlled trial (DECREASE-IV). Ann Surg. 2009;249(6):921–926.
6. POISE Study Group. Effects of extended-release metoprolol succinate in patients undergoing non-cardiac surgery (POISE trial): a randomised controlled trial. Lancet. 2008;371(9627):1839–1847.
7. American College of Cardiology; American Heart Association. 2014 ACC/AHA guideline on perioperative cardiovascular evaluation and management of patients undergoing noncardiac surgery: a report of the American College of Cardiology/American Heart Association Task Force on practice guidelines. J Am Coll Cardiol. 2014;64(22):e77–e137.
8. 2014 ESC/ESA Guidelines on non-cardiac surgery: cardiovascular assessment and management: The Joint Task Force on non-cardiac surgery: cardiovascular assessment and management of the European Society of Cardiology (ESC) and the European Society of Anaesthesiology (ESA). Eur Heart J. 2014;35(35):2383–2431.
9. The long-term impact of early cardiovascular therapy intensification for postoperative troponin elevation after major vascular surgery. Anesth Analg. 2014;119(5):1053–1063.
10. ARISCAT Group. Prediction of postoperative pulmonary complications in a population-based surgical cohort. Anesthesiology. 2010;113:1338–1350.
11. Noncardiac surgery: postoperative arrhythmias. Crit Care Med. 2000;28(10 suppl):N145–N150.
12. Incidence, predictors, and outcomes associated with postoperative atrial fibrillation after major cardiac surgery. Am Heart J. 2012;164(6):918–924.
13. Perioperative bridging anticoagulation in patients with atrial fibrillation. N Engl J Med. 2015;373(9):823–833.
Given the rapid expansion of the field of perioperative medicine, clinicians need to remain apprised of the current evidence to ensure optimization of patient care. In this update, we review 10 key articles from the perioperative literature, with the goal of summarizing the most clinically important evidence over the past year. This summary of recent literature in perioperative medicine is derived from the Update in Perioperative Medicine sessions presented at the 10th Annual Perioperative Medicine Summit and the Society of General Internal Medicine 38th Annual Meeting. A systematic search strategy was used to identify pertinent articles, and the following were selected by the authors based on their relevance to the clinical practice of perioperative medicine.
PERIOPERATIVE CARDIOVASCULAR CARE
Fleisher LA, Fleischmann KE, Auerbach AD, et al. 2014 ACC/AHA guideline on perioperative cardiovascular evaluation and management of patients undergoing noncardiac surgery: a report of the American College of Cardiology/American Heart Association Task Force on practice guidelines. Circulation. 2014;130:e278–e333.
Background
The American College of Cardiology/American Heart Association (ACC/AHA) perioperative guideline provides recommendations for the evaluation and management of cardiovascular disease in patients undergoing noncardiac surgery.
Findings
The new guideline combines the evaluation of surgery‐ and patient‐specific risk in the algorithm for preoperative cardiovascular evaluation into a single step and recommends the use of 1 of 3 tools: the Revised Cardiac Risk Index (RCRI),[1] National Surgical Quality Improvement Program (NSQIP) Surgical Risk Calculator,[2] or the NSQIP‐derived myocardial infarction and cardiac arrest calculator.[3] Estimation of risk is also simplified by stratification into only 2 groups: low risk (risk of major adverse cardiac event <1%) and elevated risk (≥1% risk). Coronary evaluation can be considered for patients with elevated cardiac risk and poor functional capacity, but is advised only if the results would alter perioperative management. For example, a patient with very high risk who has evidence of ischemia on stress testing may choose to forgo surgery. Preoperative coronary revascularization is only indicated for patients meeting criteria in the nonsurgical setting.
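Of the 3 recommended tools, the RCRI is the simplest: one point for each of six risk factors. A minimal sketch (the function and parameter names are our own, not from the guideline; the six factors are those of the original Lee index):

```python
def rcri(high_risk_surgery, ischemic_heart_disease, heart_failure,
         cerebrovascular_disease, insulin_treated_diabetes, creatinine_gt_2):
    # Revised Cardiac Risk Index (Lee et al., 1999): one point per factor.
    # high_risk_surgery: intraperitoneal, intrathoracic, or suprainguinal vascular.
    # creatinine_gt_2: preoperative serum creatinine > 2.0 mg/dL.
    return sum([high_risk_surgery, ischemic_heart_disease, heart_failure,
                cerebrovascular_disease, insulin_treated_diabetes,
                creatinine_gt_2])

# Example: vascular surgery in a patient with coronary disease and renal
# insufficiency scores 3 points.
score = rcri(True, True, False, False, False, True)
```

Published RCRI risk estimates are keyed to the point total (0, 1, 2, or ≥3), which is why the guideline's "≥3 RCRI factors" threshold recurs below.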
For patients with previous percutaneous coronary intervention, the ACC/AHA has not changed its recommendations to optimally delay surgery for at least 30 days after bare‐metal stenting and at least 1 year after drug‐eluting stent (DES) placement. However, in patients with a DES placed 6 to 12 months previously, surgery can be performed if the risks of surgical delay outweigh the risks of DES thrombosis. After any type of coronary stenting, dual antiplatelet therapy should be continued uninterrupted through the first 4 to 6 weeks and even later whenever feasible. If not possible, aspirin therapy should be maintained through surgery unless bleeding risk is too high.
The guideline recommends perioperative continuation of β-blockers in patients taking them chronically. Preoperative initiation of β-blocker therapy may be considered for patients with myocardial ischemia on stress testing or ≥3 RCRI factors, and should be started far enough in advance to allow determination of the patient's tolerance prior to surgery.
Cautions
Many recommendations are based on data from nonrandomized trials or expert opinion, and the data in areas such as perioperative β-blockade continue to evolve.
Implications
The ACC/AHA guideline continues to be a critically valuable resource for hospitalists providing perioperative care to noncardiac surgery patients.
Wijeysundera DN, Duncan D, Nkonde-Price C, et al. Perioperative beta blockade in noncardiac surgery: a systematic review for the 2014 ACC/AHA guideline on perioperative cardiovascular evaluation and management of patients undergoing noncardiac surgery: a report of the American College of Cardiology/American Heart Association Task Force on practice guidelines. J Am Coll Cardiol. 2014;64(22):2406–2425.
Background
Various clinical trials have reported conflicting results regarding the efficacy and safety of perioperative β-blockers, resulting in guideline committees changing their recommendations. Because of questions raised regarding the scientific integrity of the DECREASE (Dutch Echocardiographic Cardiac Risk Evaluation Applying Stress Echocardiography)-I[4] and DECREASE-IV[5] trials, as well as the dosing of β-blockers in the POISE (PeriOperative Ischemic Evaluation) study,[6] this systematic review was performed in conjunction with the ACC/AHA guideline update[7] to evaluate the data with and without these trials.
Findings
Sixteen randomized controlled trials (RCTs) (n=12,043) and 1 cohort study (n=348) were included in the analysis. Perioperative β-blockers were associated with a reduction in nonfatal myocardial infarction (MI) (relative risk [RR]: 0.69; 95% confidence interval [CI]: 0.58-0.82; P<0.001) but an increase in bradycardia (RR: 2.61; 95% CI: 2.18-3.12), hypotension (RR: 1.47; 95% CI: 1.34-1.6), and nonfatal stroke (RR: 1.76; 95% CI: 1.07-2.91; P=0.02). The POISE trial was the only one demonstrating a statistically significant increase in stroke.
The major discrepancy between the DECREASE trials and the other RCTs was related to mortality: a reduction in both cardiovascular and all-cause death in DECREASE, but an increased risk of all-cause death in the other trials.
Cautions
Because of its size, the POISE trial heavily influences the results, particularly for mortality and stroke. Including the DECREASE trials reduces the otherwise increased risk for death to a null effect. Exclusion of the POISE and DECREASE trials leaves few data to make conclusions about the safety and efficacy of perioperative β-blockade. Several cohort studies have found metoprolol to be associated with worse outcomes than atenolol or bisoprolol (which were preferred by the European Society of Cardiology guidelines).[8]
Implications
Perioperative β-blockade started within 1 day of noncardiac surgery was associated with fewer nonfatal MIs but at the cost of an increase in hypotension, bradycardia, and a possible increase in stroke and death. Long-term β-blockade should be continued perioperatively, whereas the decision to initiate a β-blocker should be individualized. If starting a β-blocker perioperatively, it should be done ≥2 days before surgery.
Botto F, Alonso-Coello P, Chan MT, et al.; on behalf of The Vascular events In noncardiac Surgery patIents cOhort evaluatioN (VISION) Investigators. Myocardial injury after noncardiac surgery: a large, international, prospective cohort study establishing diagnostic criteria, characteristics, predictors, and 30-day outcomes. Anesthesiology. 2014;120(3):564–578.
Background
Many patients sustain myocardial injury in the perioperative period as evidenced by troponin elevations, but most do not meet diagnostic criteria for MI. Myocardial injury after noncardiac surgery (MINS) is defined as prognostically relevant myocardial injury due to ischemia that occurs within 30 days after noncardiac surgery. This international, prospective cohort study of 15,065 patients ≥45 years old who underwent inpatient noncardiac surgery determined diagnostic criteria, characteristics, predictors, and 30-day outcomes of MINS.
Findings
The diagnostic criterion for MINS was a peak troponin T level ≥0.03 ng/mL judged to be due to an ischemic etiology. Twelve independent predictors of MINS were identified, including age ≥75 years, known cardiovascular disease or risk factors, and surgical factors. MINS was an independent predictor of 30-day mortality (adjusted hazard ratio [HR]: 3.87; 95% CI: 2.96-5.08). Age >75 years, ST elevation or new left bundle branch block, and anterior ischemic findings were independent predictors of 30-day mortality among patients with MINS.
Cautions
Although screening high‐risk surgical patients without signs or symptoms of ischemia with postoperative troponins will increase the frequency of diagnosing MINS, evidence for an effective treatment has not yet been established. The ACC/AHA guidelines state that routine screening is of uncertain benefit for this reason.
Implications
Because MINS is common and carries a poor 30‐day prognosis, clinical trials are needed to determine when to obtain postoperative troponins and how to prevent and treat this complication.[9] Some observational data from POISE suggest that aspirin and statins can reduce the risk of 30‐day mortality in patients with postoperative MIs.
Devereaux PJ, Mrkobrada M, Sessler DI, et al.; for the POISE-2 Investigators. Aspirin in patients undergoing noncardiac surgery. N Engl J Med. 2014;370(16):1494–1503.
Devereaux PJ, Sessler DI, Leslie K, et al.; for the POISE-2 Investigators. Clonidine in patients undergoing noncardiac surgery. N Engl J Med. 2014;370(16):1504–1513.
Background
Medical risk reduction with aspirin and other agents in perioperative patients remains controversial. The POISE-2 trial is a blinded RCT examining the effects of aspirin and clonidine on outcomes in >10,000 noncardiac surgery patients at risk of cardiovascular complications. The aspirin arm stratified patients into an initiation stratum and a continuation stratum, each randomized to aspirin or placebo. Patients in the clonidine portion of the trial received 0.2 mg of clonidine or placebo daily for the same time periods.
Findings
The primary outcome was a composite of death or nonfatal MI within 30 days of surgery. Outcomes were similar in patients initiated or continued on aspirin. No difference was seen between aspirin and placebo in the primary outcome (7.0% vs 7.1%; HR: 0.99; 95% CI: 0.86-1.15; P=0.92). There were no differences in rates of MI, venous thromboembolism, or stroke. Major bleeding rates were higher in aspirin- versus placebo-treated patients (4.6% vs 3.8%; HR: 1.23; 95% CI: 1.01-1.49; P=0.04).
Clonidine did not alter the composite outcome of death or nonfatal MI (7.3% vs 6.8%; HR: 1.08; 95% CI: 0.93‐1.26; P=0.29). Clinically significant hypotension, bradycardia, and nonfatal cardiac arrest were more common in clonidine‐treated patients, although no difference was detected in stroke rates.
Cautions
Although patients in the trial had cardiovascular risk factors, <24% of patients had known coronary artery disease, and <5% had coronary stents. Conclusions based on this trial regarding perioperative management of antiplatelet therapy should not include patients with coronary artery stents.
Implications
Aspirin started before surgery and continued perioperatively did not decrease the rate of death or nonfatal MI but increased the risk of major bleeding. Perioperative management of aspirin needs to be undertaken in the context of cardiac and bleeding risks. Clonidine also did not improve outcomes and increased the risk of bradycardia and hypotension. Current guidelines recommend against using alpha-2 agonists for prevention of perioperative cardiac events[7]; however, patients already on alpha-2 agonists should not stop them abruptly.
PERIOPERATIVE PULMONARY CARE
Mutter TC, Chateau D, Moffatt M, et al. A matched cohort study of postoperative outcomes in obstructive sleep apnea: could preoperative diagnosis and treatment prevent complications? Anesthesiology. 2014;121(4):707–718.
Background
An increasing body of literature associates obstructive sleep apnea (OSA) with an increased risk of postoperative complications. Despite evidence of risk, potential benefits of preoperative diagnosis and treatment of OSA remain unclear.
Findings
Using databases to identify patients prescribed continuous positive airway pressure (CPAP) therapy, the study compared postoperative outcomes of patients who underwent surgery any time after polysomnography (PSG) and CPAP prescription (diagnosed OSA [DOSA]) and those who had surgery during the 5 years preceding their PSG (undiagnosed OSA [UOSA]). These patients were matched with patients who underwent the same procedure for the same indication and had no insurance claims for PSG or diagnosis of sleep‐disordered breathing.
After multivariate analysis, OSA of any type was associated with increased pulmonary complications (odds ratio [OR]: 2.08; 95% CI: 1.35-3.19). However, no significant differences in respiratory outcomes were noted between DOSA patients (N=2640) and those with UOSA (N=1571). DOSA patients did have fewer cardiovascular complications than UOSA patients (OR: 0.34; 95% CI: 0.15-0.77). Only severe OSA (apnea-hypopnea index >30) was associated with increased pulmonary and cardiovascular complications.
Cautions
Although this study suggests an association between preoperative diagnosis and treatment of OSA and reduced cardiovascular complications, the results are not definitive due to the inability to control for all confounding variables in a retrospective study utilizing an administrative database.
Implications
OSA is an important risk factor for postoperative complications, and this study suggests that preoperative treatment with CPAP is associated with reduced risk of cardiovascular complications, particularly in patients with severe OSA. Future controlled trials should focus on the risk‐reduction potential of preoperative diagnosis and treatment of OSA.
Mazo V, Sabaté S, Canet J, et al. Prospective external validation of a predictive score for postoperative pulmonary complications. Anesthesiology. 2014;121:219–231.
Background
In 2010, Canet et al. published a novel risk index, the Assess Respiratory Risk in Surgical Patients in Catalonia (ARISCAT) index, to provide a quantitative estimate of the risk of postoperative pulmonary complications (PPCs).[10]
In the current report, Mazo and colleagues studied the ARISCAT index in a broader sample to characterize its accuracy in predicting PPC risk. The ARISCAT index is derived from 7 clinical risk factors: (1) age, (2) preoperative oxygen saturation, (3) respiratory infection in the prior month, (4) anemia, (5) surgical site, (6) duration of surgery, and (7) emergency surgery, with varying weights based on the strength of the association in a multivariable analysis. The score is calculated by adding these weighted risk factors, with a score ≥45 indicating high risk for PPC.
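The additive scoring just described can be sketched in Python. The point values below are the weights from the ARISCAT derivation study as we understand them and should be verified against the original publication; the function and parameter names are our own:

```python
def ariscat_score(age, spo2, resp_infection_last_month, hb_le_10,
                  incision, duration_hr, emergency):
    """Sum of weighted ARISCAT risk factors (illustrative sketch)."""
    score = 0
    # Age in years
    if 50 < age <= 80:
        score += 3
    elif age > 80:
        score += 16
    # Preoperative oxygen saturation (%)
    if 91 <= spo2 <= 95:
        score += 8
    elif spo2 <= 90:
        score += 24
    if resp_infection_last_month:   # respiratory infection in the prior month
        score += 17
    if hb_le_10:                    # preoperative anemia, Hb <= 10 g/dL
        score += 11
    # Surgical site
    if incision == "upper_abdominal":
        score += 15
    elif incision == "intrathoracic":
        score += 24
    # Duration of surgery (hours)
    if 2 <= duration_hr <= 3:
        score += 16
    elif duration_hr > 3:
        score += 23
    if emergency:
        score += 8
    return score

def ariscat_risk(score):
    # <26 low, 26-44 intermediate, >=45 high
    if score < 26:
        return "low"
    if score < 45:
        return "intermediate"
    return "high"
```

For example, a 72-year-old with an SpO2 of 94% undergoing 3.5 hours of elective upper abdominal surgery scores 3 + 8 + 15 + 23 = 49, placing them in the high-risk group.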
Findings
The authors examined 5099 patients from 63 European hospitals; their definition of PPC included respiratory failure, pulmonary infection, pleural effusion, atelectasis, pneumothorax, bronchospasm, and aspiration pneumonitis. PPC rates were as follows: low risk, 3.39%; intermediate risk, 12.98%; and high risk, 38.01%. The positive likelihood ratio for PPC among the highest-risk group was 7.12, and the C statistic for fit was 0.80. Observed PPC rates were higher than predicted for the low- (3.39% vs 0.87%) and intermediate- (12.98% vs 7.82%) risk groups.
Cautions
The calibration slopes were less than ideal in all subsamples, with the Western European sample performing better than the other geographic areas, suggesting that the coefficients of the ARISCAT index may benefit from recalibration to match specific populations.
Implications
This is the first major pulmonary risk index that has been externally validated. Its use of readily available clinical information, simplicity, and accuracy in estimating PPC risk make it an important addition to the toolkit during a preoperative evaluation.
PERIOPERATIVE ATRIAL FIBRILLATION/ANTICOAGULATION
Gialdini G, Nearing K, Bhave P, et al. Perioperative atrial fibrillation and the long-term risk of ischemic stroke. JAMA. 2014;312(6):616–622.
Background
New‐onset atrial fibrillation (AF) is the most common perioperative arrhythmia.[11] However, little is known regarding the long‐term risks of ischemic stroke in patients who develop perioperative AF. This retrospective cohort study examined adults with no preexisting history of AF, hospitalized for surgery, and discharged free of cerebrovascular disease between 2007 and 2011 (n=1,729,360).
Findings
Of the eligible patients, 1.43% (95% CI: 1.41%‐1.45%) developed perioperative AF, and 0.81% (95% CI: 0.79%‐0.82%) had a stroke up to 1 year after discharge. Perioperative AF was associated with subsequent stroke after both cardiac (HR: 1.3; 95% CI: 1.1‐1.6) and noncardiac surgery (HR: 2; 95% CI: 1.7‐2.3). The association with stroke was stronger for perioperative AF after noncardiac versus cardiac surgery (P<0.001 for interaction).
Cautions
This is a retrospective cohort study, using claims data to identify AF and stroke. Data on duration of the perioperative AF episodes or use of antithrombotic therapies were not available.
Implications
The association found between perioperative AF and long‐term risk of ischemic stroke may suggest that perioperative AF, especially after noncardiac surgery, should be treated aggressively in terms of thromboembolic risk; however, further data will be required to validate this association.
Van Diepen S, Youngson E, Ezekowitz J, McAlister F. Which risk score best predicts perioperative outcomes in nonvalvular atrial fibrillation patients undergoing noncardiac surgery? Am Heart J. 2014;168(1):60–67.
Background
Patients with nonvalvular AF (NVAF) are at increased risk for adverse perioperative outcomes after noncardiac surgery.[12] The RCRI is commonly used to predict perioperative cardiovascular events for all patients, including those with NVAF, though AF is not part of this risk assessment. The goal of this retrospective cohort study was to examine the prognostic utility of already existing NVAF risk indices, including the CHADS2 (Congestive heart failure, Hypertension, Age ≥75 years, Diabetes mellitus, prior Stroke or transient ischemic attack), CHA2DS2-VASc (Congestive heart failure; Hypertension; Age ≥75 years; Diabetes mellitus; Stroke, TIA, or thromboembolism [TE]; Vascular disease; Age 65 to 74 years; Sex category [female]), and R2CHADS2 (Renal dysfunction, Congestive heart failure, Hypertension, Age, Diabetes, Stroke/TIA) scores, for perioperative outcomes in patients undergoing noncardiac surgery.
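All three indices are simple additive scores over the factors listed above. A minimal sketch of the standard point assignments (function and parameter names are our own; boolean inputs count as 0 or 1):

```python
def chads2(chf, htn, age, dm, stroke_tia):
    # CHADS2: 1 point each for CHF, hypertension, age >= 75, diabetes;
    # 2 points for prior stroke/TIA. Range 0-6.
    return chf + htn + (age >= 75) + dm + 2 * stroke_tia

def cha2ds2_vasc(chf, htn, age, dm, stroke_tia_te, vascular, female):
    # CHA2DS2-VASc: age >= 75 scores 2 and age 65-74 scores 1; adds
    # vascular disease and female sex. Range 0-9.
    age_pts = 2 if age >= 75 else (1 if age >= 65 else 0)
    return chf + htn + age_pts + dm + 2 * stroke_tia_te + vascular + female

def r2chads2(chf, htn, age, dm, stroke_tia, renal_dysfunction):
    # R2CHADS2: CHADS2 plus 2 points for renal dysfunction (CrCl < 60 mL/min).
    return chads2(chf, htn, age, dm, stroke_tia) + 2 * renal_dysfunction
```

For example, a 78-year-old woman with hypertension and diabetes scores CHADS2 = 3 but CHA2DS2-VASc = 5, illustrating how the expanded score reclassifies risk.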
Findings
A population dataset of NVAF patients (n=32,160) who underwent noncardiac surgery was examined, with outcome measures including 30-day mortality, stroke, TIA, or systemic embolism. The incidence of the 30-day composite outcome was 4.2%, and the C indices were 0.65 for the RCRI, 0.67 for CHADS2, 0.67 for CHA2DS2-VASc, and 0.68 for R2CHADS2. The Net Reclassification Index (NRI), a measure evaluating the improvement in prediction performance gained by adding a marker to a set of baseline predictors, was calculated. All NVAF scores performed better than the RCRI for predicting mortality risk (NRI: 12.3%, 8.4%, and 13.3%, respectively; all P<0.01).
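The C index compared here is the probability that, for a randomly chosen pair of patients in which one had the outcome and one did not, the risk score ranks the affected patient higher (0.5 = no discrimination, 1.0 = perfect). A minimal sketch of the computation, with ties counting as half:

```python
from itertools import product

def c_index(scores, outcomes):
    """Concordance index: fraction of (event, non-event) pairs in which the
    event patient has the higher risk score; ties contribute 0.5."""
    events = [s for s, o in zip(scores, outcomes) if o]
    nonevents = [s for s, o in zip(scores, outcomes) if not o]
    pairs = concordant = 0.0
    for e, n in product(events, nonevents):
        pairs += 1
        if e > n:
            concordant += 1
        elif e == n:
            concordant += 0.5
    return concordant / pairs
```

With this definition, the reported values of 0.65 to 0.68 indicate only modest discrimination for all four scores, which is consistent with the Cautions below.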
Cautions
Patients in the highest risk category by RCRI appear to have a higher unadjusted 30-day mortality risk (8%) than that predicted by the other 3 scores (5%, 5.6%, and 5%), indicating that these risk scores should not completely supplant the RCRI for risk stratification in this population. In addition, the overall improvement in predictive capacity of the CHADS2, CHA2DS2-VASc, and R2CHADS2 scores, although superior to the RCRI, is modest.
Implications
These findings indicate that preoperative risk stratification for patients with NVAF undergoing noncardiac surgery can be improved by utilizing the CHADS2, CHA2DS2-VASc, or R2CHADS2 scores. For patients with NVAF identified as high risk for adverse outcomes, this assessment can be integrated into the preoperative discussion of the risks and benefits of surgery.
Steinberg BA, Peterson ED, Kim S, et al. Use and outcomes associated with bridging during anticoagulation interruptions in patients with atrial fibrillation: findings from the Outcomes Registry for Better Informed Treatment of Atrial Fibrillation (ORBIT-AF). Circulation. 2015;131:488–494.
Background
Oral anticoagulation (OAC) significantly reduces the risk of stroke in patients with AF. Many AF patients on long‐term anticoagulation undergo procedures requiring temporary interruption of OAC. Although guidelines have been published on when and how to initiate bridging therapy, they are based on observational data. Thus, it remains unclear which patients should receive bridging anticoagulation.
Findings
This study examined a US registry of outpatients with AF who had temporary interruptions of OAC for a procedure. Of 7372 patients treated with OAC, 2803 interruption events occurred in 2200 patients (30%). Bridging anticoagulants were used in 24% (n=665). Bleeding events were more common in bridged than nonbridged patients (5.0% vs 1.3%; adjusted OR: 3.84; P<0.0001). The overall composite end point of myocardial infarction, stroke or systemic embolism, major bleeding, hospitalization, or death within 30 days was significantly higher in patients receiving bridging (13% vs 6.3%; adjusted OR: 1.94; P=0.0001). This statistically significant increase in the composite outcome, which includes cardiovascular events, is most likely in part secondary to inclusion of bleeding events. The recently published BRIDGE (Bridging Anticoagulation in Patients who Require Temporary Interruption of Warfarin Therapy for an Elective Invasive Procedure or Surgery) trial did not find a statistically significant difference in cardiovascular events between bridged and nonbridged patients.[13]
Cautions
Although patients who were bridged appear to have had more comorbidities and a higher mean CHADS2 score than patients who were not bridged, it is difficult to determine which patients may be at high enough risk to warrant bridging, as indicated by current American College of Chest Physicians guidelines, because this was not evaluated in this study.
Implications
The use of bridging anticoagulation was significantly associated with higher overall bleeding and adverse event rates. The BRIDGE trial also found that forgoing bridging anticoagulation decreased the risk of major bleeding in patients with AF and was noninferior to bridging for the prevention of arterial TE.[13]
Given the rapid expansion of the field of perioperative medicine, clinicians need to remain apprised of the current evidence to ensure optimization of patient care. In this update, we review 10 key articles from the perioperative literature, with the goal of summarizing the most clinically important evidence over the past year. This summary of recent literature in perioperative medicine is derived from the Update in Perioperative Medicine sessions presented at the 10th Annual Perioperative Medicine Summit and the Society of General Internal Medicine 38th Annual Meeting. A systematic search strategy was used to identify pertinent articles, and the following were selected by the authors based on their relevance to the clinical practice of perioperative medicine.
PERIOPERATIVE CARDIOVASCULAR CARE
Fleisher LA, Fleischmann KE, Auerbach AD, et al. 2014 ACC/AHA guideline on perioperative cardiovascular evaluation and management of patients undergoing noncardiac surgery: a report of the American College of Cardiology/American Heart Association Task Force on practice guidelines. Circulation. 2014;130:e278e333.
Background
The American College of Cardiology/American Heart Association (ACC/AHA) perioperative guideline provides recommendations for the evaluation and management of cardiovascular disease in patients undergoing noncardiac surgery.
Findings
The new guideline combines the evaluation of surgery‐ and patient‐specific risk in the algorithm for preoperative cardiovascular evaluation into a single step and recommends the use of 1 of 3 tools: the Revised Cardiac Risk Index (RCRI),[1] National Surgical Quality Improvement Program (NSQIP) Surgical Risk Calculator,[2] or the NSQIP‐derived myocardial infarction and cardiac arrest calculator.[3] Estimation of risk is also simplified by stratification into only 2 groups: low risk (risk of major adverse cardiac event <1%) and elevated risk (1% risk). Coronary evaluation can be considered for patients with elevated cardiac risk and poor functional capacity, but is advised only if the results would alter perioperative management. For example, a patient with very high risk who has evidence of ischemia on stress testing may choose to forego surgery. Preoperative coronary revascularization is only indicated for patients meeting criteria in the nonsurgical setting.
For patients with previous percutaneous coronary intervention, the ACC/AHA has not changed its recommendations to optimally delay surgery for at least 30 days after bare‐metal stenting and at least 1 year after drug‐eluting stent (DES) placement. However, in patients with a DES placed 6 to 12 months previously, surgery can be performed if the risks of surgical delay outweigh the risks of DES thrombosis. After any type of coronary stenting, dual antiplatelet therapy should be continued uninterrupted through the first 4 to 6 weeks and even later whenever feasible. If not possible, aspirin therapy should be maintained through surgery unless bleeding risk is too high.
The guideline recommends perioperative continuation of ‐blockers in patients taking them chronically. Preoperative initiation of ‐blocker therapy may be considered for patients with myocardial ischemia on stress testing or 3 RCRI factors and should be started far enough in advance to allow determination of patient's tolerance prior to surgery.
Cautions
Many recommendations are based on data from nonrandomized trials or expert opinion, and the data in areas such as perioperative β‐blockade continue to evolve.
Implications
The ACC/AHA guideline continues to be a critically valuable resource for hospitalists providing perioperative care to noncardiac surgery patients.
Wijeysundera DN, Duncan D, Nkonde‐Price C, et al. Perioperative beta blockade in noncardiac surgery: a systematic review for the 2014 ACC/AHA guideline on perioperative cardiovascular evaluation and management of patients undergoing noncardiac surgery: a report of the American College of Cardiology/American Heart Association Task Force on practice guidelines.
J Am Coll Cardiol. 2014;64(22):2406–2425.
Background
Various clinical trials have reported conflicting results regarding the efficacy and safety of perioperative β‐blockers, resulting in guideline committees changing their recommendations. Because of questions raised regarding the scientific integrity of the DECREASE (Dutch Echocardiographic Cardiac Risk Evaluation Applying Stress Echocardiography)‐I[4] and DECREASE‐IV[5] trials, as well as the dosing of β‐blockers in the POISE (PeriOperative Ischemic Evaluation) study,[6] this systematic review was performed in conjunction with the ACC/AHA guideline update[7] to evaluate the data with and without these trials.
Findings
Sixteen randomized controlled trials (RCTs) (n=12,043) and 1 cohort study (n=348) were included in the analysis. Perioperative β‐blockers were associated with a reduction in nonfatal myocardial infarction (MI) (relative risk [RR]: 0.69; 95% confidence interval [CI]: 0.58‐0.82; P<0.001) but an increase in bradycardia (RR: 2.61; 95% CI: 2.18‐3.12), hypotension (RR: 1.47; 95% CI: 1.34‐1.60), and nonfatal stroke (RR: 1.76; 95% CI: 1.07‐2.91; P=0.02). The POISE trial was the only one demonstrating a statistically significant increase in stroke.
The major discrepancy between the DECREASE trials and the other RCTs was related to mortality: a reduction in both cardiovascular and all‐cause death in DECREASE but an increased risk of all‐cause death in the other trials.
Cautions
Because of its size, the POISE trial heavily influences the results, particularly for mortality and stroke. Including the DECREASE trials reduces the otherwise increased risk for death to a null effect. Exclusion of the POISE and DECREASE trials leaves few data to make conclusions about the safety and efficacy of perioperative β‐blockade. Several cohort studies have found metoprolol to be associated with worse outcomes than atenolol or bisoprolol (which were preferred by the European Society of Cardiology guidelines).[8]
Implications
Perioperative β‐blockade started within 1 day of noncardiac surgery was associated with fewer nonfatal MIs but at the cost of an increase in hypotension, bradycardia, and a possible increase in stroke and death. Long‐term β‐blockade should be continued perioperatively, whereas the decision to initiate a β‐blocker should be individualized. If starting a β‐blocker perioperatively, it should be done at least 2 days before surgery.
Botto F, Alonso‐Coello P, Chan MT, et al.; on behalf of The Vascular events In noncardiac Surgery patIents cOhort evaluatioN (VISION) Investigators. Myocardial injury after noncardiac surgery: a large, international, prospective cohort study establishing diagnostic criteria, characteristics, predictors, and 30‐day outcomes. Anesthesiology. 2014;120(3):564–578.
Background
Many patients sustain myocardial injury in the perioperative period as evidenced by troponin elevations, but most do not meet diagnostic criteria for MI. Myocardial injury after noncardiac surgery (MINS) is defined as prognostically relevant myocardial injury due to ischemia that occurs within 30 days after noncardiac surgery. This international, prospective cohort study of 15,065 patients ≥45 years old who underwent inpatient noncardiac surgery determined diagnostic criteria, characteristics, predictors, and 30‐day outcomes of MINS.
Findings
The diagnostic criterion for MINS was a peak troponin T level ≥0.03 ng/mL judged to be due to an ischemic etiology. Twelve independent predictors of MINS were identified, including age ≥75 years, known cardiovascular disease or risk factors, and surgical factors. MINS was an independent predictor of 30‐day mortality (adjusted hazard ratio [HR]: 3.87; 95% CI: 2.96‐5.08). Age >75 years, ST elevation or new left bundle branch block, and anterior ischemic findings were independent predictors of 30‐day mortality among patients with MINS.
Cautions
Although screening high‐risk surgical patients without signs or symptoms of ischemia with postoperative troponins will increase the frequency of diagnosing MINS, evidence for an effective treatment has not yet been established. The ACC/AHA guidelines state that routine screening is of uncertain benefit for this reason.
Implications
Because MINS is common and carries a poor 30‐day prognosis, clinical trials are needed to determine when to obtain postoperative troponins and how to prevent and treat this complication.[9] Some observational data from POISE suggest that aspirin and statins can reduce the risk of 30‐day mortality in patients with postoperative MIs.
Devereaux PJ, Mrkobrada M, Sessler DI, et al. for the POISE‐2 Investigators. Aspirin in patients undergoing noncardiac surgery. N Engl J Med. 2014;370(16):1494–1503.
Devereaux PJ, Sessler DI, Leslie K, et al. for the POISE‐2 Investigators. Clonidine in patients undergoing noncardiac surgery. N Engl J Med. 2014;370(16):1504–1513.
Background
Medical risk reduction with aspirin and other agents in perioperative patients remains controversial. The POISE‐2 trial is a blinded RCT examining the effects of aspirin and clonidine on outcomes in >10,000 noncardiac surgery patients at risk of cardiovascular complications. The aspirin arm of the study included the initiation group and the continuation stratum, as well as placebo. Patients in the clonidine portion of the trial received 0.2 mg of clonidine or placebo daily for the same time periods.
Findings
The primary outcome was a composite of death or nonfatal MI within 30 days of surgery. Outcomes were similar in patients initiated or continued on aspirin. No difference was seen between aspirin and placebo in the primary outcome (7.0% vs 7.1%; HR: 0.99; 95% CI: 0.86‐1.15; P=0.92). There were no differences in rates of MI, venous thromboembolism, or stroke. Major bleeding rates were higher in aspirin‐ versus placebo‐treated patients (4.6% vs 3.8%; HR: 1.23; 95% CI: 1.01‐1.49; P=0.04).
Clonidine did not alter the composite outcome of death or nonfatal MI (7.3% vs 6.8%; HR: 1.08; 95% CI: 0.93‐1.26; P=0.29). Clinically significant hypotension, bradycardia, and nonfatal cardiac arrest were more common in clonidine‐treated patients, although no difference was detected in stroke rates.
Cautions
Although patients in the trial had cardiovascular risk factors, <24% of patients had known coronary artery disease, and <5% had coronary stents. Conclusions based on this trial regarding perioperative management of antiplatelet therapy should not include patients with coronary artery stents.
Implications
Aspirin started before surgery and continued perioperatively did not decrease the rate of death or nonfatal MI but increased the risk of major bleeding. Perioperative management of aspirin needs to be undertaken in the context of cardiac and bleeding risks. Clonidine also did not improve outcomes and increased the risk of bradycardia and hypotension. Current guidelines recommend against using alpha‐2 agonists for prevention of perioperative cardiac events[7]; however, patients already on alpha‐2 agonists should not stop them abruptly.
PERIOPERATIVE PULMONARY CARE
Mutter TC, Chateau D, Moffatt M, et al. A matched cohort study of postoperative outcomes in obstructive sleep apnea: could preoperative diagnosis and treatment prevent complications? Anesthesiology. 2014;121(4):707–718.
Background
An increasing body of literature associates obstructive sleep apnea (OSA) with an increased risk of postoperative complications. Despite evidence of risk, potential benefits of preoperative diagnosis and treatment of OSA remain unclear.
Findings
Using databases to identify patients prescribed continuous positive airway pressure (CPAP) therapy, the study compared postoperative outcomes of patients who underwent surgery any time after polysomnography (PSG) and CPAP prescription (diagnosed OSA [DOSA]) and those who had surgery during the 5 years preceding their PSG (undiagnosed OSA [UOSA]). These patients were matched with patients who underwent the same procedure for the same indication and had no insurance claims for PSG or diagnosis of sleep‐disordered breathing.
After multivariate analysis, OSA of any type was associated with increased pulmonary complications (odds ratio [OR]: 2.08; 95% CI: 1.35‐2.19). However, no significant differences in respiratory outcomes were noted between DOSA patients (N=2640) and those with UOSA (N=1571). DOSA patients did have fewer cardiovascular complications than UOSA patients (OR: 0.34; 95% CI: 0.15‐0.77). Only severe OSA (apnea‐hypopnea index >30) was associated with increased pulmonary and cardiovascular complications.
Cautions
Although this study suggests an association between preoperative diagnosis and treatment of OSA and reduced cardiovascular complications, the results are not definitive due to the inability to control for all confounding variables in a retrospective study utilizing an administrative database.
Implications
OSA is an important risk factor for postoperative complications, and this study suggests that preoperative treatment with CPAP is associated with reduced risk of cardiovascular complications, particularly in patients with severe OSA. Future controlled trials should focus on the risk‐reduction potential of preoperative diagnosis and treatment of OSA.
Mazo V, Sabat S, Canet J, et al. Prospective external validation of a predictive score for postoperative pulmonary complications. Anesthesiology. 2014;121:219–231.
Background
In 2010, Canet et al. published a novel risk index, the Assess Respiratory Risk in Surgical Patients in Catalonia (ARISCAT) index, to provide a quantitative estimate of the risk of postoperative pulmonary complications (PPCs).[10]
In the current report, Mazo and colleagues studied the ARISCAT index in a broader sample to characterize its accuracy in predicting PPC risk. The ARISCAT index is derived from clinical risk factors: (1) age, (2) preoperative oxygen saturation, (3) respiratory infection in the prior month, (4) anemia, (5) surgical site, (6) duration of surgery, and (7) emergency surgery, with varying weights based on the strength of the association in a multivariable analysis. The score is calculated by adding these weighted risk factors, with a score ≥45 indicating high risk for PPC.
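Because the index is a simple weighted sum, the calculation can be sketched in a few lines. The weights and cutpoints below are those reported in the original ARISCAT derivation (Canet et al., reference 10) and should be verified against the published index before any clinical use:

```python
def ariscat_score(age, spo2, resp_infection_last_month, hb_le_10,
                  incision, duration_h, emergency):
    """ARISCAT postoperative pulmonary complication risk score (sketch).

    Weights are as reported in the original derivation (Canet et al.,
    Anesthesiology 2010); verify before use.
    incision: 'peripheral', 'upper_abdominal', or 'intrathoracic'
    """
    score = 0
    if 51 <= age <= 80:                      # age 51-80 y
        score += 3
    elif age > 80:                           # age >80 y
        score += 16
    if 91 <= spo2 <= 95:                     # preoperative SpO2 91%-95%
        score += 8
    elif spo2 <= 90:                         # preoperative SpO2 <=90%
        score += 24
    if resp_infection_last_month:            # respiratory infection in prior month
        score += 17
    if hb_le_10:                             # preoperative anemia (Hb <=10 g/dL)
        score += 11
    score += {'peripheral': 0,               # surgical incision site
              'upper_abdominal': 15,
              'intrathoracic': 24}[incision]
    if 2 < duration_h <= 3:                  # surgery duration >2-3 h
        score += 16
    elif duration_h > 3:                     # surgery duration >3 h
        score += 23
    if emergency:                            # emergency procedure
        score += 8
    # Risk strata: <26 low, 26-44 intermediate, >=45 high
    risk = 'low' if score < 26 else 'intermediate' if score < 45 else 'high'
    return score, risk
```

For example, a 70‐year‐old with an SpO2 of 93% and a respiratory infection in the prior month accumulates 3 + 8 + 17 = 28 points, placing the patient in the intermediate‐risk stratum.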
Findings
Examining 5099 patients from 63 European hospitals, the authors defined PPC as a composite of respiratory failure, pulmonary infection, pleural effusion, atelectasis, pneumothorax, bronchospasm, and aspiration pneumonitis. PPC rates were as follows: low risk (3.39%), intermediate risk (12.98%), and high risk (38.01%). The positive likelihood ratio for PPC among the highest‐risk group was 7.12. The C statistic for fit was 0.80. Observed PPC rates were higher than predicted for the low (3.39% vs 0.87%) and intermediate (12.98% vs 7.82%) risk groups.
Cautions
The calibration slopes were less than ideal in all subsamples, with the Western European sample performing better than the other geographic areas, suggesting that the coefficients of the ARISCAT index may benefit from recalibration to match specific populations.
Implications
This is the first major pulmonary risk index that has been externally validated. Its use of readily available clinical information, simplicity, and accuracy in estimating PPC risk make it an important addition to the toolkit during a preoperative evaluation.
PERIOPERATIVE ATRIAL FIBRILLATION/ANTICOAGULATION
Gialdini G, Nearing K, Bhave P, et al. Perioperative atrial fibrillation and the long‐term risk of ischemic stroke. JAMA. 2014;312(6):616–622.
Background
New‐onset atrial fibrillation (AF) is the most common perioperative arrhythmia.[11] However, little is known regarding the long‐term risks of ischemic stroke in patients who develop perioperative AF. This retrospective cohort study examined adults with no preexisting history of AF, hospitalized for surgery, and discharged free of cerebrovascular disease between 2007 and 2011 (n=1,729,360).
Findings
Of the eligible patients, 1.43% (95% CI: 1.41%‐1.45%) developed perioperative AF, and 0.81% (95% CI: 0.79%‐0.82%) had a stroke up to 1 year after discharge. Perioperative AF was associated with subsequent stroke after both cardiac (HR: 1.3; 95% CI: 1.1‐1.6) and noncardiac surgery (HR: 2; 95% CI: 1.7‐2.3). The association with stroke was stronger for perioperative AF after noncardiac versus cardiac surgery (P<0.001 for interaction).
Cautions
This is a retrospective cohort study, using claims data to identify AF and stroke. Data on duration of the perioperative AF episodes or use of antithrombotic therapies were not available.
Implications
The association found between perioperative AF and long‐term risk of ischemic stroke may suggest that perioperative AF, especially after noncardiac surgery, should be treated aggressively in terms of thromboembolic risk; however, further data will be required to validate this association.
Van Diepen S, Youngson E, Ezekowitz J, McAlister F. Which risk score best predicts perioperative outcomes in nonvalvular atrial fibrillation patients undergoing noncardiac surgery? Am Heart J. 2014;168(1):60–67.
Background
Patients with nonvalvular AF (NVAF) are at increased risk for adverse perioperative outcomes after noncardiac surgery.[12] The RCRI is commonly used to predict perioperative cardiovascular events for all patients, including those with NVAF, though AF is not part of this risk assessment. The goal of this retrospective cohort study was to examine the prognostic utility of already existing NVAF risk indices, including the CHADS2 (Congestive heart failure, Hypertension, Age ≥75 years, Diabetes mellitus, prior stroke or transient ischemic attack), CHA2DS2‐VASc (Congestive heart failure; Hypertension; Age ≥75 years; Diabetes mellitus; Stroke, TIA, or thromboembolism [TE]; Vascular disease; Age 65 to 74 years; Sex category [female]), and R2CHADS2 (Renal dysfunction, Congestive heart failure, Hypertension, Age, Diabetes, Stroke/TIA) for perioperative outcomes in patients undergoing noncardiac surgery.
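These indices are simple additive scores. As an illustration, the standard CHA2DS2‐VASc calculation can be expressed in a few lines (this is a sketch of the conventional scoring rules, not code from the study):

```python
def cha2ds2_vasc(chf, hypertension, age, diabetes, stroke_tia_te,
                 vascular_disease, female):
    """CHA2DS2-VASc stroke risk score for nonvalvular AF (range 0-9)."""
    score = 0
    score += 1 if chf else 0                # C: congestive heart failure
    score += 1 if hypertension else 0       # H: hypertension
    if age >= 75:                           # A2: age >=75 y scores 2 points...
        score += 2
    elif age >= 65:                         # A: ...age 65-74 y scores 1
        score += 1
    score += 1 if diabetes else 0           # D: diabetes mellitus
    score += 2 if stroke_tia_te else 0      # S2: prior stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0   # V: vascular disease
    score += 1 if female else 0             # Sc: sex category (female)
    return score
```

A 68‐year‐old woman with hypertension and no other risk factors, for instance, scores 1 (age) + 1 (hypertension) + 1 (female) = 3.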
Findings
A population dataset of NVAF patients (n=32,160) who underwent noncardiac surgery was examined, with outcome measures including 30‐day mortality, stroke, TIA, or systemic embolism. The incidence of the 30‐day composite outcome was 4.2%, and the C indices were 0.65 for the RCRI, 0.67 for CHADS2, 0.67 for CHA2DS2‐VASc, and 0.68 for R2CHADS2. The Net Reclassification Index (NRI), a measure evaluating the improvement in prediction performance gained by adding a marker to a set of baseline predictors, was calculated. All NVAF scores performed better than the RCRI for predicting mortality risk (NRI: 12.3%, 8.4%, and 13.3%, respectively; all P<0.01).
Cautions
Patients in the highest risk category by RCRI appear to have an unadjusted higher 30‐day mortality risk (8%) than that predicted by the other 3 scores (5%, 5.6%, and 5%), indicating that these risk scores should not completely supplant the RCRI for risk stratification in this population. In addition, the overall improvement in predictive capacity of the CHADS2, CHA2DS2‐VASc, and R2CHADS2, although superior to the RCRI, is modest.
Implications
These findings indicate that the preoperative risk stratification for patients with NVAF can be improved by utilizing the CHADS2, CHA2DS2‐VASc, or R2CHADS2 scores when undergoing noncardiac surgery. For patients with NVAF identified as high risk for adverse outcomes, this assessment can be integrated into the preoperative discussion on the risks/benefits of surgery.
Steinberg BA, Peterson ED, Kim S, et al. Use and outcomes associated with bridging during anticoagulation interruptions in patients with atrial fibrillation: findings from the Outcomes Registry for Better Informed Treatment of Atrial Fibrillation (ORBIT‐AF). Circulation. 2015;131:488–494.
Background
Oral anticoagulation (OAC) significantly reduces the risk of stroke in patients with AF. Many AF patients on long‐term anticoagulation undergo procedures requiring temporary interruption of OAC. Although guidelines have been published on when and how to initiate bridging therapy, they are based on observational data. Thus, it remains unclear which patients should receive bridging anticoagulation.
Findings
This is a US registry of outpatients with AF with temporary interruptions of OAC for a procedure. Of 7372 patients treated with OAC, 2803 overall interruption events occurred in 2200 patients (30%). Bridging anticoagulants were used in 24% (n=665). Bleeding events were more common in bridged than nonbridged patients (5.0% vs 1.3%; adjusted OR: 3.84; P<0.0001). The overall composite end point of myocardial infarction, stroke or systemic embolism, major bleeding, hospitalization, or death within 30 days was significantly higher in patients receiving bridging (13% vs 6.3%; adjusted OR: 1.94; P=0.0001). This statistically significant increase in the composite outcome, which includes cardiovascular events, is most likely in part secondary to inclusion of bleeding events. The recently published BRIDGE (Bridging Anticoagulation in Patients who Require Temporary Interruption of Warfarin Therapy for an Elective Invasive Procedure or Surgery) trial did not find a statistically significant difference in cardiovascular events between bridged and nonbridged patients.[13]
Cautions
Although patients who were bridged appear to have had more comorbidities and a higher mean CHADS2 score than patients who were not bridged, it is difficult to determine which population of patients may be at high enough risk to warrant bridging, as indicated by current American College of Chest Physicians guidelines, because this was not evaluated in the study.
Implications
The use of bridging anticoagulation was significantly associated with higher overall bleeding and adverse event rates. The BRIDGE trial also found that forgoing bridging anticoagulation decreased the risk of major bleeding in patients with AF and was noninferior to bridging for the prevention of arterial TE.[13]
- Derivation and prospective evaluation of a simple index for prediction of cardiac risk of major noncardiac surgery. Circulation. 1999;100:1043–1049.
- Development and evaluation of the universal ACS NSQIP surgical risk calculator: a decision aid and informed consent tool for patients and surgeons. J Am Coll Surg. 2013;217(5):833–842.
- Development and validation of a risk calculator for prediction of cardiac risk after surgery. Circulation. 2011;124:381–387.
- The effect of bisoprolol on perioperative mortality and myocardial infarction in high‐risk patients undergoing vascular surgery. Dutch Echocardiographic Cardiac Risk Evaluation Applying Stress Echocardiography Study Group. N Engl J Med. 1999;341(24):1789–1794.
- Dutch Echocardiographic Cardiac Risk Evaluation Applying Stress Echocardiography Study Group. Bisoprolol and fluvastatin for the reduction of perioperative cardiac mortality and myocardial infarction in intermediate‐risk patients undergoing noncardiovascular surgery: a randomized controlled trial (DECREASE‐IV). Ann Surg. 2009;249(6):921–926.
- POISE Study Group. Effects of extended‐release metoprolol succinate in patients undergoing non‐cardiac surgery (POISE trial): a randomised controlled trial. Lancet. 2008;371(9627):1839–1847.
- American College of Cardiology; American Heart Association. 2014 ACC/AHA guideline on perioperative cardiovascular evaluation and management of patients undergoing noncardiac surgery: a report of the American College of Cardiology/American Heart Association Task Force on practice guidelines. J Am Coll Cardiol. 2014;64(22):e77–e137.
- 2014 ESC/ESA guidelines on non‐cardiac surgery: cardiovascular assessment and management: the Joint Task Force on non‐cardiac surgery: cardiovascular assessment and management of the European Society of Cardiology (ESC) and the European Society of Anaesthesiology (ESA). Eur Heart J. 2014;35(35):2383–2431.
- The long‐term impact of early cardiovascular therapy intensification for postoperative troponin elevation after major vascular surgery. Anesth Analg. 2014;119(5):1053–1063.
- ARISCAT Group. Prediction of postoperative pulmonary complications in a population‐based surgical cohort. Anesthesiology. 2010;113:1338–1350.
- Noncardiac surgery: postoperative arrhythmias. Crit Care Med. 2000;28(10 suppl):N145–N150.
- Incidence, predictors, and outcomes associated with postoperative atrial fibrillation after major cardiac surgery. Am Heart J. 2012;164(6):918–924.
- Perioperative bridging anticoagulation in patients with atrial fibrillation. N Engl J Med. 2015;373(9):823–833.
Critical Literature 2014
Keeping up with the medical literature in a field as broad as hospital medicine is a daunting task. In 2014 alone, there were over 9200 articles published in top‐tier internal medicine journals.[1] The authors have selected articles from among these top journals using a nonsystematic process that involved reviewing articles brought to their attention via colleagues, literature searches, and online services. The focus was to identify articles that would be of importance to the field of hospital medicine for their potential to be practice changing, provocative, or iconoclastic. After culling through hundreds of titles and abstracts, 46 articles were reviewed by both authors in full text, and ultimately 14 were selected for presentation here. Table 1 summarizes the key points.
1. Now that neprilysin inhibitors are approved by the FDA, hospitalists will see them prescribed as an alternative to ACE‐inhibitors given their impressive benefits in cardiovascular mortality and heart failure hospitalizations.
2. Current evidence suggests that intravenous contrast given with CT scans may not significantly alter the incidence of acute kidney injury, its associated mortality, or the need for hemodialysis.
3. The CAM‐S score is an important tool for prognostication in delirious patients. Those patients with high CAM‐S scores should be considered for goals‐of‐care conversations.
4. The melatonin agonist ramelteon shows promise for lowering incident delirium among elderly medical patients, though larger trials are still needed.
5. Polyethylene glycol may be an excellent alternative to lactulose for patients with acute hepatic encephalopathy once larger studies are done, as it is well tolerated and shows faster resolution of symptoms.
6. Nonselective β‐blockers should no longer be offered to cirrhotic patients after they develop spontaneous bacterial peritonitis, as they are associated with increased mortality and acute kidney injury.
7. Current guidelines regarding prophylaxis against VTE in medical inpatients likely result in nonbeneficial use of medications for this purpose. It remains unclear which high‐risk populations do benefit from pharmacologic prophylaxis.
8. DOACs are as effective as and safer than conventional therapy for treatment of VTE, though they are not recommended in patients with GFR <30 mL/min.
9. DOACs are more effective and safer (though they may increase the risk of gastrointestinal bleeding) than conventional therapy in patients with AF.
10. DOACs are as safe as and more effective than conventional therapy in elderly patients with VTE or AF, being mindful of dosing recommendations in this population.
11. Two new once‐weekly antibiotics, dalbavancin and oritavancin, approved for skin and soft tissue infections, appear noninferior to vancomycin and have the potential to shorten hospitalizations and, in doing so, may decrease cost.
12. Offering family members of a patient undergoing CPR the opportunity to observe has a durable impact on meaningful short‐ and long‐term psychological outcomes. Clinicians should strongly consider making this offer.
AN APPROACHING PARADIGM SHIFT IN THE TREATMENT FOR HEART FAILURE
McMurray J, Packer M, Desai A, et al. Angiotensin‐neprilysin inhibition versus enalapril in heart failure. N Engl J Med. 2014;371:993–1004.
Background
The last drug approved by the Food and Drug Administration (FDA) for heart failure (HF) was 10 years ago.[2] The new PARADIGM (Prospective Comparison of ARNI With ACEI to Determine Impact on Global Mortality and Morbidity in Heart Failure) heart failure study, comparing a novel combination of a neprilysin inhibitor and an angiotensin receptor blocker (ARB) with an angiotensin‐converting enzyme (ACE) inhibitor, has cardiologists considering a possible change in the HF treatment algorithm. Neprilysin is a naturally occurring enzyme that breaks down the protective vasoactive peptides (brain natriuretic peptide, atrial natriuretic peptide, and bradykinin) made by the heart and the body in HF. These vasoactive peptides function to increase vasodilation and block sodium and water reabsorption. The neprilysin inhibitor extends the life of these vasoactive peptides, thus enhancing their effect. By inhibiting both neprilysin and the renin‐angiotensin system, there should be additional improvement in HF management. The neprilysin inhibitor was combined with an ARB instead of an ACE inhibitor because of significant angioedema seen in earlier‐phase trials when combined with an ACE inhibitor, believed to be related to increases in bradykinin caused by both agents.
Findings
In this multicenter, blinded, randomized trial, over 10,000 patients with known HF (ejection fraction ≤35%, New York Heart Association class II or higher) went through 2 run‐in periods to ensure tolerance of both enalapril and the study drug, a combination of a neprilysin inhibitor and valsartan (neprilysin‐I/ARB). Eventually 8442 patients underwent randomization to either enalapril (10 mg twice a day) or neprilysin‐I/ARB (200 mg twice a day). The primary outcome was a combination of cardiovascular mortality and heart failure hospitalizations. The trial was stopped early at 27 months because of overwhelming benefit with neprilysin‐I/ARB (21.8% vs 26.5%; P<0.001). There was a 20% reduction specifically in cardiovascular mortality (13.3% vs 16.5%; hazard ratio [HR]: 0.80; P<0.001). The number needed to treat (NNT) was 32. There was also a 21% reduction in the risk of hospitalization (P<0.001). More patients on neprilysin‐I/ARB had symptomatic hypotension (14% vs 9.2%; P<0.001), but patients on the ACE inhibitor experienced more cough, hyperkalemia, and increases in serum creatinine.
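The reported NNT of 32 can be reproduced from the cardiovascular mortality rates above (16.5% with enalapril vs 13.3% with neprilysin‐I/ARB); a quick sanity check:

```python
import math

# Cardiovascular death rates reported in the trial
rate_enalapril = 0.165
rate_neprilysin_arb = 0.133

# NNT = 1 / absolute risk reduction, conventionally rounded up
arr = rate_enalapril - rate_neprilysin_arb   # absolute risk reduction = 0.032
nnt = math.ceil(1 / arr)                     # 1 / 0.032 = 31.25 -> 32
print(nnt)  # 32
```

That is, roughly 32 patients would need to be treated with neprilysin‐I/ARB rather than enalapril to prevent one cardiovascular death over the trial period.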
Cautions
There are 2 reasons clinicians may not see the same results in practice. First, the trial was stopped early, which can sometimes exaggerate benefits.[3] Second, the 2 run‐in periods eliminated patients who could not tolerate the medications at the trial doses. Additionally, although the study's authors were independent, the trial was funded by a pharmaceutical company.
Implications
This new combination drug of a neprilysin inhibitor and valsartan shows great promise at reducing cardiovascular mortality and hospitalizations for heart failure compared to enalapril alone. Given the high morbidity and mortality of heart failure, having a new agent in the treatment algorithm will be useful to patients and physicians. The drug was just approved by the FDA in July 2015 and will likely be offered as an alternative to ACE inhibitors.
VENOUS CONTRAST‐INDUCED NEPHROTOXICITY: IS THERE REALLY A RISK?
McDonald J, McDonald R, Carter R, et al. Risk of intravenous contrast material‐mediated acute kidney injury: a propensity score‐matched study stratified by baseline‐estimated glomerular filtration rate. Radiology. 2014;271(1):65–73.
McDonald R, McDonald J, Carter R, et al. Intravenous contrast material exposure is not an independent risk factor for dialysis or mortality. Radiology. 2014;273(3):714–725.
Background
It is a common practice to withhold intravenous contrast material from computed tomography (CT) scans in patients with even moderately poor renal function out of concern for causing contrast‐induced nephropathy (CIN). Our understanding of CIN is based largely on observational studies and outcomes of cardiac catheterizations, where larger amounts of contrast are given intra‐arterially into an atherosclerotic aorta.[4] The exact mechanism of injury is not clear, possibly direct tubule toxicity or renal vasoconstriction.[5] CIN is defined as a rise in serum creatinine of >0.5 mg/dL or >25% over baseline 24 to 48 hours after receiving intravenous contrast. Although it is usually self‐limited, there is concern that patients who develop CIN have an increased risk of dialysis and death.[6] In the last few years, radiologists have started to question whether the risk of CIN is overstated. A recent meta‐analysis of 13 studies demonstrated a similar likelihood of acute kidney injury in patients regardless of receiving intravenous contrast.[7] If the true incidence of CIN after venous contrast is actually lower, this raises the question of whether we are unnecessarily withholding contrast from CTs and thereby reducing their diagnostic accuracy. Two 2014 observational studies provide additional evidence that the concern for CIN may be overstated.
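The CIN definition above reduces to a simple threshold check. As a minimal sketch (the 0.5 mg/dL and 25% cutoffs come straight from the definition in the text):

```python
def meets_cin_criteria(baseline_cr, followup_cr):
    """Contrast-induced nephropathy per the definition above:
    a rise in serum creatinine of >0.5 mg/dL or >25% over baseline,
    measured 24-48 h after contrast administration (creatinine in mg/dL).
    """
    rise = followup_cr - baseline_cr
    return rise > 0.5 or rise > 0.25 * baseline_cr
```

For a baseline creatinine of 2.0 mg/dL, a rise to 2.4 mg/dL (0.4 mg/dL, 20%) meets neither criterion, whereas a rise from 1.0 to 1.3 mg/dL (30%) meets the relative one.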
Findings
The 2 Mayo Clinic studies used the same database. They looked at all patients who underwent a contrast-enhanced or unenhanced thoracic, abdominal, or pelvic CT between January 2000 and December 2010 at the Mayo Clinic. After limiting the data to patients with pre- and post-CT creatinine measurements and excluding anyone on dialysis, with preexisting acute kidney injury, or who had received additional contrast within 14 days, they ended up with 41,229 patients, mostly inpatients. All of the patients were assigned propensity scores based on risk factors for the development of CIN and the likelihood that they would receive contrast. The patients were then subdivided into 4 renal function subgroups based on estimated glomerular filtration rate (eGFR). The patients who received contrast were matched based on their propensity scores to those who did not receive contrast within their eGFR subgroups. Unmatched patients were eliminated, leaving a cohort of 12,508 matched patients. The outcome of the first article was acute kidney injury (AKI), defined as a rise in creatinine >0.5 mg/dL at 24 to 48 hours. Though AKI rates rose across worsening eGFR subgroups (eGFR > 90 [1.2%] vs eGFR < 30 [14%]), the rates of AKI were the same regardless of contrast exposure. There was no statistical difference in any of the eGFR subgroups. The second study looked at important clinical outcomes: death and the need for dialysis. There was no statistical difference for emergent dialysis (odds ratio [OR]: 0.96; P=0.89) or 30-day mortality (HR: 0.97; P=0.45) regardless of whether the patients received contrast.
Cautions
In propensity matching, unmeasured confounders can bias the results. However, the issue of whether venous contrast causes CIN will unlikely be settled in a randomized controlled trial. For patients with severe renal failure (eGFR < 30), there were far fewer patients in this subgroup, making it harder to draw conclusions. The amount of venous contrast given was not provided. Finally, this study evaluated intravenous contrast for CTs, not intra‐arterial contrast.
Implications
These 2 studies raise doubt as to whether the incidence of AKI after contrast-enhanced CT can be attributed to the contrast itself. The rise in creatinine is probably multifactorial, including laboratory variation, hydration, blood pressure changes, nephrotoxic drugs, and comorbid disease. In trying to decide whether to obtain a contrast-enhanced CT for patients with chronic kidney dysfunction, these studies provide more evidence to consider in the decision-making process. A conversation with the radiologist about the benefits gained from using contrast in an individual patient may be of value.
PREVENTION AND PROGNOSIS OF INPATIENT DELIRIUM
Hatta K, Kishi Y, Wada K, et al. Preventive effects of ramelteon on delirium: a randomized placebo-controlled trial. JAMA Psychiatry. 2014;71(4):397–403.
A new melatonin agonist dramatically reduces delirium incidence.
Background
Numerous medications and therapeutic approaches have been studied to prevent incident delirium in hospitalized medical and surgical patients with varying success. Many of the tested medications also have the potential for significant undesirable side effects. An earlier small trial of melatonin appeared to have impressive efficacy for this purpose and to be well tolerated, but the substance is not regulated by the FDA.[8] Ramelteon, a melatonin receptor agonist, is approved by the FDA for insomnia, and the authors hypothesized that it, too, may be effective in delirium prevention.
Findings
This study was a multicenter, single‐blinded, randomized controlled trial of the melatonin‐agonist ramelteon versus placebo in elderly patients admitted to the hospital ward or ICU with serious medical conditions. Researchers excluded intubated patients or those with Lewy body dementia, psychiatric disorders, and severe liver disease. Patients received either ramelteon or placebo nightly for up to a week, and the primary end point was incident delirium as determined by a blinded observer using a validated assessment tool. Sixty‐seven patients were enrolled. The baseline characteristics in the arms of the trial were similar. In the placebo arm, 11 of 34 patients (32%) developed delirium during the 7‐day observation period. In the ramelteon arm, 1 of 33 (3%) developed delirium (P=0.003). The rate of drug discontinuation was the same in each arm.
Cautions
This study is small, and the single‐blinded design (the physicians and patients knew which group they were in but the observers did not) limits the validity of these results, mandating a larger double‐blinded trial.
Implications
Ramelteon showed a dramatic impact on preventing incident delirium in elderly hospitalized patients with serious medical conditions admitted to the ward or intensive care unit (ICU) (nonintubated) in this small study. If larger trials concur with the impact of this well-tolerated and inexpensive medication, the potential for delirium incidence reduction could have a dramatic impact on how care for delirium-vulnerable patients is conducted as well as on the systems-level costs associated with delirium care. Further studies of this class of medications are needed to more definitively establish its value in delirium prevention.
THE CONFUSION ASSESSMENT METHOD SEVERITY SCORE CAN QUANTIFY PROGNOSIS FOR DELIRIOUS MEDICAL INPATIENTS
Inouye SK, Kosar CM, Tommet D, et al. The CAM-S: development and validation of a new scoring system for delirium in 2 cohorts. Ann Intern Med. 2014;160:526–533.
Background
Delirium is common in hospitalized elderly patients, and numerous studies show that there are both short- and long-term implications of developing delirium. Well-studied and validated tools have made identifying delirium fairly straightforward, yet its treatment remains difficult. Additionally, differentiating which patients will have a simpler clinical course from those at risk for a more morbid one has proved challenging. Using the Confusion Assessment Method (CAM), in both its short (4-item) and long (10-item) forms, as the basis for a prognostication tool would give future research on treatment a scale against which to measure impact, and would allow clinicians to anticipate which patients are more likely to have difficult clinical courses.
Findings
The CAM Severity (CAM-S) score was derived in 1219 subjects participating in 2 ongoing studies: 1 included high-risk medical inpatients 70 years old or older, and the other included similarly aged patients undergoing major orthopedic, general, or vascular surgeries. Outcomes data were not available for the surgical patients. The CAM items were rated as either present/absent or absent/mild/severe, depending on the item, with an associated score attached to each item such that the 4-item CAM has a score range of 0 to 7 and the 10-item CAM 0 to 19 (Table 2). Clinical outcomes from the medical patient cohort showed a dose-response relationship with increasing CAM-S scores with respect to length of stay, adjusted cost, combined 90-day endpoints of skilled nursing facility placement or death, and 90-day mortality. Specifically, for patients with a CAM-S (short form) score of 5 to 7, the 90-day rate of death or nursing home residence was 62%, whereas the 90-day postdischarge mortality rate was 36%.
| The CAM | Rating | The CAM-S |
|---|---|---|
| Acute onset with fluctuating course | Absent | 0 |
| | Present | 1 |
| Inattention or distractibility | Absent | 0 |
| | Mild | 1 |
| | Severe | 2 |
| Disorganized thinking, illogical or unclear ideas | Absent | 0 |
| | Mild | 1 |
| | Severe | 2 |
| Alteration of consciousness | Absent | 0 |
| | Mild | 1 |
| | Severe | 2 |
| Total | | 0–7 |
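The short-form scoring can be expressed as a small helper. This is a sketch, not code from the article; it assumes the published CAM-S short-form convention of 0/1 for the fluctuating-course item and 0/1/2 (absent/mild/severe) for the three graded items, giving a 0 to 7 total:

```python
# Item scores for the 4-item (short form) CAM-S, per the table above.
SCORES = {
    "acute_onset_fluctuating": {"absent": 0, "present": 1},
    "inattention": {"absent": 0, "mild": 1, "severe": 2},
    "disorganized_thinking": {"absent": 0, "mild": 1, "severe": 2},
    "altered_consciousness": {"absent": 0, "mild": 1, "severe": 2},
}

def cam_s_short(ratings: dict) -> int:
    """Sum the 4 item scores; the short-form total ranges from 0 to 7."""
    return sum(SCORES[item][ratings[item]] for item in SCORES)

# Hypothetical worst-case patient: every feature present/severe.
severe_case = {
    "acute_onset_fluctuating": "present",
    "inattention": "severe",
    "disorganized_thinking": "severe",
    "altered_consciousness": "severe",
}
print(cam_s_short(severe_case))  # 7
```

A patient scoring in the 5 to 7 band computed this way falls in the group with the 62% rate of 90-day death or nursing home residence reported above.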
Cautions
The CAM‐S, like the CAM, may work less well in patients with hypoactive delirium. This scale has been applied in a surgical cohort, but study outcomes were not presented in this article. This absence limits our ability to apply these results to a surgical population presently.
Implications
This study demonstrates that in medical inpatients, the CAM‐S is effective for prognostication. Moreover, the study points out that high‐scoring patients on the CAM‐S have quite poor prognoses, with more than one‐third dying by 3 months. This finding suggests that an important use of the CAM‐S is to identify patients about whom goals of care discussions should be held and end‐of‐life planning initiated if not previously done.
GET EXCITED ABOUT HEPATIC ENCEPHALOPATHY AGAIN: A NEW POSSIBLE TREATMENT
Rahimi R, Singal A, Cuthbert J, et al. Lactulose vs polyethylene glycol 3350-electrolyte solution for treatment of overt hepatic encephalopathy: the HELP randomized clinical trial. JAMA Intern Med. 2014;174(11):1727–1733.
Background
Lactulose has been the principal treatment for acute hepatic encephalopathy (HE) since 1966.[9] It theoretically works by lowering the pH of the colon and trapping ammonia as ammonium, which is then expelled. Alternatively, it may simply decrease transit time through the colon. In fact, earlier treatments for HE were cathartics such as magnesium salts. Unfortunately, 20% to 30% of patients are poor responders to lactulose, and patients do not like it. This new study tests whether a modern-day cathartic, polyethylene glycol, works as well as lactulose.
Findings
In this unblinded, randomized controlled trial, patients presenting to the emergency department with acute HE were assigned to either lactulose 20 to 30 g for a minimum of 3 doses over 24 hours or 4 L of polyethylene glycol (PEG) over 4 hours. The 2 groups were similar in severity and etiology of liver disease. Patients were allowed to have received 1 dose of lactulose given in the emergency department prior to study enrollment. They were excluded if taking rifaximin. The primary outcome was improvement in the hepatic encephalopathy scoring algorithm (HESA) by 1 grade at 24 hours.[10] The algorithm scores HE from 0 (no clinical findings of HE) to 5 (comatose). Initial mean HESA scores in the 2 groups were identical (2.3).
In the lactulose group, 13/25 (52%) improved by at least 1 HESA score at 24 hours. Two patients (8%) completely cleared with a HESA score of 0. In comparison, 21/23 (91%) in the PEG group improved at 24 hours, and 10/23 (43%) had cleared with a HESA score of 0 (P<0.01). The median time to HE resolution was 2 days in the lactulose group compared with 1 day in the PEG group (P=0.01). There were no differences in serious adverse events. The majority (76%) of the PEG group received the full 4 L of PEG.
Cautions
The main limitations of the trial were the small sample size, the single-center design, and the fact that it was unblinded. Additionally, 80% of the PEG group received 1 dose of lactulose prior to enrollment. Statistically, more patients in the PEG group developed hypokalemia, which can worsen HE. Therefore, if PEG is used for acute HE, potassium will need to be monitored.
Implications
The results are intriguing and may represent a new possible treatment for acute HE once larger studies are done. Interestingly, the ammonia level dropped further in the lactulose group than the PEG group, yet there was more cognitive improvement in the PEG group. This raises questions about the role of ammonia and catharsis in HE. Although lactulose and rifaximin continue to be the standard of care, cathartics may be returning as a viable alternative.
SHOULD β-BLOCKERS BE STOPPED IN PATIENTS WITH CIRRHOSIS WHEN SPONTANEOUS BACTERIAL PERITONITIS OCCURS?
Mandorfer M, Bota S, Schwabi P, et al. Nonselective beta blockers increase risk for hepatorenal syndrome and death in patients with cirrhosis and spontaneous bacterial peritonitis. Gastroenterology. 2014;146:1680–1690.
Background
Nonselective β-blockers (NSBBs) are considered the "aspirin of hepatologists," as they are used for primary and secondary prevention of variceal bleeds in patients with cirrhosis.[11] Since the 1980s, their benefit in reducing bleeding risk has been known, and more recently there has been evidence that they may reduce the risk of developing ascites in patients with compensated cirrhosis. Yet there has been some contradictory evidence suggesting reduced survival in patients on NSBBs who have decompensated cirrhosis and infections. This has led to the "window hypothesis" of NSBBs in cirrhosis, under which NSBBs are beneficial only during a certain window period in the progression of cirrhosis.[12] Early in cirrhosis, before the development of varices or ascites, NSBBs have no benefit. As cirrhosis progresses and portal hypertension develops, NSBBs play a major role in reducing bleeding from varices. However, in advanced cirrhosis, NSBBs may become harmful. In theory, they block the body's attempt to increase cardiac output during situations of increased physiologic stress, resulting in decreased mean arterial pressure and perfusion. This, in turn, causes end-organ damage and an increased risk of death. When exactly this NSBB window closes is unclear. A 2014 study suggests the window should close when patients develop spontaneous bacterial peritonitis (SBP).
Findings
This retrospective study followed 607 consecutive patients seen at a liver transplant center in Vienna, Austria, from 2006 to 2011. All of the patients were followed from the time of their first paracentesis. They were excluded if SBP was diagnosed during the first paracentesis. Patients were grouped based on whether they took an NSBB. As expected, more patients on an NSBB had varices (90% vs 62%; P<0.001) and a lower mean heart rate (77.5 vs 83.9 beats/minute; P<0.001). However, the 2 groups were similar in mean arterial pressure, systolic blood pressure, Model for End-Stage Liver Disease score (17.5), Child-Pugh score (CPS) (50% were class C), and etiology of cirrhosis (55% alcoholic liver disease). The patients were followed for development of SBP. The primary outcome was transplant-free survival. For the patients who never developed SBP, there was a 25% reduction in the risk of death for those on an NSBB, adjusted for varices and CPS stage (HR=0.75, P=0.027). However, for the 182 patients who developed SBP, those on an NSBB had a 58% increased risk of death, again adjusted for varices and CPS stage (HR=1.58, P=0.014). Among the patients who developed SBP, there was a higher risk of hepatorenal syndrome (HRS) within 90 days for those on an NSBB (24% vs 11%, P=0.027). Although mean arterial pressure (MAP) had been similar in the 2 groups before SBP, after the development of SBP those on an NSBB had a significantly lower MAP (77.2 vs 82.6 mm Hg, P=0.005).
Cautions
This is a retrospective study, and although the authors controlled for varices and CPS, it is still possible the 2 groups were not similar. Whether patients were actually taking the NSBB is unknown, and doses of the NSBB were variable.
Implications
This study provides more evidence for the NSBB window hypothesis in the treatment of patients with cirrhosis. It suggests that the window on NSBBs closes when patients develop SBP, as NSBBs appear to increase mortality and the risk of HRS. Thus, NSBB therapy should probably be discontinued in cirrhotic patients who develop SBP. The question is for how long? The editorial accompanying the article says "permanently."[13]
VTE PROPHYLAXIS FOR MEDICAL INPATIENTS: IS IT A THING OF THE PAST?
Flanders SA, Greene T, Grant P, et al. Hospital performance for pharmacologic venous thromboembolism prophylaxis and rate of venous thromboembolism: a cohort study. JAMA Intern Med. 2014;174(10):1577–1584.
Background
Based on early research studies, many quality and regulatory organizations have stressed the importance of assessing hospitalized patients' venous thromboembolism (VTE) risk and prophylaxing those patients at increased risk either pharmacologically or mechanically. In 2011, a meta-analysis of 40 studies of medical and stroke patients, including approximately 52,000 patients, failed to demonstrate a mortality benefit, showing that for every 3 pulmonary embolisms (PEs) prevented, prophylaxis caused 4 major bleeding episodes per 1000 patients.[14] A second study in 2011, a multicenter, randomized controlled trial in medically complex patients deemed high risk for VTE, also failed to demonstrate a mortality benefit.[15] Despite these and other trials showing questionable benefit, guidelines continue to recommend that high-risk medical patients should get pharmacologic prophylaxis against VTE.
Findings
This retrospective cohort study evaluated 20,794 medical patients (non-ICU) across 35 hospitals, excluding those with a Caprini score of <2 (ie, low risk for VTE). The authors divided the hospitals into tertiles based on adherence to VTE prophylaxis guidelines. Patients were followed to 90 days after hospitalization with telephone calls (reaching 56%) and chart reviews (100% reviewed) to identify clinically evident VTE events, excluding those that occurred within the first 3 days of the index hospitalization. The study identified no statistically significant differences among the tertiles in terms of VTE rates, either in the hospital or at 90 days, though the overall VTE event rate was low. Interestingly, 85% of events took place postdischarge. Subgroup analyses also failed to identify a population of medical patients who benefited from prophylaxis.
Cautions
Debate exists about whether the Caprini risk score is the best available VTE risk scoring system. This study also excluded surgical and ICU patients.
Implications
This trial adds to the mounting literature suggesting that current guideline-based pharmacologic VTE prophylaxis for medical patients may offer no clear benefit in terms of incident VTE events or mortality. Although it is not yet time to abandon VTE prophylaxis completely, this study does raise the important question of whether it is time to revisit the quality guidelines and regulatory standards around VTE prophylaxis in medical inpatients. It also highlights the difficulty in assessing medical patients for their VTE risk. Though this study is provocative and important for its real-world setting, further studies are required.
OUT WITH THE OLD AND IN WITH THE NEW? SHOULD DIRECT ORAL ANTICOAGULANTS BE OUR FIRST CHOICE FOR CARING FOR PATIENTS WITH VTE AND ATRIAL FIBRILLATION?
van Es N, Coppens M, Schulman S, et al. Direct oral anticoagulants compared with vitamin K antagonists for acute venous thromboembolism: evidence from phase 3 trials. Blood. 2014;124(12):1968–1975.
For patients with acute VTE, direct oral anticoagulants work as well and are safer.
Background
There have been 6 large published randomized controlled trials of direct oral anticoagulants (DOACs) versus vitamin K antagonists (VKAs) in patients with acute VTE. Study sizes ranged from approximately 2500 to over 8000 subjects. All showed no significant difference between the arms with respect to efficacy (VTE or VTE-related death) but had variable results with respect to major bleeding risk, a major concern given the nonreversibility of this group of medications. Additionally, subgroup analysis within these studies was challenging given sample size issues.
Findings
These 6 studies were combined in a meta-analysis to address the DOACs' overall efficacy and safety profile, as well as to examine prespecified subgroups. The meta-analysis included data from over 27,000 patients, evenly divided between DOACs (edoxaban, apixaban, rivaroxaban, and dabigatran) and VKAs, with the time in the therapeutic range (TTR) in the VKA arm being 64%. Overall, the primary efficacy endpoint (VTE and VTE-related death) was similar (DOACs relative risk [RR]=0.90; 95% confidence interval [CI]: 0.77-1.06), but major bleeding (DOACs RR=0.61; 95% CI: 0.45-0.83; NNT=150) and combined fatal and intracranial bleeding (DOACs RR=0.37; 95% CI: 0.27-0.68; NNT=314) favored the DOACs. In subgroup analysis, there was no efficacy difference between the therapeutic groups in the subsets specifically with DVT or with PE, or in patients weighing >100 kg, though safety data in these subsets were not evaluable. Patients with creatinine clearances of 30 to 49 mL/min demonstrated similar efficacy in both treatment arms, and the safety analysis in this subset with moderate renal impairment favored the DOAC arm. Cancer patients achieved better efficacy with similar safety with the DOACs, whereas elderly patients achieved both better safety and efficacy with DOACs.
Cautions
As yet, there are inadequate data on patients with more advanced renal failure (creatinine clearance <30 mL/min) to advise using DOACs in that subset. Also, as there were no data comparing cancer patients with VTE that investigated DOACs versus low molecular weight heparins (the standard of care rather than warfarin since the CLOT [Comparison of Low‐molecular‐weight heparin versus Oral anticoagulant Therapy] trial[16]), the current meta‐analysis does not yet answer whether DOACs should be used in this population despite the efficacy benefit noted in the subgroup analysis.
Implications
This large meta-analysis strongly suggests we can achieve comparable treatment efficacy with the DOACs as with VKAs, with better safety profiles, in patients with acute VTE. In the subset of patients with moderate renal impairment (creatinine clearance 30–49 mL/min), it appears safe and effective to choose DOACs.
IN PATIENTS WITH ATRIAL FIBRILLATION, DOACs APPEAR MORE EFFECTIVE THAN VKAs WITH COMPARABLE OR BETTER SAFETY PROFILES
Ruff CT, Giugliano RP, Braunwald E, et al. Comparison of the efficacy and safety of new oral anticoagulants with warfarin in patients with atrial fibrillation: a meta-analysis of randomized trials. Lancet. 2014;383(9921):955–962.
Background
Adding to the previously published meta-analyses of the original phase 3 randomized trials of DOACs versus VKAs for atrial fibrillation (AF), a 2013 trial of edoxaban, ENGAGE AF-TIMI 48 (Effective Anticoagulation with Factor Xa Next Generation in Atrial Fibrillation–Thrombolysis in Myocardial Infarction 48), was published and warrants inclusion to allow a better opportunity to glean important subgroup information.[17]
Findings
This meta-analysis included data on 71,683 patients, 42,411 in the DOAC arm and 29,272 in the warfarin arm, as 2 of the trials were 3-arm studies comparing warfarin to a high dose and a low dose of the DOAC. Meta-analyses of the 4 trials were broken down into a high-dose subset (the 2 high-dose arms plus the standard doses used in the other 2 trials) and a low-dose subset (the 2 low-dose arms plus the standard doses used in the other 2 trials). With respect to the efficacy endpoint (incident stroke or systemic embolization), the high-dose subset analyses of the DOACs yielded a 19% reduction (P<0.0001; NNT=142) relative to the VKAs. The safety endpoint of major bleeding in this analysis identified a 14% reduction in the DOAC group that was nonsignificant (P=0.06). Within the high-dose subset, analyses favored DOACs with respect to hemorrhagic stroke (51% reduction; P<0.0001; NNT=220), intracranial hemorrhage (52% reduction; P<0.0001; NNT=132), and overall mortality (10% reduction; P=0.0003; NNT=129), whereas they increased the risk of gastrointestinal bleeding (25% increase; P=0.043; NNH=185). There was no significant difference between DOACs and warfarin with respect to ischemic stroke. The low-dose subset had similar overall results, with even fewer hemorrhagic strokes balancing a higher incidence of ischemic strokes in the DOAC arm than with warfarin. Other important subgroup analyses suggest the safety and efficacy impact of DOACs held for both VKA-naive and VKA-experienced patients, though it was statistically significant only for VKA-naive patients. Additionally, the anticoagulation centers in the studies with a TTR <66% seemed to gain a safety advantage from the DOACs, whereas both TTR groups (<66% and ≥66%) appeared to achieve an efficacy benefit from DOACs.
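The NNT and NNH figures quoted in these analyses are the reciprocal of the absolute risk difference between arms. A small sketch of that arithmetic, with hypothetical event rates chosen only to illustrate the calculation (not taken from the trials):

```python
import math

def nnt(control_rate: float, treated_rate: float) -> int:
    """Number needed to treat: reciprocal of the absolute risk
    reduction (ARR), conventionally rounded up to a whole patient."""
    arr = control_rate - treated_rate
    return math.ceil(1 / arr)

# Hypothetical illustration: event rates of 3.8% vs 3.1% give an
# ARR of 0.7%, so roughly 1/0.007 = about 143 patients must be
# treated to prevent one event.
print(nnt(0.038, 0.031))  # 143
```

The same reciprocal applied to an absolute risk *increase* (eg, the gastrointestinal bleeding excess above) yields the number needed to harm (NNH).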
Cautions
There are not sufficient data to suggest routinely switching patients tolerating and well managed on VKAs to DOACs for AF.
Implications
DOACs reduce stroke and systemic emboli in patients with AF without increasing intracranial bleeding or hemorrhagic stroke, though at the cost of increased gastrointestinal bleeding with the high-dose regimens. Patients on the low-dose regimens have an even lower hemorrhagic stroke risk, a benefit negated by a higher risk of ischemic stroke than with VKAs. Centers with lower TTRs (and perhaps, by extrapolation, patients with more difficulty staying in the therapeutic range) may gain more benefit by switching. DOAC therapy should be strongly considered as first line for patients newly treated for AF.
IN ELDERLY PATIENTS, THE DOACs APPEAR TO OFFER IMPROVED EFFICACY WITHOUT SACRIFICING SAFETY
Sardar P, Chatterjee S, Chaudhari S, Lip GYH. New oral anticoagulants in elderly adults: evidence from meta-analysis of randomized trials. J Am Geriatr Soc. 2014;62(5):857–864.
Background
The prevalence of AF rises with age, as does the prevalence of malignancy, limited mobility, and other comorbidities that increase the risk for VTEs. These factors may also increase the risk of bleeding with conventional therapy with heparins and VKAs. As such, understanding the implications of using DOACs in the elderly population is important.
Findings
This meta-analysis included the elderly (age ≥75 years) subset of patients from existing randomized trials of AF treatment and of VTE treatment and prophylaxis comparing DOACs with VKAs, low-molecular-weight heparin (LMWH), aspirin, or placebo. The primary safety outcome was major bleeding. For AF trials, the efficacy endpoint was stroke or systemic embolization, whereas in VTE trials it was VTE or VTE-related death. The authors were able to extract data on 25,031 patients across 10 trials that evaluated rivaroxaban, apixaban, and dabigatran (not edoxaban), with follow-up data ranging from 35 days to 2 years. For safety outcomes, the 2 arms showed no statistical difference (DOAC: 6.4%; conventional therapy: 6.3%; OR: 1.02; 95% CI: 0.73-1.43). For efficacy endpoints in VTE studies, DOACs were more effective (3.7% vs 7.0%; OR: 0.45; 95% CI: 0.27-0.77; NNT=30). For AF, the efficacy analysis also favored DOACs (3.3% vs 4.7%; OR: 0.65; 95% CI: 0.48-0.87; NNT=71). When analyzed by individual DOAC, rivaroxaban and apixaban both appeared to outperform the VKA/LMWH arm for both VTE and AF treatment, whereas data on dabigatran were available only for AF, also showing an efficacy benefit. Individual DOAC analyses for safety endpoints showed all 3 to be similar to VKA/LMWH.
Cautions
Authors note, however, that coexisting low body weight and renal insufficiency may influence dosing choices in this population. There are specific dosage recommendations in the elderly for some DOACs.
Implications
The use of DOACs in patients aged 75 years and older appears to confer a substantial efficacy advantage when used for treatment of VTE and AF patients. The safety data presented in this meta‐analysis suggest that this class is comparable to VKA/LMWH medications.
CHANGING INPATIENT MANAGEMENT OF SKIN INFECTIONS
Boucher H, Wilcox M, Talbot G, et al. Once-weekly dalbavancin versus daily conventional therapy for skin infection. N Engl J Med. 2014;370:2169–2179.
Corey G, Kabler H, Mehra P, et al. Single-dose oritavancin in the treatment of acute bacterial skin infections. N Engl J Med. 2014;370:2180–2190.
Background
There are over 870,000 hospital admissions yearly for skin infection, making it one of the most common reasons for hospitalization in the United States.[18] Management often requires lengthy treatment with intravenous antibiotics, especially with the emergence of methicillin-resistant Staphylococcus aureus. Results from 2 large randomized, double-blinded, multicenter clinical trials of new once-weekly intravenous antibiotics were published. Dalbavancin and oritavancin are both lipoglycopeptides in the same family as vancomycin. What is unique is that their serum drug concentrations exceed the minimum inhibitory concentrations for over a week. Both drugs were compared with vancomycin in noninferiority trials. The studies had similar outcomes. The dalbavancin results are presented below.
Findings
Researchers randomized 1312 patients with significant cellulitis, large abscess, or wound infection. Patients also had fever, leukocytosis, or bandemia, and the infection had to be deemed severe enough to require a minimum of 3 days of intravenous antibiotics. The patients could not have received any prior antibiotics. Over 80% of the patients had fevers, and more than half met the criteria for systemic inflammatory response syndrome. Patients were randomized to either dalbavancin (on day 1 and day 8) or vancomycin every 12 hours (1 g or 15 mg/kg), with both groups receiving placebo dosing of the other drug. The blinded physicians could decide to switch to an oral agent (placebo, or linezolid in the vancomycin group) anytime after day 3, and could stop antibiotics anytime after day 10. Otherwise, all patients received 14 days of antibiotics.
The FDA-approved outcome was cessation of spread of erythema at 48 to 72 hours and no fever at 3 independent readings. Results were similar in the dalbavancin group compared with the vancomycin-linezolid group (79.7% vs 79.8%). Dalbavancin was deemed noninferior to vancomycin. The blinded investigators' assessment of treatment success at 2 weeks was also similar (96% vs 96.7%, respectively). More treatment-related adverse events occurred in the vancomycin-linezolid group (183 vs 139; P=0.02), and more deaths occurred in the vancomycin group (7 vs 1; P=0.03).
Cautions
These antibiotics have only been shown effective for complicated, acute bacterial skin infections. Their performance for other gram‐positive infections is unknown. In the future, it is possible that patients with severe skin infections will receive a dose of these antibiotics on hospital day 1 and be sent home with close follow‐up. However, that study has not been done yet to confirm efficacy and safety. Though the drugs appear safe, there needs to be more clinical use before they become standard of care, especially because of the long half‐life. Finally, these drugs are very expensive and provide broad spectrum gram‐positive coverage. They are not meant for a simple cellulitis.
Implications
These 2 new once-weekly antibiotics, dalbavancin and oritavancin, are noninferior to vancomycin for acute bacterial skin infections. They provide alternative treatment choices for managing patients with significant infections requiring hospitalization. In the future, they may change the need for hospitalization of these patients or significantly reduce their length of stay. Though the drugs are expensive, a significant reduction in hospitalization would offset costs.
SHOULD THEY STAY OR SHOULD THEY GO? FAMILY PRESENCE DURING CPR MAY IMPROVE THE GRIEF PROCESS DURABLY
Jabre P, Tazarourte K, Azoulay E, et al. Offering the opportunity for family to be present during cardiopulmonary resuscitation: 1-year assessment. Intensive Care Med. 2014;40:981–987.
Background
In 2013, a French study randomized adult family members of a patient undergoing cardiopulmonary resuscitation (CPR) occurring at home to either be invited to stay and watch the resuscitation or to have no specific invitation offered.[19] At 90 days, this study revealed that those who were invited to watch (and 79% did) had fewer symptoms of post‐traumatic stress disorder (PTSD) (27% vs 37%) and anxiety (15% vs 23%), though not depression, than did the group not offered the opportunity to watch (though 43% watched anyway). There were 570 subjects (family members) in the trial, of whom a greater number in the control arm declined to participate in a 90‐day follow‐up due to emotional distress. Notably, only 4% of the patients in this study undergoing CPR survived to day 28. Whether the apparent positive psychological impact of the offer to watch CPR for families was durable remained in question.
Findings
The study group followed the families up to 1 year. At that time, dropout rates were similar (with the assumption, as in the prior study, that those who dropped out of either arm had PTSD symptoms). At follow‐up, subjects were again assessed for PTSD, anxiety, and depression symptoms as well as for meeting criteria for having had a major depressive episode or complicated grief. Four hundred eight of the original 570 subjects were able to undergo reevaluation. The 1‐year results showed the group offered the chance to watch CPR had fewer PTSD symptoms (20% vs 32%) and depression symptoms (10% vs 16%), as well as fewer major depressive episodes (23% vs 31%) and less complicated grief (21% vs 36%) but without a durable impact on anxiety symptoms.
Cautions
The resuscitation efforts in question here occurred out of hospital (in the home). Part of the protocol for those family members observing CPR was that a clinician was assigned to stay with them and explain the resuscitation process as it occurred.
Implications
It is postulated that having the chance to observe CPR, if desired, may help the grieving process. This study clearly raises a question about the wisdom of routinely escorting patients' families out of the room during resuscitative efforts. It seems likely that the durable and important psychological effects observed in this study for family members would similarly persist in emergency department and inpatient settings, where staff can be with patients' families to talk them through the events they are witnessing. It is time to ask families if they prefer to stay and watch CPR and not automatically move them to a waiting room.
Disclosure: Nothing to report.
- Journals in the 2014 release of the JCR. Available at: http://scientific.thomsonreuters.com/imgblast/JCRFullCovlist-2014.pdf. Accessed August 28, 2015.
- Neprilysin inhibition: a novel therapy for heart failure. N Engl J Med. 2014;371(11):1062–1064.
- Stopping randomized trials early for benefit and estimation of treatment effects: systematic review and meta‐regression analysis. JAMA. 2010;303(12):1180–1187.
- Intravenous contrast medium‐induced nephrotoxicity: is the medical risk really as great as we have come to believe? Radiology. 2010;256(1):21–28.
- Pathophysiology of contrast medium‐induced nephropathy. Kidney Int. 2005;68(1):14–22.
- Contrast‐induced acute kidney injury: short‐ and long‐term implications. Semin Nephrol. 2011;31(3):300–309.
- Frequency of acute kidney injury following intravenous contrast medium administration: a systematic review and meta‐analysis. Radiology. 2013;267(1):119–128.
- Melatonin decreases delirium in elderly patients: a randomized, placebo‐controlled trial. Int J Geriatr Psychiatry. 2011;26(7):687–694.
- Lactulose in the treatment of chronic portal‐systemic encephalopathy: a double‐blind clinical trial. N Engl J Med. 1969;281(8):408–412.
- Performance of the hepatic encephalopathy scoring algorithm in a clinical trial of patients with cirrhosis and severe hepatic encephalopathy. Am J Gastroenterol. 2009;104(6):1392–1400.
- The changing role of beta‐blocker therapy in patients with cirrhosis. J Hepatol. 2014;60(3):643–653.
- The window hypothesis: haemodynamic and non‐haemodynamic effects of beta‐blockers improve survival of patients with cirrhosis during a window in the disease. Gut. 2012;61(7):967–969.
- When should the beta‐blocker window in cirrhosis close? Gastroenterology. 2014;146(7):1597–1599.
- Venous thromboembolism prophylaxis in hospitalized medical patients and those with stroke: a background review for an American College of Physicians Clinical Practice Guideline. Ann Intern Med. 2011;155(9):602–615.
- LIFENOX Investigators. Low‐molecular‐weight heparin and mortality in acutely ill medical patients. N Engl J Med. 2011;365(26):2463–2472.
- Randomized Comparison of Low‐Molecular‐Weight Heparin versus Oral Anticoagulant Therapy for the Prevention of Recurrent Venous Thromboembolism in Patients with Cancer (CLOT) Investigators. Low‐molecular‐weight heparin versus a coumarin for the prevention of recurrent venous thromboembolism in patients with cancer. N Engl J Med. 2003;349(2):146–153.
- ENGAGE AF‐TIMI 48 Investigators. Edoxaban versus warfarin in patients with atrial fibrillation. N Engl J Med. 2013;369(22):2093–2104.
- Pharmacology and the treatment of complicated skin and skin‐structure infections. N Engl J Med. 2014;370(23):2238–2239.
- Family presence during cardiopulmonary resuscitation. N Engl J Med. 2013;368(11):1008–1018.
Keeping up with the medical literature in a field as broad as hospital medicine is a daunting task. In 2014 alone, there were over 9200 articles published in top‐tier internal medicine journals.[1] The authors have selected articles from among these top journals using a nonsystematic process that involved reviewing articles brought to their attention via colleagues, literature searches, and online services. The focus was to identify articles that would be of importance to the field of hospital medicine for their potential to be practice changing, provocative, or iconoclastic. After culling through hundreds of titles and abstracts, 46 articles were reviewed by both authors in full text, and ultimately 14 were selected for presentation here. Table 1 summarizes the key points.
1. Now that neprilysin inhibitors are approved by the FDA, hospitalists will see them prescribed as an alternative to ACE inhibitors given their impressive benefits in cardiovascular mortality and heart failure hospitalizations.
2. Current evidence suggests that intravenous contrast given with CT scans may not significantly alter the incidence of acute kidney injury, its associated mortality, or the need for hemodialysis.
3. The CAM‐S score is an important tool for prognostication in delirious patients. Those patients with high CAM‐S scores should be considered for goals‐of‐care conversations.
4. The melatonin agonist ramelteon shows promise for lowering incident delirium among elderly medical patients, though larger trials are still needed.
5. Polyethylene glycol may be an excellent alternative to lactulose for patients with acute hepatic encephalopathy once larger studies are done, as it is well tolerated and shows faster resolution of symptoms.
6. Nonselective β‐blockers should no longer be offered to cirrhotic patients after they develop spontaneous bacterial peritonitis, as they are associated with increased mortality and acute kidney injury.
7. Current guidelines regarding prophylaxis against VTE in medical inpatients likely result in nonbeneficial use of medications for this purpose. It remains unclear which high‐risk populations do benefit from pharmacologic prophylaxis.
8. DOACs are as effective as and safer than conventional therapy for treatment of VTE, though they are not recommended in patients with GFR <30 mL/min.
9. DOACs are more effective and safer (though they may increase the risk of gastrointestinal bleeding) than conventional therapy in patients with AF.
10. DOACs are as safe as and more effective than conventional therapy in elderly patients with VTE or AF, being mindful of dosing recommendations in this population.
11. Two new once‐weekly antibiotics, dalbavancin and oritavancin, approved for skin and soft tissue infections, appear noninferior to vancomycin and have the potential to shorten hospitalizations and, in doing so, may decrease cost.
12. Offering family members of a patient undergoing CPR the opportunity to observe has a durable impact on meaningful short‐ and long‐term psychological outcomes. Clinicians should strongly consider making this offer.
AN APPROACHING PARADIGM SHIFT IN THE TREATMENT FOR HEART FAILURE
McMurray J, Packer M, Desai A, et al. Angiotensin‐neprilysin inhibition versus enalapril in heart failure. N Engl J Med. 2014;371:993–1004.
Background
The Food and Drug Administration (FDA) last approved a new drug for heart failure (HF) 10 years ago.[2] The new PARADIGM (Prospective Comparison of ARNI With ACEI to Determine Impact on Global Mortality and Morbidity in Heart Failure) heart failure study comparing a novel combination drug of a neprilysin inhibitor and angiotensin receptor blocker (ARB) to an angiotensin‐converting enzyme (ACE) inhibitor has cardiologists considering a possible change in the HF treatment algorithm. Neprilysin is a naturally occurring enzyme that breaks down the protective vasoactive peptides (brain natriuretic peptide, atrial natriuretic peptide, and bradykinin) made by the heart and the body in HF. These vasoactive peptides function to increase vasodilation and block sodium and water reabsorption. This novel neprilysin inhibitor extends the life of these vasoactive peptides, thus enhancing their effect. By inhibiting both neprilysin and the renin‐angiotensin system, there should be additional improvement in HF management. The neprilysin inhibitor was combined with an ARB instead of an ACE inhibitor because of significant angioedema seen in earlier phase trials when combined with an ACE inhibitor. This is believed related to increases in bradykinin due to both agents.
Findings
In this multicenter, blinded, randomized trial, over 10,000 patients with known HF (ejection fraction <35%, New York Heart Association class II or higher) went through 2 run‐in periods to ensure tolerance of both enalapril and the study drug, a combination of a neprilysin inhibitor and valsartan (neprilysin‐I/ARB). Eventually 8442 patients underwent randomization to either enalapril (10 mg twice a day) or neprilysin‐I/ARB (200 mg twice a day). The primary outcome was a combination of cardiovascular mortality and heart failure hospitalizations. The trial was stopped early at 27 months because of overwhelming benefit with neprilysin‐I/ARB (21.8% vs 26.5%; P<0.001). There was a 20% reduction specifically in cardiovascular mortality (13.3% vs 16.5%; hazard ratio [HR]: 0.80; P<0.001). The number needed to treat (NNT) was 32. There was also a 21% reduction in the risk of hospitalization (P<0.001). More patients on neprilysin‐I/ARB had symptomatic hypotension (14% vs 9.2%; P<0.001), but patients on the ACE inhibitor experienced more cough, hyperkalemia, and increases in serum creatinine.
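The NNT quoted above follows directly from the absolute difference in cardiovascular mortality between the arms. A minimal arithmetic sketch (the function name is illustrative; the event rates are the trial figures quoted in this section):

```python
import math

def nnt(control_event_rate: float, treatment_event_rate: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1 / (control_event_rate - treatment_event_rate)

# Cardiovascular mortality: 16.5% with enalapril vs 13.3% with neprilysin-I/ARB.
# 1 / 0.032 is about 31.25; rounding up gives the trial's reported NNT of 32.
print(math.ceil(nnt(0.165, 0.133)))  # 32
```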
Cautions
There are 2 reasons clinicians may not see the same results in practice. First, the trial was stopped early, which can sometimes exaggerate benefits.[3] Second, the 2 run‐in periods eliminated patients who could not tolerate the medications at the trial doses. Additionally, although the study's authors were independent, the trial was funded by a pharmaceutical company.
Implications
This new combination drug of a neprilysin inhibitor and valsartan shows great promise at reducing cardiovascular mortality and hospitalizations for heart failure compared to enalapril alone. Given the high morbidity and mortality of heart failure, having a new agent in the treatment algorithm will be useful to patients and physicians. The drug was just approved by the FDA in July 2015 and will likely be offered as an alternative to ACE inhibitors.
VENOUS CONTRAST‐INDUCED NEPHROTOXICITY: IS THERE REALLY A RISK?
McDonald J, McDonald R, Carter R, et al. Risk of intravenous contrast material‐mediated acute kidney injury: a propensity score‐matched study stratified by baseline‐estimated glomerular filtration rate. Radiology. 2014;271(1):65–73.
McDonald R, McDonald J, Carter R, et al. Intravenous contrast material exposure is not an independent risk factor for dialysis or mortality. Radiology. 2014;273(3):714–725.
Background
It is a common practice to withhold intravenous contrast material from computed tomography (CT) scans in patients with even moderately poor renal function out of concern for causing contrast‐induced nephropathy (CIN). Our understanding of CIN is based largely on observational studies and outcomes of cardiac catheterizations, where larger amounts of contrast are given intra‐arterially into an atherosclerotic aorta.[4] The exact mechanism of injury is not clear, possibly direct tubule toxicity or renal vasoconstriction.[5] CIN is defined as a rise in serum creatinine of >0.5 mg/dL or >25% above baseline 24 to 48 hours after receiving intravenous contrast. Although it is usually self‐limited, there is concern that patients who develop CIN have an increased risk of dialysis and death.[6] In the last few years, radiologists have started to question whether the risk of CIN is overstated. A recent meta‐analysis of 13 studies demonstrated a similar likelihood of acute kidney injury in patients regardless of whether they received intravenous contrast.[7] If the true incidence of CIN after venous contrast is actually lower, this raises the question of whether we are unnecessarily withholding contrast from CTs and thereby reducing their diagnostic accuracy. Two 2014 observational studies provide additional evidence that the concern for CIN may be overstated.
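The CIN definition used in this literature (a creatinine rise of >0.5 mg/dL or >25% over baseline at 24 to 48 hours) reduces to a simple check. A minimal sketch; the function name is illustrative, not drawn from the studies:

```python
def meets_cin_definition(baseline_cr: float, followup_cr: float) -> bool:
    """CIN: serum creatinine rise >0.5 mg/dL or >25% above baseline,
    measured 24 to 48 hours after intravenous contrast (values in mg/dL)."""
    rise = followup_cr - baseline_cr
    return rise > 0.5 or rise > 0.25 * baseline_cr

print(meets_cin_definition(1.0, 1.3))  # True: a 0.3 mg/dL rise is 30% above baseline
print(meets_cin_definition(2.0, 2.4))  # False: a 0.4 mg/dL rise is only 20% above baseline
```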
Findings
The 2 Mayo Clinic studies used the same database. They looked at all patients who underwent a contrast‐enhanced or unenhanced thoracic, abdominal, or pelvic CT between January 2000 and December 2010 at the Mayo Clinic. After limiting the data to patients with pre‐ and post‐CT creatinine measurements and excluding anyone on dialysis, with preexisting acute kidney injury, or who had received additional contrast within 14 days, they ended up with 41,229 patients, mostly inpatients. All of the patients were assigned propensity scores based on risk factors for the development of CIN and the likelihood that they would receive contrast. The patients were then subdivided into 4 renal function subgroups based on estimated glomerular filtration rate (eGFR). The patients who received contrast were matched based on their propensity scores to those who did not receive contrast within their eGFR subgroups. Unmatched patients were eliminated, leaving a cohort of 12,508 matched patients. The outcome of the first article was acute kidney injury (AKI), defined as a rise in creatinine >0.5 mg/dL at 24 to 48 hours. Though AKI rose with worsening eGFR subgroups (eGFR >90 [1.2%] vs eGFR <30 [14%]), the rates of AKI were the same regardless of contrast exposure. There was no statistical difference in any of the eGFR subgroups. The second study looked at important clinical outcomes: death and the need for dialysis. There was no statistical difference in emergent dialysis (odds ratio [OR]: 0.96, P=0.89) or 30‐day mortality (HR: 0.97; P=0.45) regardless of whether the patients received contrast.
Cautions
In propensity matching, unmeasured confounders can bias the results. However, the issue of whether venous contrast causes CIN will unlikely be settled in a randomized controlled trial. For patients with severe renal failure (eGFR < 30), there were far fewer patients in this subgroup, making it harder to draw conclusions. The amount of venous contrast given was not provided. Finally, this study evaluated intravenous contrast for CTs, not intra‐arterial contrast.
Implications
These 2 studies raise doubt as to whether the incidence of AKI after contrast‐enhanced CT can be attributed to the contrast itself. What exactly causes the rise in creatinine is probably multifactorial including lab variation, hydration, blood pressure changes, nephrotoxic drugs, and comorbid disease. In trying to decide whether to obtain a contrast‐enhanced CT for patients with chronic kidney dysfunction, these studies provide more evidence to consider in the decision‐making process. A conversation with the radiologist about the benefits gained from using contrast in an individual patient may be of value.
PREVENTION AND PROGNOSIS OF INPATIENT DELIRIUM
Hatta K, Kishi Y, Wada K, et al. Preventive effects of ramelteon on delirium: a randomized placebo‐controlled trial. JAMA Psychiatry. 2014;71(4):397–403.
A new melatonin receptor agonist dramatically reduced incident delirium.
Background
Numerous medications and therapeutic approaches have been studied to prevent incident delirium in hospitalized medical and surgical patients with varying success. Many of the tested medications also have the potential for significant undesirable side effects. An earlier small trial of melatonin appeared to have impressive efficacy for this purpose and be well tolerated, but the substance is not regulated by the FDA.[8] Ramelteon, a melatonin receptor agonist, is approved by the FDA for insomnia, and authors hypothesized that it, too, may be effective in delirium prevention.
Findings
This study was a multicenter, single‐blinded, randomized controlled trial of the melatonin‐agonist ramelteon versus placebo in elderly patients admitted to the hospital ward or ICU with serious medical conditions. Researchers excluded intubated patients or those with Lewy body dementia, psychiatric disorders, and severe liver disease. Patients received either ramelteon or placebo nightly for up to a week, and the primary end point was incident delirium as determined by a blinded observer using a validated assessment tool. Sixty‐seven patients were enrolled. The baseline characteristics in the arms of the trial were similar. In the placebo arm, 11 of 34 patients (32%) developed delirium during the 7‐day observation period. In the ramelteon arm, 1 of 33 (3%) developed delirium (P=0.003). The rate of drug discontinuation was the same in each arm.
Cautions
This study is small, and the single‐blinded design (the physicians and patients knew which group they were in but the observers did not) limits the validity of these results, mandating a larger double‐blinded trial.
Implications
Ramelteon showed a dramatic impact on preventing incident delirium in elderly hospitalized patients with serious medical conditions admitted to the ward or intensive care unit (ICU) (nonintubated) in this small study. If larger trials concur with the impact of this well‐tolerated and inexpensive medication, the potential for delirium incidence reduction could have a dramatic impact on how care for delirium‐vulnerable patients is conducted as well as the systems‐level costs associated with delirium care. Further studies of this class of medications are needed to more definitively establish its value in delirium prevention.
THE CONFUSION ASSESSMENT METHOD SEVERITY SCORE CAN QUANTIFY PROGNOSIS FOR DELIRIOUS MEDICAL INPATIENTS
Inouye SK, Kosar CM, Tommet D, et al. The CAM‐S: development and validation of a new scoring system for delirium in 2 cohorts. Ann Intern Med. 2014;160:526–533.
Background
Delirium is common in hospitalized elderly patients, and numerous studies show that there are both short‐ and long‐term implications of developing delirium. Using well studied and validated tools has made identifying delirium fairly straightforward, yet its treatment remains difficult. Additionally, differentiating which patients will have a simpler clinical course from those at risk for a more morbid one has proved challenging. Using the Confusion Assessment Method (CAM), both in its short (4‐item) and long (10‐item) forms, as the basis for a prognostication tool, would allow for future research on treatment to have a scale against which to measure impact, and would allow clinicians to anticipate which patients were more likely to have difficult clinical courses.
Findings
The CAM Severity (CAM‐S) score was derived in 1219 subjects participating in 2 ongoing studies: 1 included high‐risk medical inpatients 70 years old or older, and the other included similarly aged patients undergoing major orthopedic, general, or vascular surgeries. Outcomes data were not available for the surgical patients. The CAM items were rated as either present/absent or absent/mild/severe, depending on the item, with an associated score attached to each item such that the 4‐item CAM had a score of 0 to 7 and the 10‐item CAM 0 to 19 (Table 2). Clinical outcomes from the medical patient cohort showed a dose response with increasing CAM‐S scores with respect to length of stay, adjusted cost, combined 90‐day end points of skilled nursing facility placement or death, and 90‐day mortality. Specifically, for patients with a CAM‐S (short form) score of 5 to 7, the 90‐day rate of death or nursing home residence was 62%, whereas the 90‐day postdischarge mortality rate was 36%.
| CAM feature | Rating | CAM‐S score |
|---|---|---|
| Acute onset with fluctuating course | Absent | 0 |
| | Present | 1 |
| Inattention or distractibility | Absent | 0 |
| | Mild | 1 |
| | Severe | 2 |
| Disorganized thinking, illogical or unclear ideas | Absent | 0 |
| | Mild | 1 |
| | Severe | 2 |
| Alteration of consciousness | Absent | 0 |
| | Mild | 1 |
| | Severe | 2 |
| Total | | 0–7 |
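Scoring with the short form is a simple sum over the 4 items. A minimal sketch using the standard published CAM‐S short-form weights (the dictionary keys and function name are illustrative, not from the article):

```python
# Weights for the 4-item (short form) CAM-S: the first feature is rated
# present/absent; the other three are rated absent/mild/severe. Total is 0-7.
CAM_S_SHORT = {
    "acute_onset_fluctuating": {"absent": 0, "present": 1},
    "inattention": {"absent": 0, "mild": 1, "severe": 2},
    "disorganized_thinking": {"absent": 0, "mild": 1, "severe": 2},
    "altered_consciousness": {"absent": 0, "mild": 1, "severe": 2},
}

def cam_s_short(ratings: dict) -> int:
    """Sum the per-item scores for one patient's ratings."""
    return sum(CAM_S_SHORT[item][level] for item, level in ratings.items())

score = cam_s_short({
    "acute_onset_fluctuating": "present",
    "inattention": "severe",
    "disorganized_thinking": "mild",
    "altered_consciousness": "severe",
})
print(score)  # 6 -- within the 5-to-7 band linked to the worst 90-day outcomes
```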
Cautions
The CAM‐S, like the CAM, may work less well in patients with hypoactive delirium. This scale has been applied in a surgical cohort, but study outcomes were not presented in this article. This absence limits our ability to apply these results to a surgical population presently.
Implications
This study demonstrates that in medical inpatients, the CAM‐S is effective for prognostication. Moreover, the study points out that high‐scoring patients on the CAM‐S have quite poor prognoses, with more than one‐third dying by 3 months. This finding suggests that an important use of the CAM‐S is to identify patients about whom goals of care discussions should be held and end‐of‐life planning initiated if not previously done.
GET EXCITED ABOUT HEPATIC ENCEPHALOPATHY AGAIN: A NEW POSSIBLE TREATMENT
Rahimi R, Singal A, Cuthbert J, et al. Lactulose vs polyethylene glycol 3350‐electrolyte solution for treatment of overt hepatic encephalopathy: the HELP randomized clinical trial. JAMA Intern Med. 2014;174(11):1727–1733.
Background
Lactulose has been the principal treatment for acute hepatic encephalopathy (HE) since 1966.[9] It theoretically works by lowering the pH of the colon and trapping ammonia as ammonium, which is then expelled. Alternatively, it may simply decrease transit time through the colon. In fact, earlier treatments for HE were cathartics such as magnesium salts. Unfortunately, 20% to 30% of patients respond poorly to lactulose, and patients dislike taking it. This new study tests whether a modern‐day cathartic, polyethylene glycol, works as well as lactulose.
Findings
In this unblinded, randomized controlled trial, patients presenting to the emergency department with acute HE were assigned to either lactulose 20 to 30 g for a minimum of 3 doses over 24 hours or 4 L of polyethylene glycol (PEG) over 4 hours. The 2 groups were similar in severity and etiology of liver disease. Patients were allowed to have received 1 dose of lactulose in the emergency department prior to study enrollment. They were excluded if taking rifaximin. The primary outcome was improvement in the hepatic encephalopathy scoring algorithm (HESA) by 1 grade at 24 hours.[10] The algorithm scores HE from 0 (no clinical findings of HE) to 5 (comatose). Initial mean HESA scores in the 2 groups were identical (2.3).
In the lactulose group, 13/25 (52%) improved by at least 1 HESA score at 24 hours. Two patients (8%) completely cleared with a HESA score of 0. In comparison, 21/23 (91%) in the PEG group improved at 24 hours, and 10/23 (43%) had cleared with a HESA score of 0 (P<0.01). The median time to HE resolution was 2 days in the lactulose group compared with 1 day in the PEG group (P=0.01). There were no differences in serious adverse events. The majority (76%) of the PEG group received the full 4 L of PEG.
Cautions
The main limitations of the trial were its small sample size, single‐center design, and lack of blinding. Additionally, 80% of the PEG group received 1 dose of lactulose prior to enrollment. Statistically more patients in the PEG group developed hypokalemia, which can worsen HE; therefore, if PEG is used for acute HE, potassium will need to be monitored.
Implications
The results are intriguing and may represent a new possible treatment for acute HE once larger studies are done. Interestingly, the ammonia level dropped further in the lactulose group than the PEG group, yet there was more cognitive improvement in the PEG group. This raises questions about the role of ammonia and catharsis in HE. Although lactulose and rifaximin continue to be the standard of care, cathartics may be returning as a viable alternative.
SHOULD β‐BLOCKERS BE STOPPED IN PATIENTS WITH CIRRHOSIS WHEN SPONTANEOUS BACTERIAL PERITONITIS OCCURS?
Mandorfer M, Bota S, Schwabl P, et al. Nonselective beta blockers increase risk for hepatorenal syndrome and death in patients with cirrhosis and spontaneous bacterial peritonitis. Gastroenterology. 2014;146:1680–1690.
Background
Nonselective β‐blockers (NSBBs) are considered the aspirin of hepatologists, as they are used for primary and secondary prevention of variceal bleeds in patients with cirrhosis.[11] Since the 1980s, their benefit in reducing bleeding risk has been known, and more recently there has been evidence that they may reduce the risk of developing ascites in patients with compensated cirrhosis. Yet, there has been some contradictory evidence suggesting reduced survival in patients with decompensated cirrhosis and infections on NSBBs. This has led to the window hypothesis of NSBBs in cirrhosis, where NSBBs are beneficial only during a certain window period during the progression of cirrhosis.[12] Early on in cirrhosis, before the development of varices or ascites, NSBBs have no benefit. As cirrhosis progresses and portal hypertension develops, NSBBs play a major role in reducing bleeding from varices. However, in advanced cirrhosis, NSBBs may become harmful. In theory, they block the body's attempt to increase cardiac output during situations of increased physiologic stress, resulting in decreased mean arterial pressure and perfusion. This, in turn, causes end‐organ damage and increased risk of death. When exactly this NSBB window closes is unclear. A 2014 study suggests the window should close when patients develop spontaneous bacterial peritonitis (SBP).
Findings
This retrospective study followed 607 consecutive patients seen at a liver transplant center in Vienna, Austria, from 2006 to 2011. All of the patients were followed from the time of their first paracentesis. They were excluded if SBP was diagnosed during the first paracentesis. Patients were grouped based on whether they took an NSBB. As expected, more patients on an NSBB had varices (90% vs 62%; P<0.001) and a lower mean heart rate (77.5 vs 83.9 beats/minute; P<0.001). However, the 2 groups were similar in mean arterial pressure, systolic blood pressure, Model for End‐Stage Liver Disease score (17.5), Child‐Pugh score (CPS) (50% were C), and in the etiology of cirrhosis (55% were from alcoholic liver disease). They followed the patients for development of SBP. The primary outcome was transplant‐free survival. For the patients who never developed SBP, there was a 25% reduction in the risk of death for those on an NSBB, adjusted for varices and CPS stage (HR=0.75, P=0.027). However, for the 182 patients who developed SBP, those on an NSBB had a 58% increased risk of death, again adjusted for varices and CPS stage (HR=1.58, P=0.014). Among the patients who developed SBP, there was a higher risk of hepatorenal syndrome (HRS) within 90 days for those on an NSBB (24% vs 11%, P=0.027). Although the mean arterial pressures (MAP) had been similar in the 2 groups before SBP, after the development of SBP, those on an NSBB had a significantly lower MAP (77.2 vs 82.6 mm Hg, P=0.005).
Cautions
This is a retrospective study, and although the authors controlled for varices and CPS, it is still possible the 2 groups were not similar. Whether patients were actually taking the NSBB is unknown, and doses of the NSBB were variable.
Implications
This study provides more evidence for the NSBB window hypothesis in the treatment of patients with cirrhosis. It suggests that the window on NSBB closes when patients develop SBP, as NSBBs appear to increase mortality and the risk of HRS. Thus, NSBB therapy should probably be discontinued in cirrhotic patients developing SBP. The question is for how long? The editorial accompanying the article says permanently.[13]
VTE PROPHYLAXIS FOR MEDICAL INPATIENTS: IS IT A THING OF THE PAST?
Flanders SA, Greene T, Grant P, et al. Hospital performance for pharmacologic venous thromboembolism prophylaxis and rate of venous thromboembolism. A cohort study. JAMA Intern Med. 2014;174(10):1577–1584.
Background
Based on early research studies, many quality and regulatory organizations have stressed the importance of assessing hospitalized patients' venous thromboembolism (VTE) risk and prophylaxing those at increased risk either pharmacologically or mechanically. In 2011, a meta‐analysis of 40 studies of medical and stroke patients, including approximately 52,000 patients, failed to demonstrate a mortality benefit and showed that for every 3 pulmonary embolisms (PEs) prevented per 1000 patients, prophylaxis caused 4 major bleeding episodes.[14] A second 2011 study, a multicenter, randomized controlled trial in medically complex patients deemed high risk for VTE, also failed to demonstrate a mortality benefit.[15] Despite these and other trials showing questionable benefit, guidelines continue to recommend that high‐risk medical patients receive pharmacologic prophylaxis against VTE.
Findings
This retrospective cohort study evaluated 20,794 medical patients (non‐ICU) across 35 hospitals, excluding those with a Caprini score of <2 (ie, low risk for VTE). The authors divided the hospitals into tertiles based on adherence to VTE prophylaxis guidelines. Patients were followed to 90 days after hospitalization with telephone calls (reaching 56%) and chart reviews (100% reviewed) to identify clinically evident VTE events, excluding those that occurred within the first 3 days of the index hospitalization. The study identified no statistically significant differences among the tertiles in VTE rates, either in the hospital or at 90 days, though the overall VTE event rate was low. Interestingly, 85% of events took place postdischarge. Subgroup analyses also failed to identify a population of medical patients who benefited from prophylaxis.
Cautions
Debate about whether the Caprini risk score is the best available VTE risk scoring system exists. This study also excluded surgical and ICU patients.
Implications
This trial adds to the mounting literature suggesting that current guidelines‐based pharmacologic VTE prophylaxis for medical patients may offer no clear benefit in terms of incident VTE events or mortality. Although it is not yet time to abandon VTE prophylaxis completely, this study does raise the important question of whether it is time to revisit the quality guidelines and regulatory standards around VTE prophylaxis in medical inpatients. It also highlights the difficulty in assessing medical patients for their VTE risk. Though this study is provocative and important for its real‐world setting, further studies are required.
OUT WITH THE OLD AND IN WITH THE NEW? SHOULD DIRECT ORAL ANTICOAGULANTS BE OUR FIRST CHOICE FOR CARING FOR PATIENTS WITH VTE AND ATRIAL FIBRILLATION?
van Es N, Coppens M, Schulman S, et al. Direct oral anticoagulants compared with vitamin K antagonists for acute venous thromboembolism: evidence from phase 3 trials. Blood. 2014;124(12):1968–1975.
For patients with acute VTE, direct oral anticoagulants work as well and are safer.
Background
There have been 6 large published randomized controlled trials of direct oral anticoagulants (DOACs) versus vitamin K antagonists (VKAs) in patients with acute VTE. Study sizes range from approximately 2500 to over 8000 subjects. All showed no significant difference between the arms with respect to efficacy (VTE or VTE‐related death) but had variable results with respect to major bleeding risk, a major concern given the nonreversibility of this group of medications. Additionally, subgroup analysis within these studies was challenging given sample size issues.
Findings
These 6 studies were combined in a meta‐analysis to address the DOACs' overall efficacy and safety profile and to examine prespecified subgroups. The meta‐analysis included data from over 27,000 patients, evenly divided between DOACs (edoxaban, apixaban, rivaroxaban, and dabigatran) and VKAs, with the time in the therapeutic range (TTR) in the VKA arm being 64%. Overall, the primary efficacy endpoint (VTE and VTE‐related death) was similar (DOACs relative risk [RR]=0.90; 95% confidence interval [CI]: 0.77‐1.06), but major bleeding (DOACs RR=0.61; 95% CI: 0.45‐0.83; NNT=150) and combined fatal and intracranial bleeding (DOACs RR=0.37; 95% CI: 0.27‐0.68; NNT=314) favored the DOACs. In subgroup analysis, there was no efficacy difference between the therapeutic groups in the subsets specifically with DVT or with PE, or in patients weighing >100 kg, though safety data in these subsets were not evaluable. Patients with creatinine clearances of 30 to 49 mL/min demonstrated similar efficacy in both treatment arms, and the safety analysis in this subset with moderate renal impairment favored the DOAC arm. Cancer patients achieved better efficacy with similar safety with the DOACs, whereas elderly patients achieved both better safety and efficacy with DOACs.
Cautions
As yet, there are inadequate data on patients with more advanced renal failure (creatinine clearance <30 mL/min) to advise using DOACs in that subset. Also, because there were no data in cancer patients with VTE comparing DOACs against low‐molecular‐weight heparins (the standard of care rather than warfarin since the CLOT [Comparison of Low‐molecular‐weight heparin versus Oral anticoagulant Therapy] trial[16]), the current meta‐analysis does not yet answer whether DOACs should be used in this population despite the efficacy benefit noted in the subgroup analysis.
Implications
This large meta‐analysis strongly suggests we can achieve comparable treatment efficacy from the DOACs as with VKAs, with better safety profiles in patients with acute VTE. In the subset of patients with moderate renal impairment (creatinine clearance 30‐49 mL/min), it appears safe and effective to choose DOACs.
IN PATIENTS WITH ATRIAL FIBRILLATION, DOACs APPEAR MORE EFFECTIVE THAN VKAs WITH COMPARABLE OR BETTER SAFETY PROFILES
Ruff CT, Giugliano RP, Braunwald E, et al. Comparison of the efficacy and safety of new oral anticoagulants with warfarin in patients with atrial fibrillation: a meta‐analysis of randomised trials. Lancet. 2014;383(9921):955‐962.
Background
Previously published meta‐analyses of the original phase 3 randomized trials examined the DOACs' safety and efficacy relative to VKAs in atrial fibrillation (AF). In 2013, ENGAGE AF‐TIMI 48 (Effective Anticoagulation with Factor Xa Next Generation in Atrial Fibrillation‐Thrombolysis in Myocardial Infarction 48), a trial of edoxaban, was published; its inclusion offers a better opportunity to glean important subgroup information.[17]
Findings
This meta‐analysis included data on 71,683 patients, 42,411 in the DOAC arm and 29,272 in the warfarin arm, as 2 of the trials were 3‐arm studies comparing warfarin to a high dose and a low dose of the DOAC. Meta‐analyses of the 4 trials were broken down into a high‐dose subset (the 2 high‐dose arms plus the standard doses used in the other 2 trials) and a low‐dose subset (the 2 low‐dose arms plus the standard doses used in the other 2 trials). With respect to the efficacy endpoint (incident stroke or systemic embolization), the high‐dose subset analyses of the DOACs yielded a 19% reduction (P<0.0001; NNT=142) relative to the VKAs. The safety endpoint of major bleeding in this analysis identified a 14% reduction in the DOAC group that was nonsignificant (P=0.06). Within the high‐dose subset, analyses favored DOACs with respect to hemorrhagic stroke (51% reduction; P<0.0001; NNT=220), intracranial hemorrhage (52% reduction; P<0.0001; NNT=132), and overall mortality (10% reduction; P=0.0003; NNT=129), whereas they increased the risk of gastrointestinal bleeding (25% increase; P=0.043; NNH=185). There was no significant difference between DOACs and warfarin with respect to ischemic stroke. The low‐dose subset had similar overall results, with even fewer hemorrhagic strokes balancing a higher incidence of ischemic strokes in the DOAC arm than with warfarin. Other important subgroup analyses suggest the safety and efficacy impact of DOACs holds for both VKA‐naive and VKA‐experienced patients, though it reached statistical significance only for VKA‐naive patients. Additionally, the anticoagulation centers included in the study that had a TTR <66% seemed to gain a safety advantage from the DOACs, whereas both TTR groups (<66% and ≥66%) appeared to achieve an efficacy benefit from DOACs.
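As a back‐of‐the‐envelope consistency check (our own arithmetic, not from the paper), a reported relative risk reduction (RRR) and NNT together imply a control‐arm event rate, because NNT = 1/ARR and ARR = CER × RRR:

```python
def control_event_rate(nnt: float, rrr: float) -> float:
    """Back out the implied control-arm event rate (CER) from a reported
    number needed to treat and relative risk reduction, using
    NNT = 1 / (CER * RRR)."""
    return 1 / (nnt * rrr)

# The 19% stroke/systemic-embolization reduction with NNT=142 implies a
# VKA-arm event rate of roughly 1/(142*0.19) over the trial periods.
print(round(control_event_rate(142, 0.19), 3))  # prints 0.037
```

This kind of check is useful when reading meta‐analyses that report relative reductions and NNTs without restating the absolute baseline risks.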
Cautions
There are not sufficient data to suggest routinely switching patients tolerating and well managed on VKAs to DOACs for AF.
Implications
DOACs reduce stroke and systemic emboli in patients with AF without increasing intracranial bleeding or hemorrhagic stroke, though at the cost of increased gastrointestinal bleeding with the high‐dose regimens. Patients on the low‐dose regimens have an even lower hemorrhagic stroke risk, but that benefit is negated by a higher risk of ischemic stroke than with VKAs. Centers with lower TTRs (and perhaps, by extrapolation, patients with more difficulty staying in the therapeutic range) may gain more benefit by switching. DOAC therapy should be strongly considered as first line for patients newly starting treatment for AF.
IN ELDERLY PATIENTS, THE DOACs APPEAR TO OFFER IMPROVED EFFICACY WITHOUT SACRIFICING SAFETY
Sardar P, Chatterjee S, Chaudhari S, Lip GYH. New oral anticoagulants in elderly adults: evidence from meta‐analysis of randomized trials. J Am Geriatr Soc. 2014;62(5):857‐864.
Background
The prevalence of AF rises with age, as does the prevalence of malignancy, limited mobility, and other comorbidities that increase the risk for VTEs. These factors may also increase the risk of bleeding with conventional therapy with heparins and VKAs. As such, understanding the implications of using DOACs in the elderly population is important.
Findings
This meta‐analysis included the elderly (age ≥75 years) subset of patients from existing AF treatment and VTE treatment and prophylaxis randomized trials comparing DOACs with VKAs, low‐molecular‐weight heparin (LMWH), aspirin, or placebo. The primary safety outcome was major bleeding. For AF trials, the efficacy endpoint was stroke or systemic embolization, whereas in VTE trials it was VTE or VTE‐related death. The authors were able to extract data on 25,031 patients across 10 trials that evaluated rivaroxaban, apixaban, and dabigatran (not edoxaban), with follow‐up data ranging from 35 days to 2 years. For safety outcomes, the 2 arms showed no statistical difference (DOAC: 6.4%; conventional therapy: 6.3%; OR: 1.02; 95% CI: 0.73‐1.43). For efficacy endpoints in VTE studies, DOACs were more effective (3.7% vs 7.0%; OR: 0.45; 95% CI: 0.27‐0.77; NNT=30). For AF, the efficacy analysis also favored DOACs (3.3% vs 4.7%; OR: 0.65; 95% CI: 0.48‐0.87; NNT=71). When analyzed by individual DOAC, rivaroxaban and apixaban both appeared to outperform the VKA/LMWH arm for both VTE and AF treatment, whereas data on dabigatran were available only for AF, also showing an efficacy benefit. Individual DOAC analyses for safety endpoints showed all 3 to be similar to VKA/LMWH.
Cautions
The authors note, however, that coexisting low body weight and renal insufficiency may influence dosing choices in this population. There are specific dosage recommendations in the elderly for some DOACs.
Implications
The use of DOACs in patients aged 75 years and older appears to confer a substantial efficacy advantage in the treatment of VTE and AF. The safety data presented in this meta‐analysis suggest that this class is comparable to VKA/LMWH medications.
CHANGING INPATIENT MANAGEMENT OF SKIN INFECTIONS
Boucher H, Wilcox M, Talbot G, et al. Once‐weekly dalbavancin versus daily conventional therapy for skin infection. N Engl J Med. 2014;370:2169‐2179.
Corey G, Kabler H, Mehra P, et al. Single‐dose oritavancin in the treatment of acute bacterial skin infections. N Engl J Med. 2014;370:2180‐2190.
Background
There are over 870,000 hospital admissions yearly for skin infection, making it one of the most common reasons for hospitalization in the United States.[18] Management often requires lengthy treatment with intravenous antibiotics, especially with the emergence of methicillin‐resistant Staphylococcus aureus. Results from 2 large randomized, double‐blinded, multicenter clinical trials of new once‐weekly intravenous antibiotics were published. Dalbavancin and oritavancin are both lipoglycopeptides in the same family as vancomycin. What is unique is that their serum drug concentrations exceed the minimum inhibitory concentrations for over a week. Both drugs were compared in noninferiority trials to vancomycin. The studies had similar outcomes. The dalbavancin results are presented below.
Findings
Researchers randomized 1312 patients with significant cellulitis, large abscess, or wound infection. Patients also had fever, leukocytosis, or bandemia, and the infection had to be deemed severe enough to require a minimum of 3 days of intravenous antibiotics. The patients could not have received any prior antibiotics. Over 80% of the patients had fevers, and more than half met the criteria for systemic inflammatory response syndrome. Patients were randomized to either dalbavancin (on day 1 and day 8) or vancomycin every 12 hours (1 g or 15 mg/kg), with both groups receiving placebo dosing of the other drug. The blinded physicians could decide to switch to an oral agent (placebo, or linezolid in the vancomycin group) anytime after day 3, and could stop antibiotics anytime after day 10. Otherwise, all patients received 14 days of antibiotics.
The FDA‐approved outcome was cessation of spread of erythema at 48 to 72 hours and no fever at 3 independent readings. Results were similar in the dalbavancin group compared to the vancomycin‐linezolid group (79.7% vs 79.8%). Dalbavancin was deemed noninferior to vancomycin. The blinded investigators' assessment of treatment success at 2 weeks was also similar (96% vs 96.7%, respectively). More treatment‐related adverse events occurred in the vancomycin‐linezolid group (183 vs 139; P=0.02), and more deaths occurred in the vancomycin group (7 vs 1; P=0.03).
Cautions
These antibiotics have only been shown to be effective for complicated, acute bacterial skin infections. Their performance for other gram‐positive infections is unknown. In the future, it is possible that patients with severe skin infections will receive a dose of these antibiotics on hospital day 1 and be sent home with close follow‐up. However, that study has not yet been done to confirm efficacy and safety. Though the drugs appear safe, there needs to be more clinical use before they become standard of care, especially because of the long half‐life. Finally, these drugs are very expensive and provide broad‐spectrum gram‐positive coverage. They are not meant for a simple cellulitis.
Implications
These 2 new once‐weekly antibiotics, dalbavancin and oritavancin, are noninferior to vancomycin for acute bacterial skin infections. They provide alternative treatment choices for managing patients with significant infections requiring hospitalization. In the future, they may change the need for hospitalization of these patients or significantly reduce their length of stay. Though the drugs are expensive, a significant reduction in hospitalization may offset costs.
SHOULD THEY STAY OR SHOULD THEY GO? FAMILY PRESENCE DURING CPR MAY IMPROVE THE GRIEF PROCESS DURABLY
Jabre P, Tazarourte K, Azoulay E, et al. Offering the opportunity for family to be present during cardiopulmonary resuscitation: 1‐year assessment. Intensive Care Med. 2014;40:981‐987.
Background
In 2013, a French study randomized adult family members of a patient undergoing cardiopulmonary resuscitation (CPR) occurring at home to either be invited to stay and watch the resuscitation or to have no specific invitation offered.[19] At 90 days, this study revealed that those who were invited to watch (and 79% did) had fewer symptoms of post‐traumatic stress disorder (PTSD) (27% vs 37%) and anxiety (15% vs 23%), though not depression, than did the group not offered the opportunity to watch (though 43% watched anyway). There were 570 subjects (family members) in the trial, of whom a greater number in the control arm declined to participate in a 90‐day follow‐up due to emotional distress. Notably, only 4% of the patients in this study undergoing CPR survived to day 28. Whether the apparent positive psychological impact of the offer to watch CPR for families was durable remained in question.
Findings
The study group followed the families up to 1 year. At that time, dropout rates were similar (with the assumption, as in the prior study, that those who dropped out of either arm had PTSD symptoms). At follow‐up, subjects were again assessed for PTSD, anxiety, and depression symptoms as well as for meeting criteria for having had a major depressive episode or complicated grief. Four hundred eight of the original 570 subjects were able to undergo reevaluation. The 1‐year results showed the group offered the chance to watch CPR had fewer PTSD symptoms (20% vs 32%) and depression symptoms (10% vs 16%), as well as fewer major depressive episodes (23% vs 31%) and less complicated grief (21% vs 36%) but without a durable impact on anxiety symptoms.
Cautions
The resuscitation efforts in question here occurred out of hospital (in the home). Part of the protocol for those family members observing CPR was that a clinician was assigned to stay with them and explain the resuscitation process as it occurred.
Implications
It is postulated that having the chance to observe CPR, if desired, may help the grieving process. This study clearly raises a question about the wisdom of routinely escorting patients' families out of the room during resuscitative efforts. It seems likely that the durable and important psychological effects observed in this study for family members would similarly persist in emergency department and inpatient settings, where staff can be with patients' families to talk them through the events they are witnessing. It is time to ask families if they prefer to stay and watch CPR and not automatically move them to a waiting room.
Disclosure: Nothing to report.
Keeping up with the medical literature in a field as broad as hospital medicine is a daunting task. In 2014 alone, there were over 9200 articles published in top‐tier internal medicine journals.[1] The authors have selected articles from among these top journals using a nonsystematic process that involved reviewing articles brought to their attention via colleagues, literature searches, and online services. The focus was to identify articles that would be of importance to the field of hospital medicine for their potential to be practice changing, provocative, or iconoclastic. After culling through hundreds of titles and abstracts, 46 articles were reviewed by both authors in full text, and ultimately 14 were selected for presentation here. Table 1 summarizes the key points.
1. Now that neprilysin inhibitors are approved by the FDA, hospitalists will see them prescribed as an alternative to ACE inhibitors given their impressive benefits in cardiovascular mortality and heart failure hospitalizations.
2. Current evidence suggests that intravenous contrast given with CT scans may not significantly alter the incidence of acute kidney injury, its associated mortality, or the need for hemodialysis.
3. The CAM‐S score is an important tool for prognostication in delirious patients. Those patients with high CAM‐S scores should be considered for goals of care conversations.
4. The melatonin agonist ramelteon shows promise for lowering incident delirium among elderly medical patients, though larger trials are still needed.
5. Polyethylene glycol may be an excellent alternative to lactulose for patients with acute hepatic encephalopathy once larger studies are done, as it is well tolerated and shows faster resolution of symptoms.
6. Nonselective β‐blockers should no longer be offered to cirrhotic patients after they develop spontaneous bacterial peritonitis, as they are associated with increased mortality and acute kidney injury.
7. Current guidelines regarding prophylaxis against VTE in medical inpatients likely result in nonbeneficial use of medications for this purpose. It remains unclear which high‐risk populations do benefit from pharmacologic prophylaxis.
8. DOACs are as effective and are safer than conventional therapy for treatment of VTE, though they are not recommended in patients with GFR <30 mL/min.
9. DOACs are more effective and are safer (though they may increase risk of gastrointestinal bleeding) than conventional therapy in patients with AF.
10. DOACs are as safe and more effective than conventional therapy in elderly patients with VTE or AF, being mindful of dosing recommendations in this population.
11. Two new once‐weekly antibiotics, dalbavancin and oritavancin, approved for skin and soft tissue infections, appear noninferior to vancomycin and have the potential to shorten hospitalizations and, in doing so, may decrease cost.
12. Offering family members of a patient undergoing CPR the opportunity to observe has durable impact on meaningful short‐ and long‐term psychological outcomes. Clinicians should strongly consider making this offer.
AN APPROACHING PARADIGM SHIFT IN THE TREATMENT FOR HEART FAILURE
McMurray J, Packer M, Desai A, et al. Angiotensin‐neprilysin inhibition versus enalapril in heart failure. N Engl J Med. 2014;371:993‐1004.
Background
The Food and Drug Administration (FDA) last approved a drug for heart failure (HF) 10 years ago.[2] The new PARADIGM (Prospective Comparison of ARNI With ACEI to Determine Impact on Global Mortality and Morbidity in Heart Failure) heart failure study comparing a novel combination drug of a neprilysin inhibitor and angiotensin receptor blocker (ARB) to an angiotensin‐converting enzyme (ACE) inhibitor has cardiologists considering a possible change in the HF treatment algorithm. Neprilysin is a naturally occurring enzyme that breaks down the protective vasoactive peptides (brain natriuretic peptide, atrial natriuretic peptide, and bradykinin) made by the heart and the body in HF. These vasoactive peptides function to increase vasodilation and block sodium and water reabsorption. This novel neprilysin inhibitor extends the life of these vasoactive peptides, thus enhancing their effect. By inhibiting both neprilysin and the renin‐angiotensin system, there should be additional improvement in HF management. The neprilysin inhibitor was combined with an ARB instead of an ACE inhibitor because of significant angioedema seen in earlier‐phase trials when combined with an ACE inhibitor. This is believed to be related to increases in bradykinin due to both agents.
Findings
In this multicenter, blinded, randomized trial, over 10,000 patients with known HF (ejection fraction <35%, New York Heart Association class II or higher) went through 2 run‐in periods to ensure tolerance of both enalapril and the study drug, a combination of a neprilysin inhibitor and valsartan (neprilysin‐I/ARB). Eventually 8442 patients underwent randomization to either enalapril (10 mg twice a day) or neprilysin‐I/ARB (200 mg twice a day). The primary outcome was a combination of cardiovascular mortality and heart failure hospitalizations. The trial was stopped early at 27 months because of overwhelming benefit with neprilysin‐I/ARB (21.8% vs 26.5%; P<0.001). There was a 20% reduction specifically in cardiovascular mortality (13.3% vs 16.5%; hazard ratio [HR]: 0.80; P<0.001). The number needed to treat (NNT) was 32. There was also a 21% reduction in the risk of hospitalization (P<0.001). More patients with neprilysin‐I/ARB had symptomatic hypotension (14% vs 9.2%; P<0.001), but patients on the ACE inhibitor experienced more cough, hyperkalemia, and increases in their serum creatinine.
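The NNT figures quoted in trials like this follow directly from the absolute event rates, since NNT = 1/absolute risk reduction. A minimal sketch (the helper function is ours, purely illustrative):

```python
def nnt(control_rate: float, treatment_rate: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction (ARR)."""
    arr = control_rate - treatment_rate
    if arr <= 0:
        raise ValueError("treatment shows no absolute risk reduction")
    return 1 / arr

# Cardiovascular mortality in PARADIGM: 16.5% with enalapril vs 13.3%
# with neprilysin-I/ARB -> ARR = 3.2%, NNT of about 31 (the paper
# reports 32, presumably computed from unrounded event rates).
print(round(nnt(0.165, 0.133)))  # prints 31
```

The small discrepancy against the published NNT of 32 illustrates why recomputing from rounded percentages only approximates the trial's own figure.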
Cautions
There are 2 reasons clinicians may not see the same results in practice. First, the trial was stopped early, which can sometimes exaggerate benefits.[3] Second, the 2 run‐in periods eliminated patients who could not tolerate the medications at the trial doses. Additionally, although the study's authors were independent, the trial was funded by a pharmaceutical company.
Implications
This new combination drug of a neprilysin inhibitor and valsartan shows great promise at reducing cardiovascular mortality and hospitalizations for heart failure compared to enalapril alone. Given the high morbidity and mortality of heart failure, having a new agent in the treatment algorithm will be useful to patients and physicians. The drug was just approved by the FDA in July 2015 and will likely be offered as an alternative to ACE inhibitors.
VENOUS CONTRAST‐INDUCED NEPHROTOXICITY: IS THERE REALLY A RISK?
McDonald J, McDonald R, Carter R, et al. Risk of intravenous contrast material‐mediated acute kidney injury: a propensity score‐matched study stratified by baseline‐estimated glomerular filtration rate. Radiology. 2014;271(1):65‐73.
McDonald R, McDonald J, Carter R, et al. Intravenous contrast material exposure is not an independent risk factor for dialysis or mortality. Radiology. 2014;273(3):714‐725.
Background
It is a common practice to withhold intravenous contrast material from computed tomography (CT) scans in patients with even moderately poor renal function out of concern for causing contrast‐induced nephropathy (CIN). Our understanding of CIN is based largely on observational studies and outcomes of cardiac catheterizations, where larger amounts of contrast are given intra‐arterially into an atherosclerotic aorta.[4] The exact mechanism of injury is not clear, possibly from direct tubule toxicity or renal vasoconstriction.[5] CIN is defined as a rise in creatinine >0.5 mg/dL or a >25% rise in serum creatinine 24 to 48 hours after receiving intravenous contrast. Although it is usually self‐limited, there is concern that patients who develop CIN have an increased risk of dialysis and death.[6] In the last few years, radiologists have started to question whether the risk of CIN is overstated. A recent meta‐analysis of 13 studies demonstrated a similar likelihood of acute kidney injury in patients regardless of receiving intravenous contrast.[7] If the true incidence of CIN after venous contrast is actually lower, this raises the question of whether we are unnecessarily withholding contrast from CTs and thereby reducing their diagnostic accuracy. Two 2014 observational studies provide additional evidence that the concern for CIN may be overstated.
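The CIN threshold definition above is mechanical enough to state in code; a minimal sketch (the helper function and its interface are ours, not from the studies):

```python
def is_cin(pre_cr: float, post_cr: float) -> bool:
    """Flag contrast-induced nephropathy per the common definition: a rise
    in serum creatinine of >0.5 mg/dL or >25% over baseline, measured
    24 to 48 hours after intravenous contrast. Inputs are in mg/dL."""
    rise = post_cr - pre_cr
    return rise > 0.5 or rise > 0.25 * pre_cr

# A 0.4 mg/dL rise from a baseline of 1.0 is a 40% increase (meets the
# definition), while the same absolute rise from a baseline of 2.0 is
# only 20% and does not exceed 0.5 mg/dL (does not meet it).
print(is_cin(1.0, 1.4))  # prints True
print(is_cin(2.0, 2.4))  # prints False
```

Note the relative criterion makes the label baseline‐dependent, one reason small creatinine fluctuations in patients with normal renal function can be counted as "CIN" whether or not contrast was given.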
Findings
The 2 Mayo Clinic studies used the same database. They looked at all patients who underwent a contrast‐enhanced or unenhanced thoracic, abdominal, or pelvic CT between January 2000 and December 2010 at the Mayo Clinic. After limiting the data to patients with pre‐ and post‐CT creatinine measurements and excluding anyone on dialysis, with preexisting acute kidney injury, or who had received additional contrast within 14 days, they ended up with 41,229 patients, mostly inpatients. All of the patients were assigned propensity scores based on risk factors for the development of CIN and whether they would likely receive contrast. The patients were then subdivided into 4 renal function subgroups based on estimated glomerular filtration rate (eGFR). The patients who received contrast were matched based on their propensity scores to those who did not receive contrast within their eGFR subgroups. Unmatched patients were eliminated, leaving a cohort of 12,508 matched patients. The outcome of the first article was acute kidney injury (AKI), defined as a rise in creatinine >0.5 mg/dL at 24 to 48 hours. Though AKI rose with worsening eGFR subgroups (eGFR > 90 [1.2%] vs eGFR < 30 [14%]), the rates of AKI were the same regardless of contrast exposure. There was no statistical difference in any of the eGFR subgroups. The second study looked at important clinical outcomes: death and the need for dialysis. There was no statistical difference for emergent dialysis (odds ratio [OR]: 0.96; P=0.89) or 30‐day mortality (HR: 0.97; P=0.45) regardless of whether the patients received contrast or not.
Cautions
In propensity matching, unmeasured confounders can bias the results. However, the issue of whether venous contrast causes CIN will unlikely be settled in a randomized controlled trial. For patients with severe renal failure (eGFR < 30), there were far fewer patients in this subgroup, making it harder to draw conclusions. The amount of venous contrast given was not provided. Finally, this study evaluated intravenous contrast for CTs, not intra‐arterial contrast.
Implications
These 2 studies raise doubt as to whether the incidence of AKI after contrast‐enhanced CT can be attributed to the contrast itself. What exactly causes the rise in creatinine is probably multifactorial including lab variation, hydration, blood pressure changes, nephrotoxic drugs, and comorbid disease. In trying to decide whether to obtain a contrast‐enhanced CT for patients with chronic kidney dysfunction, these studies provide more evidence to consider in the decision‐making process. A conversation with the radiologist about the benefits gained from using contrast in an individual patient may be of value.
PREVENTION AND PROGNOSIS OF INPATIENT DELIRIUM
Hatta K, Kishi Y, Wada K, et al. Preventive effects of ramelteon on delirium: a randomized placebo‐controlled trial. JAMA Psychiatry. 2014;71(4):397‐403.
A new melatonin agonist dramatically reduces incident delirium.
Background
Numerous medications and therapeutic approaches have been studied to prevent incident delirium in hospitalized medical and surgical patients with varying success. Many of the tested medications also have the potential for significant undesirable side effects. An earlier small trial of melatonin appeared to have impressive efficacy for this purpose and be well tolerated, but the substance is not regulated by the FDA.[8] Ramelteon, a melatonin receptor agonist, is approved by the FDA for insomnia, and authors hypothesized that it, too, may be effective in delirium prevention.
Findings
This study was a multicenter, single‐blinded, randomized controlled trial of the melatonin‐agonist ramelteon versus placebo in elderly patients admitted to the hospital ward or ICU with serious medical conditions. Researchers excluded intubated patients or those with Lewy body dementia, psychiatric disorders, and severe liver disease. Patients received either ramelteon or placebo nightly for up to a week, and the primary end point was incident delirium as determined by a blinded observer using a validated assessment tool. Sixty‐seven patients were enrolled. The baseline characteristics in the arms of the trial were similar. In the placebo arm, 11 of 34 patients (32%) developed delirium during the 7‐day observation period. In the ramelteon arm, 1 of 33 (3%) developed delirium (P=0.003). The rate of drug discontinuation was the same in each arm.
Cautions
This study is small, and the single‐blinded design (the physicians and patients knew which group they were in but the observers did not) limits the validity of these results, mandating a larger double‐blinded trial.
Implications
Ramelteon showed a dramatic impact on preventing incident delirium in elderly hospitalized patients with serious medical conditions admitted to the ward or intensive care unit (ICU) (nonintubated) in this small study. If larger trials concur with the impact of this well‐tolerated and inexpensive medication, the potential for delirium incidence reduction could have a dramatic impact on how care for delirium‐vulnerable patients is conducted, as well as on the systems‐level costs associated with delirium care. Further studies of this class of medications are needed to more definitively establish its value in delirium prevention.
THE CONFUSION ASSESSMENT METHOD SEVERITY SCORE CAN QUANTIFY PROGNOSIS FOR DELIRIOUS MEDICAL INPATIENTS
Inouye SK, Kosar CM, Tommet D, et al. The CAM‐S: development and validation of a new scoring system for delirium in 2 cohorts. Ann Intern Med. 2014;160:526‐533.
Background
Delirium is common in hospitalized elderly patients, and numerous studies show that there are both short‐ and long‐term implications of developing delirium. Using well studied and validated tools has made identifying delirium fairly straightforward, yet its treatment remains difficult. Additionally, differentiating which patients will have a simpler clinical course from those at risk for a more morbid one has proved challenging. Using the Confusion Assessment Method (CAM), both in its short (4‐item) and long (10‐item) forms, as the basis for a prognostication tool, would allow for future research on treatment to have a scale against which to measure impact, and would allow clinicians to anticipate which patients were more likely to have difficult clinical courses.
Findings
The CAM Severity (CAM‐S) score was derived in 1219 subjects participating in 2 ongoing studies: 1 included high‐risk medical inpatients 70 years old or older, and the other included similarly aged patients undergoing major orthopedic, general, or vascular surgeries. Outcomes data were not available for the surgical patients. The CAM items were rated as either present/absent or absent/mild/severe, depending on the item, with an associated score attached to each item such that the 4‐item CAM had a score of 0 to 7 and the 10‐item CAM 0 to 19 (Table 2). Clinical outcomes from the medical patients cohort showed a dose response with increasing CAM‐S scores with respect to length of stay, adjusted cost, combined 90‐day end points of skilled nursing facility placement or death, and 90‐day mortality. Specifically, for patients with a CAM‐S (short form) score of 5 to 7, the 90‐day rate of death or nursing home residence was 62%, whereas the 90‐day postdischarge mortality rate was 36%.
| CAM item (short form) | Rating | CAM‐S points |
| --- | --- | --- |
| Acute onset with fluctuating course | Absent | 0 |
| | Present | 1 |
| Inattention or distractibility | Absent | 0 |
| | Mild | 1 |
| | Severe | 2 |
| Disorganized thinking, illogical or unclear ideas | Absent | 0 |
| | Mild | 1 |
| | Severe | 2 |
| Alteration of consciousness | Absent | 0 |
| | Mild | 1 |
| | Severe | 2 |
| Total | | 0‐7 |
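As a worked illustration of the short‐form scoring, a minimal sketch (the function, its string interface, and the assumption that a "mild" rating on any graded item scores 1 point are ours, not from the paper):

```python
# Illustrative scoring of the 4-item (short form) CAM-S. Point values
# assume: fluctuating course scored present (1) / absent (0); the other
# 3 items scored absent (0) / mild (1) / severe (2); total range 0-7.
SEVERITY = {"absent": 0, "mild": 1, "severe": 2}

def cam_s_short(fluctuating_course: bool, inattention: str,
                disorganized_thinking: str, altered_consciousness: str) -> int:
    score = 1 if fluctuating_course else 0
    score += SEVERITY[inattention]
    score += SEVERITY[disorganized_thinking]
    score += SEVERITY[altered_consciousness]
    return score

# A fluctuating, severely inattentive patient with mild disorganized
# thinking and severely altered consciousness scores 1+2+1+2 = 6, in the
# 5-7 band that carried the worst reported 90-day prognosis.
print(cam_s_short(True, "severe", "mild", "severe"))  # prints 6
```

Framing the score this way makes the dose‐response claim concrete: each one‐point increment reflects either an additional feature appearing or a graded feature worsening.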
Cautions
The CAM‐S, like the CAM, may work less well in patients with hypoactive delirium. This scale has been applied in a surgical cohort, but study outcomes were not presented in this article. This absence limits our ability to apply these results to a surgical population presently.
Implications
This study demonstrates that in medical inpatients, the CAM‐S is effective for prognostication. Moreover, the study points out that high‐scoring patients on the CAM‐S have quite poor prognoses, with more than one‐third dying by 3 months. This finding suggests that an important use of the CAM‐S is to identify patients about whom goals of care discussions should be held and end‐of‐life planning initiated if not previously done.
GET EXCITED ABOUT HEPATIC ENCEPHALOPATHY AGAINA NEW POSSIBLE TREATMENT
Rahimi R, Singal A, Cuthbert J, et al. Lactulose vs polyethylene glycol 3350‐electrolyte solution for treatment of overt hepatic encephalopathy: the HELP randomized clinical trial. JAMA Intern Med. 2014;174(11):1727‐1733.
Background
Lactulose has been the principal treatment for acute hepatic encephalopathy (HE) since 1966.[9] It theoretically works by lowering the pH of the colon and trapping ammonia as ammonium, which is then expelled. Alternatively, it may simply decrease transit time through the colon. In fact, earlier treatments for HE were cathartics such as magnesium salts. Unfortunately, 20% to 30% of patients are poor responders to lactulose, and patients do not like it. This new study tests whether a modern‐day cathartic, polyethylene glycol, works as well as lactulose.
Findings
In this unblinded, randomized controlled trial, patients presenting to the emergency department with acute HE were assigned to either lactulose 20 to 30 g for a minimum of 3 doses over 24 hours or 4 L of polyethylene glycol (PEG) over 4 hours. The 2 groups were similar in severity and etiology of liver disease. Patients were allowed to have received 1 dose of lactulose given in the emergency department prior to study enrollment. They were excluded if taking rifaximin. The primary outcome was improvement in the hepatic encephalopathy scoring algorithm (HESA) by 1 grade at 24 hours.[10] The algorithm scores HE from 0 (no clinical findings of HE) to 5 (comatose). Initial mean HESA scores in the 2 groups were identical (2.3).
In the lactulose group, 13/25 (52%) improved by at least 1 HESA score at 24 hours. Two patients (8%) completely cleared with a HESA score of 0. In comparison, 21/23 (91%) in the PEG group improved at 24 hours, and 10/23 (43%) had cleared with a HESA score of 0 (P<0.01). The median time to HE resolution was 2 days in the lactulose group compared with 1 day in the PEG group (P=0.01). There were no differences in serious adverse events. The majority (76%) of the PEG group received the full 4 L of PEG.
Cautions
The main limitations of the trial were the small sample size, the single‐center design, and the lack of blinding. Additionally, 80% of the PEG group received 1 dose of lactulose prior to enrollment. Statistically, more patients in the PEG group developed hypokalemia, which can worsen HE. Therefore, if PEG is used for acute HE, potassium will need to be monitored.
Implications
The results are intriguing and may represent a new possible treatment for acute HE once larger studies are done. Interestingly, the ammonia level dropped further in the lactulose group than the PEG group, yet there was more cognitive improvement in the PEG group. This raises questions about the role of ammonia and catharsis in HE. Although lactulose and rifaximin continue to be the standard of care, cathartics may be returning as a viable alternative.
SHOULD β‐BLOCKERS BE STOPPED IN PATIENTS WITH CIRRHOSIS WHEN SPONTANEOUS BACTERIAL PERITONITIS OCCURS?
Mandorfer M, Bota S, Schwabi P, et al. Nonselective beta blockers increase risk for hepatorenal syndrome and death in patients with cirrhosis and spontaneous bacterial peritonitis. Gastroenterology. 2014;146:1680–1690.
Background
Nonselective β‐blockers (NSBBs) are considered the aspirin of hepatologists, as they are used for primary and secondary prevention of variceal bleeds in patients with cirrhosis.[11] Since the 1980s, their benefit in reducing bleeding risk has been known, and more recently there has been evidence that they may reduce the risk of developing ascites in patients with compensated cirrhosis. Yet, there has been some contradictory evidence suggesting reduced survival in patients with decompensated cirrhosis and infections on NSBBs. This has led to the "window hypothesis" of NSBBs in cirrhosis, in which NSBBs are beneficial only during a certain window period during the progression of cirrhosis.[12] Early on in cirrhosis, before the development of varices or ascites, NSBBs have no benefit. As cirrhosis progresses and portal hypertension develops, NSBBs play a major role in reducing bleeding from varices. However, in advanced cirrhosis, NSBBs may become harmful. In theory, they block the body's attempt to increase cardiac output during situations of increased physiologic stress, resulting in decreased mean arterial pressure and perfusion. This, in turn, causes end‐organ damage and increased risk of death. When exactly this NSBB window closes is unclear. A 2014 study suggests the window should close when patients develop spontaneous bacterial peritonitis (SBP).
Findings
This retrospective study followed 607 consecutive patients seen at a liver transplant center in Vienna, Austria, from 2006 to 2011. All of the patients were followed from the time of their first paracentesis. They were excluded if SBP was diagnosed during the first paracentesis. Patients were grouped based on whether they took an NSBB. As expected, more patients on an NSBB had varices (90% vs 62%; P<0.001) and a lower mean heart rate (77.5 vs 83.9 beats/minute; P<0.001). However, the 2 groups were similar in mean arterial pressure, systolic blood pressure, Model for End‐Stage Liver Disease score (17.5), Child‐Pugh score (CPS) (50% were class C), and in the etiology of cirrhosis (55% were from alcoholic liver disease). They followed the patients for development of SBP. The primary outcome was transplant‐free survival. For the patients who never developed SBP, there was a 25% reduction in the risk of death for those on an NSBB adjusted for varices and CPS stage (HR=0.75, P=0.027). However, for the 182 patients who developed SBP, those on an NSBB had a 58% increased risk of death, again adjusted for varices and CPS stage (HR=1.58, P=0.014). Among the patients who developed SBP, there was a higher risk of hepatorenal syndrome (HRS) within 90 days for those on an NSBB (24% vs 11%, P=0.027). Although mean arterial pressure (MAP) had been similar in the 2 groups before SBP, after the development of SBP, those on an NSBB had a significantly lower MAP (77.2 vs 82.6 mm Hg, P=0.005).
Cautions
This is a retrospective study, and although the authors controlled for varices and CPS, it is still possible the 2 groups were not similar. Whether patients were actually taking the NSBB is unknown, and doses of the NSBB were variable.
Implications
This study provides more evidence for the NSBB window hypothesis in the treatment of patients with cirrhosis. It suggests that the window on NSBB closes when patients develop SBP, as NSBBs appear to increase mortality and the risk of HRS. Thus, NSBB therapy should probably be discontinued in cirrhotic patients developing SBP. The question is for how long? The editorial accompanying the article says permanently.[13]
VTE PROPHYLAXIS FOR MEDICAL INPATIENTS: IS IT A THING OF THE PAST?
Flanders SA, Greene T, Grant P, et al. Hospital performance for pharmacologic venous thromboembolism prophylaxis and rate of venous thromboembolism. A cohort study. JAMA Intern Med. 2014;174(10):1577–1584.
Background
Based on early research studies, many quality and regulatory organizations have stressed the importance of assessing hospitalized patients' venous thromboembolism (VTE) risk and prophylaxing those patients at increased risk either pharmacologically or mechanically. In 2011, a meta‐analysis of 40 studies of medical and stroke patients, including approximately 52,000 patients, failed to demonstrate a mortality benefit and showed that, per 1000 patients, preventing 3 pulmonary embolisms (PEs) came at the cost of 4 major bleeding episodes.[14] A second study in 2011, a multicenter, randomized controlled trial with medically complex patients deemed high risk for VTE, also failed to demonstrate a mortality benefit.[15] Despite these and other trials showing questionable benefit, guidelines continue to recommend that high‐risk medical patients should get pharmacologic prophylaxis against VTE.
Findings
This retrospective cohort study evaluated 20,794 medical patients (non‐ICU) across 35 hospitals, excluding those with a Caprini score of <2 (ie, low risk for VTE). The authors divided the hospitals into tertiles based on adherence to VTE prophylaxis guidelines. Patients were followed to 90 days after hospitalization with telephone calls (reaching 56%) and chart reviews (100% reviewed) to identify clinically evident VTE events, excluding those that occurred within the first 3 days of index hospitalization. The study identified no statistically significant differences among the tertiles in terms of VTE rates, either in the hospital or at 90 days, though the overall VTE event rate was low. Interestingly, 85% of events took place postdischarge. Subgroup analyses also failed to identify a population of medical patients who benefited from prophylaxis.
Cautions
Debate about whether the Caprini risk score is the best available VTE risk scoring system exists. This study also excluded surgical and ICU patients.
Implications
This trial adds to the mounting literature suggesting that current guidelines‐based pharmacologic VTE prophylaxis for medical patients may offer no clear benefit in terms of incident VTE events or mortality. Although it is not yet time to abandon VTE prophylaxis completely, this study does raise the important question of whether it is time to revisit the quality guidelines and regulatory standards around VTE prophylaxis in medical inpatients. It also highlights the difficulty in assessing medical patients for their VTE risk. Though this study is provocative and important for its real‐world setting, further studies are required.
OUT WITH THE OLD AND IN WITH THE NEW? SHOULD DIRECT ORAL ANTICOAGULANTS BE OUR FIRST CHOICE FOR CARING FOR PATIENTS WITH VTE AND ATRIAL FIBRILLATION?
van Es N, Coppens M, Schulman S, et al. Direct oral anticoagulants compared with vitamin K antagonists for acute venous thromboembolism: evidence from phase 3 trials. Blood. 2014;124(12):1968–1975.
For patients with acute VTE, direct oral anticoagulants work as well and are safer.
Background
There have been 6 large published randomized controlled trials of direct oral anticoagulants (DOACs) versus vitamin K antagonists (VKAs) in patients with acute VTE. Study sizes range from approximately 2500 to over 8000 subjects. All showed no significant difference between the arms with respect to efficacy (VTE or VTE‐related death) but had variable results with respect to major bleeding risk, a major concern given the nonreversibility of this group of medications. Additionally, subgroup analysis within these studies was challenging given sample size issues.
Findings
These 6 studies were combined in a meta‐analysis to address the DOACs' overall efficacy and safety profile, as well as to examine prespecified subgroups. The meta‐analysis included data from over 27,000 patients, evenly divided between DOACs (edoxaban, apixaban, rivaroxaban, and dabigatran) and VKAs, with the time in the therapeutic range (TTR) in the VKA arm being 64%. Overall, the primary efficacy endpoint (VTE and VTE‐related death) was similar (DOACs relative risk [RR]=0.90; 95% confidence interval [CI]: 0.77‐1.06), but major bleeding (DOACs RR=0.61; 95% CI: 0.45‐0.83; NNT=150) and combined fatal and intracranial bleeding (DOACs RR=0.37; 95% CI: 0.27‐0.68; NNT=314) favored the DOACs. In subgroup analysis, there was no efficacy difference between the therapeutic groups in the subset specifically with DVT or with PE, or with patients weighing >100 kg, though safety data in these subsets were not evaluable. Patients with creatinine clearances of 30 to 49 mL/min demonstrated similar efficacy in both treatment arms, and the safety analysis in this subset with moderate renal impairment was better in the DOAC arm. Cancer patients achieved better efficacy with similar safety with the DOACs, whereas elderly patients achieved both better safety and efficacy with DOACs.
Cautions
As yet, there are inadequate data on patients with more advanced renal failure (creatinine clearance <30 mL/min) to advise using DOACs in that subset. Also, as there were no data comparing cancer patients with VTE that investigated DOACs versus low molecular weight heparins (the standard of care rather than warfarin since the CLOT [Comparison of Low‐molecular‐weight heparin versus Oral anticoagulant Therapy] trial[16]), the current meta‐analysis does not yet answer whether DOACs should be used in this population despite the efficacy benefit noted in the subgroup analysis.
Implications
This large meta‐analysis strongly suggests we can achieve comparable treatment efficacy from the DOACs as with VKAs, with better safety profiles in patients with acute VTE. In the subset of patients with moderate renal impairment (creatinine clearance 30–49 mL/min), it appears safe and effective to choose DOACs.
IN PATIENTS WITH ATRIAL FIBRILLATION, DOACs APPEAR MORE EFFECTIVE THAN VKAs WITH COMPARABLE OR BETTER SAFETY PROFILES
Ruff CT, Giugliano RP, Braunwald E, et al. Comparison of the efficacy and safety of new oral anticoagulants with warfarin in patients with atrial fibrillation: a meta‐analysis of randomized trials. Lancet. 2014;383(9921):955–962.
Background
Meta‐analyses of the original phase 3 randomized trials comparing DOACs with VKAs for atrial fibrillation (AF) had already been published. In 2013, an additional trial of edoxaban, ENGAGE AF‐TIMI 48 (Effective Anticoagulation with Factor Xa Next Generation in Atrial Fibrillation-Thrombolysis in Myocardial Infarction 48), was published, and its inclusion warrants an updated analysis with a better opportunity to glean important subgroup information.[17]
Findings
This meta‐analysis included data on 71,683 patients, 42,411 in the DOAC arm and 29,272 in the warfarin arm, as 2 of the trials were 3‐arm studies comparing warfarin to a high dose and a low dose of the DOAC. Meta‐analyses of the 4 trials were broken down into a high‐dose subset (the 2 high‐dose arms plus the standard doses used in the other 2 trials) and a low‐dose subset (the 2 low‐dose arms plus the standard doses used in the other 2 trials). With respect to the efficacy endpoint (incident stroke or systemic embolization), the high‐dose subset analyses of the DOACs yielded a 19% reduction (P<0.0001; NNT=142) relative to the VKAs. The safety endpoint of major bleeding in this analysis identified a 14% reduction in the DOAC group that was nonsignificant (P=0.06). Within the high‐dose subset, analyses favored DOACs with respect to hemorrhagic stroke (51% reduction; P<0.0001; NNT=220), intracranial hemorrhage (52% reduction; P<0.0001; NNT=132), and overall mortality (10% reduction; P=0.0003; NNT=129), whereas DOACs increased the risk of gastrointestinal bleeding (25% increase; P=0.043; NNH=185). There was no significant difference between DOACs and warfarin with respect to ischemic stroke. The low‐dose subset had similar overall results, with even fewer hemorrhagic strokes balancing a higher incidence of ischemic strokes in the DOAC arm than with warfarin. Other important subgroup analyses suggest the safety and efficacy impact of DOACs holds for both VKA‐naive and VKA‐experienced patients, though it was statistically significant only for VKA‐naive patients. Additionally, the anticoagulation centers included in the study that had a TTR <66% seemed to gain a safety advantage from the DOACs, whereas both TTR groups (<66% and ≥66%) appeared to achieve an efficacy benefit from DOACs.
Cautions
There are not sufficient data to suggest routinely switching patients tolerating and well managed on VKAs to DOACs for AF.
Implications
DOACs reduce stroke and systemic emboli in patients with AF without increasing intracranial bleeding or hemorrhagic stroke, though at the cost of increased gastrointestinal bleeding in patients on the high‐dose regimens. Patients on the low‐dose regimens have an even lower hemorrhagic stroke risk, the benefit of which is negated by a higher risk of ischemic stroke than with VKAs. Centers with lower TTRs (and perhaps, by extrapolation, those patients with more difficulty staying in the therapeutic range) may gain more benefit by switching. New patients on treatment for AF should strongly be considered for DOAC therapy as the first line.
IN ELDERLY PATIENTS, THE DOACs APPEAR TO OFFER IMPROVED EFFICACY WITHOUT SACRIFICING SAFETY
Sardar P, Chatterjee S, Chaudhari S, Lip GYH. New oral anticoagulants in elderly adults: evidence from meta‐analysis of randomized trials. J Am Geriatr Soc. 2014;62(5):857–864.
Background
The prevalence of AF rises with age, as does the prevalence of malignancy, limited mobility, and other comorbidities that increase the risk for VTEs. These factors may also increase the risk of bleeding with conventional therapy with heparins and VKAs. As such, understanding the implications of using DOACs in the elderly population is important.
Findings
This meta‐analysis included the elderly (age ≥75 years) subset of patients from existing AF treatment and VTE treatment and prophylaxis randomized trials comparing DOACs with VKAs, low‐molecular‐weight heparin (LMWH), aspirin, or placebo. The primary safety outcome was major bleeding. For AF trials, the efficacy endpoint was stroke or systemic embolization, whereas in VTE trials it was VTE or VTE‐related death. The authors were able to extract data on 25,031 patients across 10 trials that evaluated rivaroxaban, apixaban, and dabigatran (not edoxaban), with follow‐up data ranging from 35 days to 2 years. For safety outcomes, the 2 arms showed no statistical difference (DOAC: 6.4%; conventional therapy: 6.3%; OR: 1.02; 95% CI: 0.73‐1.43). For efficacy endpoints in VTE studies, DOACs were more effective (3.7% vs 7.0%; OR: 0.45; 95% CI: 0.27‐0.77; NNT=30). For AF, the efficacy analysis also favored DOACs (3.3% vs 4.7%; OR: 0.65; 95% CI: 0.48‐0.87; NNT=71). When analyzed by the efficacy of the individual DOAC, rivaroxaban and apixaban both appeared to outperform the VKA/LMWH arm for both VTE and AF treatment, whereas data on dabigatran were available only for AF, also showing an efficacy benefit. Individual DOAC analyses for safety endpoints showed all 3 to be similar to VKA/LMWH.
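The NNTs quoted for the VTE and AF efficacy endpoints follow directly from the absolute event rates via NNT = 1/ARR, where ARR is the absolute risk reduction. A minimal arithmetic check, using only the rates reported above:

```python
def nnt(control_rate: float, treatment_rate: float) -> int:
    """Number needed to treat: reciprocal of the absolute risk
    reduction, rounded to the nearest whole patient."""
    arr = control_rate - treatment_rate  # absolute risk reduction
    return round(1 / arr)

# VTE efficacy endpoint: 7.0% (conventional) vs 3.7% (DOAC) -> NNT 30
print(nnt(0.070, 0.037))  # -> 30
# AF efficacy endpoint: 4.7% vs 3.3% -> NNT 71
print(nnt(0.047, 0.033))  # -> 71
```

Both values reproduce the NNTs reported in the meta‐analysis summary.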
Cautions
The authors note that coexisting low body weight and renal insufficiency may influence dosing choices in this population. There are specific dosage recommendations in the elderly for some DOACs.
Implications
The use of DOACs in patients aged 75 years and older appears to confer a substantial efficacy advantage when used for treatment of VTE and AF patients. The safety data presented in this meta‐analysis suggest that this class is comparable to VKA/LMWH medications.
CHANGING INPATIENT MANAGEMENT OF SKIN INFECTIONS
Boucher H, Wilcox M, Talbot G, et al. Once‐weekly dalbavancin versus daily conventional therapy for skin infection. N Engl J Med. 2014;370:2169–2179.
Corey G, Kabler H, Mahra P, et al. Single‐dose oritavancin in the treatment of acute bacterial skin infections. N Engl J Med. 2014;370:2180–2190.
Background
There are over 870,000 hospital admissions yearly for skin infection, making it one of the most common reasons for hospitalization in the United States.[18] Management often requires lengthy treatment with intravenous antibiotics, especially with the emergence of methicillin‐resistant Staphylococcus aureus. Results from 2 large randomized, double‐blinded, multicenter clinical trials looking at new once‐weekly intravenous antibiotics were published. Dalbavancin and oritavancin are both lipoglycopeptides in the same family as vancomycin. What is unique is that their serum drug concentrations exceed the minimum inhibitory concentrations for over a week. Both drugs were compared with vancomycin in noninferiority trials. The studies had similar outcomes. The dalbavancin results are presented below.
Findings
Researchers randomized 1312 patients with significant cellulitis, large abscess, or wound infection. Patients also had fever, leukocytosis, or bandemia, and the infection had to be deemed severe enough to require a minimum of 3 days of intravenous antibiotics. The patients could not have received any prior antibiotics. Over 80% of the patients had fevers, and more than half met the criteria for systemic inflammatory response syndrome. Patients were randomized to either dalbavancin (on day 1 and day 8) or vancomycin every 12 hours (1 g or 15 mg/kg), with both groups receiving placebo dosing of the other drug. The blinded physicians could decide to switch to an oral agent (placebo or linezolid in the vancomycin group) anytime after day 3, and the physicians could stop antibiotics anytime after day 10. Otherwise, all patients received 14 days of antibiotics.
The FDA‐approved outcome was cessation of spread of erythema at 48 to 72 hours and no fever at 3 independent readings. Results were similar in the dalbavancin group compared to the vancomycin‐linezolid group (79.7% vs 79.8%). Dalbavancin was deemed noninferior to vancomycin. The blinded investigators' assessment of treatment success at 2 weeks was also similar (96% vs 96.7%, respectively). More treatment‐related adverse events occurred in the vancomycin‐linezolid group (183 vs 139; P=0.02), and more deaths occurred in the vancomycin group (7 vs 1; P=0.03).
Cautions
These antibiotics have only been shown effective for complicated, acute bacterial skin infections. Their performance for other gram‐positive infections is unknown. In the future, it is possible that patients with severe skin infections will receive a dose of these antibiotics on hospital day 1 and be sent home with close follow‐up. However, that study has not yet been done to confirm efficacy and safety. Though the drugs appear safe, there needs to be more clinical use before they become standard of care, especially because of the long half‐life. Finally, these drugs are very expensive and provide broad‐spectrum gram‐positive coverage. They are not meant for a simple cellulitis.
Implications
These 2 new once‐weekly antibiotics, dalbavancin and oritavancin, are noninferior to vancomycin for acute bacterial skin infections. They provide alternative treatment choices for managing patients with significant infections requiring hospitalization. In the future, they may change the need for hospitalization of these patients or significantly reduce their length of stay. Though the drugs are expensive, a significant reduction in hospitalization would offset their cost.
SHOULD THEY STAY OR SHOULD THEY GO? FAMILY PRESENCE DURING CPR MAY IMPROVE THE GRIEF PROCESS DURABLY
Jabre P, Tazarourte K, Azoulay E, et al. Offering the opportunity for family to be present during cardiopulmonary resuscitation: 1 year assessment. Intensive Care Med. 2014;40:981–987.
Background
In 2013, a French study randomized adult family members of a patient undergoing cardiopulmonary resuscitation (CPR) occurring at home to either be invited to stay and watch the resuscitation or to have no specific invitation offered.[19] At 90 days, this study revealed that those who were invited to watch (and 79% did) had fewer symptoms of post‐traumatic stress disorder (PTSD) (27% vs 37%) and anxiety (15% vs 23%), though not depression, than did the group not offered the opportunity to watch (though 43% watched anyway). There were 570 subjects (family members) in the trial, of whom a greater number in the control arm declined to participate in a 90‐day follow‐up due to emotional distress. Notably, only 4% of the patients in this study undergoing CPR survived to day 28. Whether the apparent positive psychological impact of the offer to watch CPR for families was durable remained in question.
Findings
The study group followed the families up to 1 year. At that time, dropout rates were similar (with the assumption, as in the prior study, that those who dropped out of either arm had PTSD symptoms). At follow‐up, subjects were again assessed for PTSD, anxiety, and depression symptoms as well as for meeting criteria for having had a major depressive episode or complicated grief. Four hundred eight of the original 570 subjects were able to undergo reevaluation. The 1‐year results showed the group offered the chance to watch CPR had fewer PTSD symptoms (20% vs 32%) and depression symptoms (10% vs 16%), as well as fewer major depressive episodes (23% vs 31%) and less complicated grief (21% vs 36%) but without a durable impact on anxiety symptoms.
Cautions
The resuscitation efforts in question here occurred out of hospital (in the home). Part of the protocol for those family members observing CPR was that a clinician was assigned to stay with them and explain the resuscitation process as it occurred.
Implications
It is postulated that having the chance to observe CPR, if desired, may help the grieving process. This study clearly raises a question about the wisdom of routinely escorting patients' families out of the room during resuscitative efforts. It seems likely that the durable and important psychological effects observed in this study for family members would similarly persist in emergency department and inpatient settings, where staff can be with patients' families to talk them through the events they are witnessing. It is time to ask families if they prefer to stay and watch CPR and not automatically move them to a waiting room.
Disclosure: Nothing to report.
- Journals in the 2014 release of the JCR. Available at: http://scientific.thomsonreuters.com/imgblast/JCRFullCovlist-2014.pdf. Accessed August 28, 2015.
- Neprilysin inhibition—a novel therapy for heart failure. N Engl J Med. 2014;371(11):1062–1064.
- Stopping randomized trials early for benefit and estimation of treatment effects: systematic review and meta‐regression analysis. JAMA. 2010;303(12):1180–1187.
- Intravenous contrast medium‐induced nephrotoxicity: is the medical risk really as great as we have come to believe? Radiology. 2010;256(1):21–28.
- Pathophysiology of contrast medium‐induced nephropathy. Kidney Int. 2005;68(1):14–22.
- Contrast‐induced acute kidney injury: short‐ and long‐term implications. Semin Nephrol. 2011;31(3):300–309.
- Frequency of acute kidney injury following intravenous contrast medium administration: a systematic review and meta‐analysis. Radiology. 2013;267(1):119–128.
- Melatonin decreases delirium in elderly patients: a randomized, placebo‐controlled trial. Int J Geriatr Psychiatry. 2011;26(7):687–694.
- Lactulose in the treatment of chronic portal‐systemic encephalopathy. A double‐blind clinical trial. N Engl J Med. 1969;281(8):408–412.
- Performance of the hepatic encephalopathy scoring algorithm in a clinical trial of patients with cirrhosis and severe hepatic encephalopathy. Am J Gastroenterol. 2009;104(6):1392–1400.
- The changing role of beta‐blocker therapy in patients with cirrhosis. J Hepatol. 2014;60(3):643–653.
- The window hypothesis: haemodynamic and non‐haemodynamic effects of beta‐blockers improve survival of patients with cirrhosis during a window in the disease. Gut. 2012;61(7):967–969.
- When should the beta‐blocker window in cirrhosis close? Gastroenterology. 2014;146(7):1597–1599.
- Venous thromboembolism prophylaxis in hospitalized medical patients and those with stroke: a background review for an American College of Physicians Clinical Practice Guideline. Ann Intern Med. 2011;155(9):602–615.
- LIFENOX Investigators. Low‐molecular‐weight heparin and mortality in acutely ill medical patients. N Engl J Med. 2011;365(26):2463–2472.
- Randomized Comparison of Low‐Molecular‐Weight Heparin versus Oral Anticoagulant Therapy for the Prevention of Recurrent Venous Thromboembolism in Patients with Cancer (CLOT) Investigators. Low‐molecular‐weight heparin versus a coumarin for the prevention of recurrent venous thromboembolism in patients with cancer. N Engl J Med. 2003;349(2):146–153.
- ENGAGE AF‐TIMI 48 Investigators. Edoxaban versus warfarin in patients with atrial fibrillation. N Engl J Med. 2013;369(22):2093–2104.
- Pharmacology and the treatment of complicated skin and skin‐structure infections. N Engl J Med. 2014;370(23):2238–2239.
- Family presence during cardiopulmonary resuscitation. N Engl J Med. 2013;368(11):1008–1018.
Fecal Microbiota Transplant for CDI
Symptomatic Clostridium difficile infection (CDI) results when C difficile, a gram‐positive bacillus that is an obligate anaerobe, produces cytotoxins TcdA and TcdB, causing epithelial and mucosal injury in the gastrointestinal tract.[1] Though it was first identified in 1978 as the causative agent of pseudomembranous colitis, and several effective treatments have subsequently been discovered,[2] nearly 4 decades later C difficile remains a major nosocomial pathogen and the most frequent infectious cause of healthcare‐associated diarrhea. The incidence of this toxin‐mediated infection in the United States has increased dramatically, especially in hospitals and nursing homes, where there are now nearly 500,000 new cases and 30,000 deaths per year.[3, 4, 5, 6] This increased burden of disease is due both to the emergence of several strains that have led to a worldwide epidemic[7] and to a predilection for CDI in older adults, who constitute a growing proportion of hospitalized patients.[8] Ninety‐two percent of CDI‐related deaths occur in adults >65 years old,[9] and the risk of recurrent CDI is 2‐fold higher with each decade of life.[10] It is estimated that CDI is responsible for $1.5 billion in excess healthcare costs each year in the United States,[11] and that much of the additional cost and morbidity of CDI is due to recurrence, with around 83,000 cases per year.[6]
The human gut microbiota, which is a diverse ecosystem consisting of thousands of bacterial species,[12] protects against invasive pathogens such as C difficile.[13, 14] The pathogenesis of CDI requires disruption of the gut microbiota before onset of symptomatic disease,[15] and exposure to antibiotics is the most common precipitant (Figure 1).[16] Following exposure, the manifestations can vary from asymptomatic colonization, to a self‐limited diarrheal illness, to a fulminant, life‐threatening colitis.[1] Even among those who recover, recurrent disease is common.[10] A first recurrence will occur in 15% to 20% of successfully treated patients, a second recurrence will occur in 45% of those patients, and up to 5% of all patients enter a prolonged cycle of CDI with multiple recurrences.[17, 18, 19]

THE NEED FOR BETTER TREATMENT MODALITIES: RATIONALE
Conventional treatments (Table 1) utilize antibiotics with activity against C difficile,[20, 21] but these antibiotics also act against other gut bacteria, limiting the ability of the microbiota to fully recover following CDI and predisposing patients to recurrence.[22] Traditional treatments for CDI result in a high incidence of recurrence (35%), and up to 65% of those patients who are again treated with conventional approaches develop a chronic pattern of recurrent CDI.[23] Though other factors may also explain why patients have recurrences (such as a low serum antibody response to C difficile toxins,[24] use of medications such as proton pump inhibitors,[10] and the specific strain of C difficile causing infection[10, 21]), restoration of the gut microbiome through fecal microbiota transplantation (FMT) is the treatment strategy that has garnered the most attention and has gained acceptance among practitioners for the treatment of recurrent CDI when conventional treatments have failed.[25] A review of the practices and evidence for the use of FMT in the treatment of CDI in hospitalized patients is presented here, with recommendations shown in Table 2.
Table 1. Classification and Usual Treatment of CDI

| Type of CDI | Associated Signs/Symptoms | Usual Treatment(s)[20] |
|---|---|---|
| Primary CDI, nonsevere | Diarrhea without signs of systemic infection, WBC <15,000 cells/µL, and serum creatinine <1.5 times the premorbid level | Metronidazole 500 mg by mouth 3 times daily for 10–14 days, OR vancomycin 125 mg by mouth 4 times daily for 10–14 days, OR fidaxomicin 200 mg by mouth twice daily for 10 days^a |
| Primary CDI, severe | Signs of systemic infection and/or WBC ≥15,000 cells/µL, or serum creatinine ≥1.5 times the premorbid level | Vancomycin 125 mg by mouth 4 times daily for 10–14 days, OR fidaxomicin 200 mg by mouth twice daily for 10 days^a |
| Primary CDI, complicated | Signs of systemic infection including hypotension, ileus, or megacolon | Vancomycin 500 mg by mouth 4 times daily, AND vancomycin 500 mg by rectum 4 times daily, AND intravenous metronidazole 500 mg 3 times daily |
| Recurrent CDI | Return of symptoms with positive Clostridium difficile testing within 8 weeks of onset, but after initial symptoms resolved with treatment | First recurrence: same as initial treatment, based on severity. Second recurrence: start treatment based on severity, followed by a pulsed and/or tapered vancomycin regimen over 6 or more weeks |
Table 2. Recommendations on the Use of FMT by Type of CDI

| Type of CDI | Recommendation on Use of FMT |
|---|---|
| Primary CDI, nonsevere | Insufficient data on safety/efficacy to make a recommendation; effective conventional treatments exist |
| Primary CDI, severe | Not recommended: insufficient data on safety/efficacy, with documented adverse events |
| Primary CDI, complicated | Not recommended: insufficient data on safety/efficacy, with documented adverse events |
| Recurrent CDI (usually second recurrence) | Recommended, based on data from case reports, systematic reviews, and 2 randomized controlled clinical trials demonstrating safety and efficacy |
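The laboratory thresholds in the severity classification above amount to a simple decision rule. A minimal, illustrative sketch (the function and parameter names are hypothetical; no such code appears in any guideline, and it is no substitute for clinical judgment):

```python
# Illustrative triage of primary CDI severity per the thresholds in Table 1.
# Names and structure are hypothetical; this is a sketch, not a clinical tool.
def classify_cdi(wbc_per_ul: float, creatinine_ratio: float,
                 hypotension: bool = False, ileus: bool = False,
                 megacolon: bool = False) -> str:
    """creatinine_ratio = current serum creatinine / premorbid level."""
    if hypotension or ileus or megacolon:
        return "complicated"
    if wbc_per_ul >= 15_000 or creatinine_ratio >= 1.5:
        return "severe"
    return "nonsevere"

print(classify_cdi(12_000, 1.2))                   # nonsevere
print(classify_cdi(18_000, 1.0))                   # severe
print(classify_cdi(9_000, 1.1, hypotension=True))  # complicated
```

Note that the "complicated" check comes first, since systemic signs such as hypotension override the laboratory cutoffs.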
OVERVIEW OF FMT
FMT is not a modern invention; there are reports of its use in ancient China for various purposes.[26] It was first described as a treatment for pseudomembranous colitis in the 1950s,[27] and in the past several years the use of FMT for CDI has increasingly gained acceptance as a safe and effective treatment. The optimal protocol for FMT is unknown; there are numerous published methods of stool preparation, infusion, and recipient and donor preparation. Diluents include tap water, normal saline, or even yogurt.[23, 28, 29] Sites of instillation of the stool include the stomach, small intestine, and large intestine.[23, 29, 30] Methods of recipient preparation for the infusion include cessation of antibiotic therapy 24 to 48 hours prior to FMT, a bowel preparation or lavage, and use of antimotility agents, such as loperamide, to aid retention of the transplanted stool.[28] Donors may include friends or family members of the patient, or 1 or more universal donors for an entire center. In both cases, screening for blood‐borne and fecal pathogens is performed before one can donate stool, though the tests performed vary between centers. FMT has been performed in both inpatient and outpatient settings, and a published study that instructed patients on self‐administration of fecal enema at home also demonstrated success.[30]
Although there are numerous variables to consider in designing a protocol, as discussed further below, it is encouraging that FMT appears to be highly effective regardless of the specific details of the protocol.[28] If the first procedure fails, evidence suggests a second or third treatment can be quite effective.[28] In a recent advance, successful FMT via administration of frozen stool oral capsules has been demonstrated,[31] which potentially removes many system‐ and patient‐level barriers to receipt of this treatment.
CLINICAL EVIDENCE FOR EFFICACY OF FMT IN TREATMENT OF CDI
Recurrent CDI
The clinical evidence for FMT is most robust for recurrent CDI, consisting of case reports and case series, recently aggregated in 2 large systematic reviews, as well as several clinical trials.[23, 29] Gough et al. published the larger of the 2 reviews, with data from 317 patients treated via FMT for recurrent CDI,[23] including FMT via retention enema (35%), colonoscopic infusion (42%), and gastric infusion (23%). Though the authors noted differences in resolution proportions among routes of infusion, types of donors, and types of infusates, it is not possible to draw definite conclusions from these data given their anecdotal nature. Regardless of the specific protocol's details, 92% of patients in the review had resolution of recurrent CDI after 1 or more treatments, with 89% improving after only 1 treatment. Another systematic review of FMT, for both CDI and non‐CDI indications, reinforced its efficacy in CDI and its overall benign safety profile.[32] Other individual case series and reports of FMT for CDI not included in these reviews have been published; they too demonstrate an excellent resolution rate.[33, 34, 35, 36, 37, 38] As with any case reports/series, generalizing from these data to arrive at conclusions about the safety and efficacy of FMT for CDI is limited by potential confounding and publication bias; thus, there emerged a need for high‐quality prospective trials.
The first randomized, controlled clinical trial (RCT) of FMT for recurrent CDI was reported in 2013.[39] Three treatment groups were compared: vancomycin for 5 days followed by FMT (n=16), vancomycin alone for 14 days (n=13), or vancomycin for 14 days with bowel lavage (n=13). Despite a strict definition of cure (absence of diarrhea, or persistent diarrhea attributable to another cause, with 3 consecutive stool tests negative for C difficile toxin), the study was stopped early after an interim analysis due to resolution of CDI in 94% of patients in the FMT arm (81% after just 1 infusion) versus 23% to 31% in the other arms. Off‐protocol FMT was then offered to patients in the other 2 groups, and 83% of them were also cured.
Youngster et al. conducted a pilot RCT with 10 patients in each group, where patients were randomized to receive FMT via either colonoscopy or nasogastric tube from a frozen fecal suspension, and no difference in efficacy was seen between administration routes, with an overall cure rate of 90%.[40] Subsequently, Youngster et al. conducted an open‐label noncomparative study with frozen fecal capsules for FMT in 20 patients with recurrent CDI.[31] Resolution occurred in 14 (70%) patients after a single treatment, and 4 of the 6 nonresponders had resolution upon retreatment for an overall efficacy of 90%.
Finally, Cammarota et al. conducted an open‐label RCT on FMT for recurrent CDI,[41] comparing FMT to a standard course of vancomycin for 10 days, followed by pulsed dosing every 2 to 3 days for 3 weeks. The study was stopped after a 1‐year interim analysis as 18 of 20 patients (90%) treated by FMT exhibited resolution of CDI‐associated diarrhea compared to only 5 of 19 patients (26%) in the vancomycin‐treated group (P<0.001).
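The headline comparison from this trial (18 of 20 cured with FMT vs 5 of 19 with vancomycin) can be sanity-checked with a two-sided Fisher exact test built from first principles. This is an illustrative recomputation, not the trial's published analysis:

```python
from math import comb

# Two-sided Fisher exact test on a 2x2 table [[a, b], [c, d]], implemented
# from the hypergeometric distribution to check the reported RCT comparison
# (FMT: 18 cured / 2 not; vancomycin: 5 cured / 14 not).
def fisher_exact_two_sided(a: int, b: int, c: int, d: int) -> float:
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def p_table(x: int) -> float:
        # Probability of a table with x successes in row 1, margins fixed.
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # Sum probabilities of all tables at least as extreme as the observed one.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

p = fisher_exact_two_sided(18, 2, 5, 14)
print(f"two-sided p = {p:.1e}")  # well below 0.001, consistent with reported P<0.001
```

The recomputed p-value falls comfortably below the 0.001 threshold reported by the investigators, which is why the interim analysis justified early stopping.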
Primary and Severe CDI
There are few data on the use of FMT for primary, nonrecurrent CDI aside from a few case reports, which are included in the data presented above. A mathematical model of CDI in an intensive care unit assessed the role of FMT on primary CDI,[42] and predicted a decreased median incidence of recurrent CDI in patients treated with FMT for primary CDI. In addition to the general limitations inherent in any mathematical model, the study had specific assumptions for model parameters that limited generalizability, such as lack of incorporation of known risk factors for CDI and assumed immediate, persistent disruption of the microbiota after any antimicrobial exposure until FMT occurred.[43]
Lagier et al.[44] conducted a nonrandomized, open‐label, before‐and‐after prospective study comparing mortality between 2 intervention periods: conventional antibiotic treatment for CDI versus early FMT via nasogastric infusion. The shift occurred out of clinical need, as their hospital in Marseille experienced a ribotype 027 outbreak with a dramatic overall mortality rate (50.8%). Mortality was significantly lower in the FMT group than in the conventional‐treatment group (18.8% vs 64.4%, P<0.01). This was an older cohort (mean age 84 years), suggesting that in an epidemic setting with a high mortality rate, early FMT may be beneficial; however, these data cannot be extrapolated to support early FMT for primary CDI in a nonepidemic setting.
The evidence for use of FMT in severe CDI (defined in Table 1) likewise consists of published case reports, which suggest efficacy.[45, 46, 47, 48] The study by Lagier et al.[44] does not provide data on severity classification, but its high mortality rate and the observed benefit of FMT over conventional therapy suggest that at least some patients presented with severe CDI and benefited. However, 1 documented death (discussed further below) following FMT for severe CDI highlights the need for caution before this treatment is used in that setting.[49]
Patient and Provider Perceptions Regarding Acceptability of FMT as a Treatment Option for CDI
A commonly cited reason for a limited role of FMT is the aesthetics of the treatment. However, few studies exist on the perceptions of patients and providers regarding FMT. Zipursky et al. surveyed 192 outpatients on their attitudes toward FMT using hypothetical case scenarios.[50] Only 1 patient had a history of CDI. The results were largely positive, with 81% of respondents agreeing to FMT for CDI. However, the need to handle stool and the nasogastric route of administration were identified as the most unappealing aspects of FMT. More respondents (90%, P=0.002) agreed to FMT when offered as a pill.
The same group of investigators undertook an electronic survey to examine physician attitudes toward FMT,[51] and found that 83 of 135 physicians (65%) in their sample had not offered or referred a patient for FMT. Frequent reasons for this included institutional barriers, concern that patients would find it too unappealing, and uncertainty regarding indications for FMT. Only 8% of physicians believed that patients would choose FMT if given the option. As the role of FMT in CDI continues to grow, it is likely that patient and provider perceptions and attitudes regarding this treatment will evolve to better align.
SAFETY OF FMT
Short‐term Complications
Serious adverse effects directly attributable to FMT in patients with normal immune function are uncommon. Irritable bowel-like symptoms (constipation, diarrhea, cramping, bloating) are observed shortly after FMT and usually last less than 48 hours.[23] A recent case series of immunocompromised patients (excluding those with inflammatory bowel disease [IBD]) treated for CDI with FMT found few adverse events in this group.[35] However, patients with IBD may have a different risk profile; the same case series noted adverse events in 14% of IBD patients, some of whom experienced a disease flare requiring hospitalization.[35] No cases of septicemia or other infections were observed in this series. An increased risk of IBD flare, fever, and elevated inflammatory markers following FMT has also been observed in other studies.[52, 53, 54] However, the interaction between IBD and the microbiome is complex, and a recent RCT of FMT in patients with ulcerative colitis (without CDI) did not show any significant adverse events.[55] FMT side effects may vary by administration method and may reflect complications of the method itself rather than of FMT (for example, misplacement of a nasogastric tube, or the perforation risk of colonoscopy).
Deaths following FMT are rare and often are not directly attributed to FMT. One reported death occurred as a result of aspiration pneumonia during sedation for colonoscopy for FMT.[35] In another case, a patient with severe CDI was treated with FMT, did not achieve cure, and developed toxic megacolon and shock, dying shortly after. The authors speculate that withdrawal of antibiotics with activity against CDI following FMT contributed to the outcome, rather than FMT itself.[49] FMT is largely untested in patients with severe CDI,[45, 46, 47, 48] and this fatal case of toxic megacolon warrants caution.
Long‐term Complications
The long‐term safety of FMT is unknown. The interaction between the gut microbiome and the host is a complex system that remains incompletely understood, but associations with disease processes have been demonstrated: the gut microbiome may be associated with colon cancer, diabetes, obesity, and atopic disorders.[56] Whether FMT contributes to these conditions is unknown, as is whether targeted screening/selection of stool for infusion can mitigate these potential risks.
In the only study to capture long‐term outcomes after FMT, 77 patients were followed for 3 to 68 months (mean 17 months).[57] New conditions such as ovarian cancer, myocardial infarction, autoimmune disease, and stroke were observed. Although it is not possible to establish causality from this study or infer an increased risk of these conditions from FMT, the results underscore the need for long‐term follow‐up after FMT.
Regulatory Status
The increased use of FMT for CDI and interest in non‐CDI indications led the US Food and Drug Administration (FDA) in 2013 to publish an initial guidance statement regulating stool as a biologic agent.[58] Subsequently, the FDA issued guidance stating that it would exercise enforcement discretion for physicians administering FMT to treat patients with C difficile infection; thus, an investigational new drug application is not required, but appropriate informed consent from the patient, indicating that FMT is an investigational therapy, is needed. A revision to this guidance is in progress.[59]
Future Directions
Expansion of the indications for FMT and the use of synthetic and/or frozen stool are directions currently under active exploration. A number of clinical trials studying FMT for CDI are underway,[60, 61, 62, 63, 64, 65] and these may shed light on the safety and efficacy of FMT for primary CDI and severe CDI, and on FMT as a preemptive therapy for high‐risk patients on antibiotics. Frozen stool preparations, often from a known set of prescreened donors and recently in capsule form, have been used for FMT and are gaining popularity.[31, 33] A synthetic intestinal microbiota suspension for use in FMT is currently being tested.[62] A nonprofit stool bank, OpenBiome, also supplies prescreened frozen stool preparations to clinicians.
CONCLUSIONS
Based on several prospective trials and observational data, FMT appears to be a safe and effective treatment for recurrent CDI that is superior to conventional approaches. Despite recent pivotal advances in the field of FMT, there remain many unanswered questions, and further research is needed to examine the optimal parameters, indications, and outcomes with FMT.
Disclosures
K.R. is supported by grants from the Claude D. Pepper Older Americans Independence Center (grant number AG‐024824) and the Michigan Institute for Clinical and Health Research (grant number 2UL1TR000433). N.S. is supported by a VA MERIT award. The contents of this article do not necessarily represent the views of the Department of Veterans Affairs. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The authors report no conflicts of interest.
- Emergence of Clostridium difficile‐associated disease in North America and Europe. Clin Microbiol Infect. 2006;12:2–18.
- Antibiotic‐associated pseudomembranous colitis due to toxin‐producing clostridia. N Engl J Med. 1978;298(10):531–534.
- Clostridium difficile infection in Ohio hospitals and nursing homes during 2006. Infect Control Hosp Epidemiol. 2009;30(6):526–533.
- Attributable burden of hospital‐onset Clostridium difficile infection: a propensity score matching study. Infect Control Hosp Epidemiol. 2013;34(6):588–596.
- Centers for Disease Control and Prevention. Vital Signs. Making health care safer. Stopping C. difficile infections. Available at: http://www.cdc.gov/VitalSigns/Hai/StoppingCdifficile. Accessed January 15, 2015.
- Burden of Clostridium difficile infection in the United States. N Engl J Med. 2015;372(9):825–834.
- Emergence and global spread of epidemic healthcare‐associated Clostridium difficile. Nat Genet. 2013;45(1):109–113.
- Effect of age on treatment outcomes in Clostridium difficile infection. J Am Geriatr Soc. 2013;61(2):222–230.
- Current status of Clostridium difficile infection epidemiology. Clin Infect Dis. 2012;55(suppl 2):S65–S70.
- Risk factors for recurrence, complications and mortality in Clostridium difficile infection: a systematic review. PLoS One. 2014;9(6):e98400.
- Health care‐associated infections: a meta‐analysis of costs and financial impact on the US health care system. JAMA Intern Med. 2013;173(22):2039–2046.
- Human gut microbiome viewed across age and geography. Nature. 2012;486(7402):222–227.
- Colonization resistance of the digestive tract in conventional and antibiotic‐treated mice. Epidemiol Infect. 1971;69(3):405–411.
- Colonization resistance. Antimicrob Agents Chemother. 1994;38(3):409.
- Role of the intestinal microbiota in resistance to colonization by Clostridium difficile. Gastroenterology. 2014;146(6):1547–1553.
- Antibiotic‐induced shifts in the mouse gut microbiome and metabolome increase susceptibility to Clostridium difficile infection. Nat Commun. 2014;5:3114.
- Fecal bacteriotherapy for recurrent Clostridium difficile infection. Anaerobe. 2009;15(6):285–289.
- Treatment of recurrent Clostridium difficile diarrhea. Gastroenterol Hepatol. 2006;2(3):203–208.
- Bacteriotherapy using fecal flora: toying with human motions. J Clin Gastroenterol. 2004;38(6):475–483.
- Clinical practice guidelines for Clostridium difficile infection in adults: 2010 update by the Society for Healthcare Epidemiology of America (SHEA) and the Infectious Diseases Society of America (IDSA). Infect Control Hosp Epidemiol. 2010;31(5):431–455.
- Fidaxomicin versus vancomycin for Clostridium difficile infection: meta‐analysis of pivotal randomized controlled trials. Clin Infect Dis. 2012;55(suppl 2):S93–S103.
- Decreased diversity of the fecal microbiome in recurrent Clostridium difficile‐associated diarrhea. J Infect Dis. 2008;197(3):435–438.
- Systematic review of intestinal microbiota transplantation (fecal bacteriotherapy) for recurrent Clostridium difficile infection. Clin Infect Dis. 2011;53(10):994–1002.
- Association between antibody response to toxin A and protection against recurrent Clostridium difficile diarrhoea. Lancet. 2001;357(9251):189–193.
- Treatment approaches including fecal microbiota transplantation for recurrent Clostridium difficile infection (RCDI) among infectious disease physicians. Anaerobe. 2013;24:20–24.
- Should we standardize the 1,700‐year‐old fecal microbiota transplantation? Am J Gastroenterol. 2012;107(11):1755.
- Fecal enema as an adjunct in the treatment of pseudomembranous enterocolitis. Surgery. 1958;44(5):854–859.
- Treating Clostridium difficile infection with fecal microbiota transplantation. Clin Gastroenterol Hepatol. 2011;9(12):1044–1049.
- Fecal microbiota transplantation for Clostridium difficile infection: systematic review and meta‐analysis. Am J Gastroenterol. 2013;108(4):500–508.
- Success of self‐administered home fecal transplantation for chronic Clostridium difficile infection. Clin Gastroenterol Hepatol. 2010;8(5):471–473.
- Oral, capsulized, frozen fecal microbiota transplantation for relapsing Clostridium difficile infection. JAMA. 2014;312(17):1772–1778.
- Systematic review: faecal microbiota transplantation therapy for digestive and nondigestive disorders in adults and children. Aliment Pharmacol Ther. 2014;39(10):1003–1032.
- Standardized frozen preparation for transplantation of fecal microbiota for recurrent Clostridium difficile infection. Am J Gastroenterol. 2012;107(5):761–767.
- Fecal transplant via retention enema for refractory or recurrent Clostridium difficile infection. Arch Intern Med. 2012;172(2):191–193.
- Fecal microbiota transplant for treatment of Clostridium difficile infection in immunocompromised patients. Am J Gastroenterol. 2014;109(7):1065–1071.
- Efficacy of combined jejunal and colonic fecal microbiota transplantation for recurrent Clostridium difficile infection. Clin Gastroenterol Hepatol. 2014;12(9):1572–1576.
- Fecal microbiota transplantation for refractory Clostridium difficile colitis in solid organ transplant recipients. Am J Transplant. 2014;14(2):477–480.
- Faecal microbiota transplantation and bacteriotherapy for recurrent Clostridium difficile infection: a retrospective evaluation of 31 patients. Scand J Infect Dis. 2014;46(2):89–97.
- Duodenal infusion of donor feces for recurrent Clostridium difficile. N Engl J Med. 2013;368(5):407–415.
- Fecal microbiota transplant for relapsing Clostridium difficile infection using a frozen inoculum from unrelated donors: a randomized, open‐label, controlled pilot study. Clin Infect Dis. 2014;58(11):1515–1522.
- Randomised clinical trial: faecal microbiota transplantation by colonoscopy vs. vancomycin for the treatment of recurrent Clostridium difficile infection. Aliment Pharmacol Ther. 2015;41(9):835–843.
- A mathematical model to evaluate the routine use of fecal microbiota transplantation to prevent incident and recurrent Clostridium difficile infection. Infect Control Hosp Epidemiol. 2013;35(1):18–27.
- Commentary: fecal microbiota therapy: ready for prime time? Infect Control Hosp Epidemiol. 2014;35(1):28–30.
- Dramatic reduction in Clostridium difficile ribotype 027‐associated mortality with early fecal transplantation by the nasogastric route: a preliminary report. Eur J Clin Microbiol Infect Dis. 2015;34(8):1597–1601.
- Fecal microbiota transplantation for fulminant Clostridium difficile infection in an allogeneic stem cell transplant patient. Transplant Infect Dis. 2012;14(6):E161–E165.
- Faecal microbiota transplantation for severe Clostridium difficile infection in the intensive care unit. Eur J Gastroenterol Hepatol. 2013;25(2):255–257.
- Successful colonoscopic fecal transplant for severe acute Clostridium difficile pseudomembranous colitis. Rev Gastroenterol Mex. 2011;77(1):40–42.
- Successful treatment of fulminant Clostridium difficile infection with fecal bacteriotherapy. Ann Intern Med. 2008;148(8):632–633.
- Tempered enthusiasm for fecal transplant. Clin Infect Dis. 2014;59(2):319.
- Patient attitudes toward the use of fecal microbiota transplantation in the treatment of recurrent Clostridium difficile infection. Clin Infect Dis. 2012;55(12):1652–1658.
- Physician attitudes toward the use of fecal microbiota transplantation for the treatment of recurrent Clostridium difficile infection. Can J Gastroenterol Hepatol. 2014;28(6):319–324.
- Transient flare of ulcerative colitis after fecal microbiota transplantation for recurrent Clostridium difficile infection. Clin Gastroenterol Hepatol. 2013;11(8):1036–1038.
- Temporal bacterial community dynamics vary among ulcerative colitis patients after fecal microbiota transplantation. Am J Gastroenterol. 2013;108(10):1620–1630.
- Alteration of intestinal dysbiosis by fecal microbiota transplantation does not induce remission in patients with chronic active ulcerative colitis. Inflamm Bowel Dis. 2013;19(10):2155–2165.
- Findings from a randomized controlled trial of fecal transplantation for patients with ulcerative colitis. Gastroenterology. 2015;149(1):110–118.e4.
- Gut microbiota in health and disease. Physiol Rev. 2010;90(3):859–904.
- Long‐term follow‐up of colonoscopic fecal microbiota transplant for recurrent Clostridium difficile infection. Am J Gastroenterol. 2012;107(7):1079–1087.
- US Food and Drug Administration. Guidance for industry: enforcement policy regarding investigational new drug requirements for use of fecal microbiota for transplantation to treat Clostridium difficile infection not responsive to standard therapies. Available at: http://www.fda.gov/biologicsbloodvaccines/guidancecomplianceregulatoryinformation/guidances/vaccines/ucm361379.htm. Accessed July 1, 2014.
- US Food and Drug Administration. Draft guidance for industry: enforcement policy regarding investigational new drug requirements for use of fecal microbiota for transplantation to treat Clostridium difficile infection not responsive to standard therapies. Available at: http://www.fda.gov/biologicsbloodvaccines/guidancecomplianceregulatoryinformation/guidances/vaccines/ucm387023.htm. Accessed July 1, 2014.
- University Health Network Toronto. Oral vancomycin followed by fecal transplant versus tapering oral vancomycin. Bethesda, MD: National Library of Medicine; 2000. NLM identifier: NCT01226992. Available at: http://clinicaltrials.gov/ct2/show/NCT01226992. Accessed July 1, 2014.
- Tel‐Aviv Sourasky Medical Center. Transplantation of fecal microbiota for Clostridium difficile infection. Bethesda, MD: National Library of Medicine; 2000. NLM identifier: NCT01958463. Available at: http://clinicaltrials.gov/ct2/show/NCT01958463. Accessed July 1, 2014.
- Rebiotix Inc. Microbiota restoration therapy for recurrent Clostridium difficile‐associated diarrhea (PUNCH CD). Bethesda, MD: National Library of Medicine; 2000. NLM identifier: NCT01925417. Available at: http://clinicaltrials.gov/ct2/show/NCT01925417. Accessed July 1, 2014.
- Hadassah Medical Organization. Efficacy and safety of fecal microbiota transplantation for severe Clostridium difficile‐associated colitis. Bethesda, MD: National Library of Medicine; 2000. NLM identifier: NCT01959048. Available at: http://clinicaltrials.gov/ct2/show/NCT01959048. Accessed July 1, 2014.
- University Hospital Tuebingen. Fecal microbiota transplantation in recurrent or refractory Clostridium difficile colitis (TOCSIN). Bethesda, MD: National Library of Medicine; 2000. NLM identifier: NCT01942447. Available at: http://clinicaltrials.gov/ct2/show/NCT01942447. Accessed July 1, 2014.
- Duke University. Stool transplants to treat refractory Clostridium difficile colitis. Bethesda, MD: National Library of Medicine; 2000. NLM identifier: NCT02127398. Available at: http://clinicaltrials.gov/ct2/show/NCT02127398. Accessed July 1, 2014.
Although there are numerous variables to consider in designing a protocol, as discussed further below, it is encouraging that FMT appears to be highly effective regardless of the specific details of the protocol.[28] If the first procedure fails, evidence suggests a second or third treatment can be quite effective.[28] In a recent advance, successful FMT via administration of frozen stool oral capsules has been demonstrated,[31] which potentially removes many system‐ and patient‐level barriers to receipt of this treatment.
CLINICAL EVIDENCE FOR EFFICACY OF FMT IN TREATMENT OF CDI
Recurrent CDI
The clinical evidence for FMT is most robust for recurrent CDI, consisting of case reports or case series, recently aggregated by 2 large systematic reviews, as well as several clinical trials.[23, 29] Gough et al. published the larger of the 2 reviews with data from 317 patients treated via FMT for recurrent CDI,[23] including FMT via retention enema (35%), colonoscopic infusion (42%), and gastric infusion (23%). Though the authors noted differences in resolution proportions among routes of infusion, types of donors, and types of infusates, it is not possible to draw definite conclusions form these data given their anecdotal nature. Regardless of the specific protocol's details, 92% of patients in the review had resolution of recurrent CDI overall after 1 or more treatments, with 89% improving after only 1 treatment. Another systematic review of FMT, both for CDI and non‐CDI indications, reinforced its efficacy in CDI and overall benign safety profile.[32] Other individual case series and reports of FMT for CDI not included in these reviews have been published; they too demonstrate an excellent resolution rate.[33, 34, 35, 36, 37, 38] As with any case reports/series, generalizing from these data to arrive at conclusions about the safety and efficacy of FMT for CDI is limited by potential confounding and publication bias; thus, there emerged a need for high‐quality prospective trials.
The first randomized, controlled clinical trial (RCT) of FMT for recurrent CDI was reported in 2013.[39] Three treatment groups were compared: vancomycin for 5 days followed by FMT (n=16), vancomycin alone for 14 days (n=13), or vancomycin for 14 days with bowel lavage (n=13). Despite a strict definition of cure (absence of diarrhea or persistent diarrhea from another cause with 3 consecutive negative stool tests for C difficile toxin), the study was stopped early after an interim analysis due to resolution of CDI in 94% of patients in the FMT arm (81% after just 1 infusion) versus 23% to 31% in the others. Off‐protocol FMT was offered to the patients in the other 2 groups and 83% of them were also cured.
Youngster et al. conducted a pilot RCT with 10 patients in each group, where patients were randomized to receive FMT via either colonoscopy or nasogastric tube from a frozen fecal suspension, and no difference in efficacy was seen between administration routes, with an overall cure rate of 90%.[40] Subsequently, Youngster et al. conducted an open‐label noncomparative study with frozen fecal capsules for FMT in 20 patients with recurrent CDI.[31] Resolution occurred in 14 (70%) patients after a single treatment, and 4 of the 6 nonresponders had resolution upon retreatment for an overall efficacy of 90%.
Finally, Cammarota et al. conducted an open‐label RCT on FMT for recurrent CDI,[41] comparing FMT to a standard course of vancomycin for 10 days, followed by pulsed dosing every 2 to 3 days for 3 weeks. The study was stopped after a 1‐year interim analysis as 18 of 20 patients (90%) treated by FMT exhibited resolution of CDI‐associated diarrhea compared to only 5 of 19 patients (26%) in the vancomycin‐treated group (P<0.001).
Primary and Severe CDI
There are few data on the use of FMT for primary, nonrecurrent CDI aside from a few case reports, which are included in the data presented above. A mathematical model of CDI in an intensive care unit assessed the role of FMT on primary CDI,[42] and predicted a decreased median incidence of recurrent CDI in patients treated with FMT for primary CDI. In addition to the general limitations inherent in any mathematical model, the study had specific assumptions for model parameters that limited generalizability, such as lack of incorporation of known risk factors for CDI and assumed immediate, persistent disruption of the microbiota after any antimicrobial exposure until FMT occurred.[43]
Lagier et al.[44] conducted a nonrandomized, open‐label, before and after prospective study comparing mortality between 2 intervention periods: conventional antibiotic treatment for CDI versus early FMT via nasogastric infusion. This shift happened due to clinical need, as their hospital in Marseille developed a ribotype 027 outbreak with a dramatic global mortality rate (50.8%). Mortality in the FMT group was significantly less (64.4% vs 18.8%, P<0.01). This was an older cohort (mean age 84 years), suggesting that in an epidemic setting with a high mortality rate, early FMT may be beneficial, but one cannot extrapolate these data to support a position of early FMT for primary CDI in a nonepidemic setting.
Similarly, the evidence for use of FMT in severe CDI (defined in Table 1) consists of published case reports, which suggest efficacy.[45, 46, 47, 48] Similarly, the study by Lagier et al.[44] does not provide data on severity classification, but had a high mortality rate and found a benefit of FMT versus conventional therapy, suggesting that at least some patients presented with severe CDI and benefited. However, 1 documented death (discussed further below) following FMT for severe CDI highlights the need for caution before this treatment is used in that setting.[49]
Patient and Provider Perceptions Regarding Acceptability of FMT as a Treatment Option for CDI
A commonly cited reason for a limited role of FMT is the aesthetics of the treatment. However, few studies exist on the perceptions of patients and providers regarding FMT. Zipursky et al. surveyed 192 outpatients on their attitudes toward FMT using hypothetical case scenarios.[50] Only 1 patient had a history of CDI. The results were largely positive, with 81% of respondents agreeing to FMT for CDI. However, the need to handle stool and the nasogastric route of administration were identified as the most unappealing aspects of FMT. More respondents (90%, P=0.002) agreed to FMT when offered as a pill.
The same group of investigators undertook an electronic survey to examine physician attitudes toward FMT,[51] and found that 83 of 135 physicians (65%) in their sample had not offered or referred a patient for FMT. Frequent reasons for this included institutional barriers, concern that patients would find it too unappealing, and uncertainty regarding indications for FMT. Only 8% of physicians believed that patients would choose FMT if given the option. As the role of FMT in CDI continues to grow, it is likely that patient and provider perceptions and attitudes regarding this treatment will evolve to better align.
SAFETY OF FMT
Short‐term Complications
Serious adverse effects directly attributable to FMT in patients with normal immune function are uncommon. Symptoms of an irritable bowel (constipation, diarrhea, cramping, bloating) shortly after FMT are observed and usually last less than 48 hours.[23] A recent case series of immunocompromised patients (excluding those with inflammatory bowel disease [IBD]) treated for CDI with FMT did not find many adverse events in this group.[35] However, patients with IBD may have a different risk profile; the same case series noted adverse events occurred in 14% of IBD patients, who experienced disease flare requiring hospitalization in some cases.[35] No cases of septicemia or other infections were observed in this series. An increased risk of IBD flare, fever, and elevation in inflammatory markers following FMT has also been observed in other studies.[52, 53, 54] However, the interaction between IBD and the microbiome is complex, and a recent RCT for patients with ulcerative colitis (without CDI) treated via FMT did not show any significant adverse events.[55] FMT side effects may vary by the administration method and may be related to complications of the method itself rather than FMT (for example, misplacement of a nasogastric tube, perforation risk with colonoscopy).
Deaths following FMT are rare and often are not directly attributed to FMT. One reported death occurred as a result of aspiration pneumonia during sedation for colonoscopy for FMT.[35] In another case, a patient with severe CDI was treated with FMT, did not achieve cure, and developed toxic megacolon and shock, dying shortly after. The authors speculate that withdrawal of antibiotics with activity against CDI following FMT contributed to the outcome, rather than FMT itself.[49] FMT is largely untested in patients with severe CDI,[45, 46, 47, 48] and this fatal case of toxic megacolon warrants caution.
Long‐term Complications
The long‐term safety of FMT is unknown. There is an incomplete understanding of the interaction between the gut microbiome and the host, but this is a complex system, and associations with disease processes have been demonstrated. The gut microbiome may be associated with colon cancer, diabetes, obesity, and atopic disorders.[56] The role of FMT in contributing to these conditions is unknown. It is also not known whether targeted screening/selection of stool for infusion can mitigate these potential risks.
In the only study to capture long‐term outcomes after FMT, 77 patients were followed for 3 to 68 months (mean 17 months).[57] New conditions such as ovarian cancer, myocardial infarction, autoimmune disease, and stroke were observed. Although it is not possible to establish causality from this study or infer an increased risk of these conditions from FMT, the results underscore the need for long‐term follow‐up after FMT.
Regulatory Status
The increased use of FMT for CDI and interest in non‐CDI indications led the US Food and Drug Administration (FDA) in 2013 to publish an initial guidance statement regulating stool as a biologic agent.[58] However, subsequently, the United States Department of Health and Human Services' FDA issued guidance stating that it would exercise enforcement discretion for physicians administering FMT to treat patients with C difficile infections; thus, an investigational new drug approval is not required, but appropriate informed consent from the patient indicating that FMT is an investigational therapy is needed. Revision to this guidance is in progress.[59]
Future Directions
Expansion of the indications for FMT and use of synthetic and/or frozen stool are directions currently under active exploration. There are a number of clinical trials studying FMT for CDI underway that are not yet completed,[60, 61, 62, 63, 64, 65] and these may shed light on the safety and efficacy of FMT for primary CDI, severe CDI, and FMT as a preemptive therapy for high‐risk patients on antibiotics. Frozen stool preparations, often from a known set of prescreened donors and recently in capsule form, have been used for FMT and are gaining popularity.[31, 33] A synthetic intestinal microbiota suspension for use in FMT is currently being tested.[62] There also exists a nonprofit organization, OpenBiome (
CONCLUSIONS
Based on several prospective trials and observational data, FMT appears to be a safe and effective treatment for recurrent CDI that is superior to conventional approaches. Despite recent pivotal advances in the field of FMT, there remain many unanswered questions, and further research is needed to examine the optimal parameters, indications, and outcomes with FMT.
Disclosures
K.R. is supported by grants from the Claude D. Pepper Older Americans Independence Center (grant number AG‐024824) and the Michigan Institute for Clinical and Health Research (grant number 2UL1TR000433). N.S. is supported by a VA MERIT award. The contents of this article do not necessarily represent the views of the Department of Veterans Affairs. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The authors report no conflicts of interest.
Symptomatic Clostridium difficile infection (CDI) results when C difficile, a gram‐positive, obligate anaerobic bacillus, produces the cytotoxins TcdA and TcdB, causing epithelial and mucosal injury in the gastrointestinal tract.[1] Though C difficile was first identified in 1978 as the causative agent of pseudomembranous colitis, and several effective treatments have since been developed,[2] it remains a major nosocomial pathogen and is the most frequent infectious cause of healthcare‐associated, toxin‐mediated diarrhea. The incidence of CDI in the United States has increased dramatically, especially in hospitals and nursing homes, where there are now nearly 500,000 new cases and 30,000 deaths per year.[3, 4, 5, 6] This increased burden of disease is due both to the emergence of several strains that have led to a worldwide epidemic[7] and to a predilection for CDI in older adults, who constitute a growing proportion of hospitalized patients.[8] Ninety‐two percent of CDI‐related deaths occur in adults >65 years old,[9] and the risk of recurrent CDI is 2‐fold higher with each decade of life.[10] It is estimated that CDI is responsible for $1.5 billion in excess healthcare costs each year in the United States,[11] and that much of the additional cost and morbidity of CDI is due to recurrence, with around 83,000 recurrent cases per year.[6]
The human gut microbiota, which is a diverse ecosystem consisting of thousands of bacterial species,[12] protects against invasive pathogens such as C difficile.[13, 14] The pathogenesis of CDI requires disruption of the gut microbiota before onset of symptomatic disease,[15] and exposure to antibiotics is the most common precipitant (Figure 1).[16] Following exposure, the manifestations can vary from asymptomatic colonization, to a self‐limited diarrheal illness, to a fulminant, life‐threatening colitis.[1] Even among those who recover, recurrent disease is common.[10] A first recurrence will occur in 15% to 20% of successfully treated patients, a second recurrence will occur in 45% of those patients, and up to 5% of all patients enter a prolonged cycle of CDI with multiple recurrences.[17, 18, 19]
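To put these recurrence percentages in perspective, they can be chained into a rough expected cascade. The sketch below is illustrative only: the cohort size and the use of single point estimates (rather than the cited ranges) are assumptions, not data from the cited studies.

```python
# Illustrative back-of-envelope cascade of the recurrence rates quoted above,
# applied to a hypothetical cohort of successfully treated patients.

def recurrence_cascade(cohort, first_rate, second_rate):
    """Return expected (first recurrences, second recurrences) for a cohort."""
    first = cohort * first_rate     # patients with a first recurrence
    second = first * second_rate    # of those, patients with a second recurrence
    return first, second

# Upper ends of the quoted ranges: 20% first recurrence, 45% of those recur again.
first, second = recurrence_cascade(1000, 0.20, 0.45)
print(first)   # 200.0 expected first recurrences per 1000 treated patients
print(second)  # 90.0 expected second recurrences
```

Even with these crude assumptions, roughly 1 in 11 treated patients would be expected to reach a second recurrence, which is the point at which FMT is usually considered (Table 2).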

THE NEED FOR BETTER TREATMENT MODALITIES: RATIONALE
Conventional treatments (Table 1) utilize antibiotics with activity against C difficile,[20, 21] but these antibiotics also have activity against other gut bacteria, limiting the ability of the microbiota to fully recover following CDI and predisposing patients to recurrence.[22] Traditional treatments for CDI result in a high incidence of recurrence (35%), and up to 65% of these patients who are again treated with conventional approaches develop a chronic pattern of recurrent CDI.[23] Though other factors may also explain why patients have recurrence (such as low serum antibody response to C difficile toxins,[24] use of medications such as proton pump inhibitors,[10] and the specific strain of C difficile causing infection[10, 21]), restoration of the gut microbiome through fecal microbiota transplantation (FMT) is the treatment strategy that has garnered the most attention and has gained acceptance among practitioners in the treatment of recurrent CDI when conventional treatments have failed.[25] A review of the practices and evidence for use of FMT in the treatment of CDI in hospitalized patients is presented here, with recommendations shown in Table 2.
Table 1. Classification of CDI and usual treatments.

| Type of CDI | Associated Signs/Symptoms | Usual Treatment(s)[20] |
|---|---|---|
| Primary CDI, nonsevere | Diarrhea without signs of systemic infection, WBC <15,000 cells/mL, and serum creatinine <1.5 times the premorbid level | Metronidazole 500 mg by mouth 3 times daily for 10 to 14 days, OR vancomycin 125 mg by mouth 4 times daily for 10 to 14 days, OR fidaxomicin 200 mg by mouth twice daily for 10 days^a |
| Primary CDI, severe | Signs of systemic infection, and/or WBC ≥15,000 cells/mL, or serum creatinine ≥1.5 times the premorbid level | Vancomycin 125 mg by mouth 4 times daily for 10 to 14 days, OR fidaxomicin 200 mg by mouth twice daily for 10 days^a |
| Primary CDI, complicated | Signs of systemic infection including hypotension, ileus, or megacolon | Vancomycin 500 mg by mouth 4 times daily, AND vancomycin 500 mg by rectum 4 times daily, AND intravenous metronidazole 500 mg 3 times daily |
| Recurrent CDI | Return of symptoms with positive Clostridium difficile testing within 8 weeks of onset, but after initial symptoms resolved with treatment | First recurrence: same as initial treatment, based on severity. Second recurrence: start treatment based on severity, followed by a vancomycin pulsed and/or tapered regimen over 6 or more weeks |
Table 2. Recommendations on the use of FMT by type of CDI.

| Type of CDI | Recommendation on Use of FMT |
|---|---|
| Primary CDI, nonsevere | Insufficient data on safety/efficacy to make a recommendation; effective conventional treatments exist |
| Primary CDI, severe | Not recommended due to insufficient data on safety/efficacy, with documented adverse events |
| Primary CDI, complicated | Not recommended due to insufficient data on safety/efficacy, with documented adverse events |
| Recurrent CDI (usually second recurrence) | Recommended, based on data from case reports, systematic reviews, and 2 randomized controlled clinical trials demonstrating safety and efficacy |
OVERVIEW OF FMT
FMT is not new to modern times, as there are reports of its use in ancient China for various purposes.[26] It was first described as a treatment for pseudomembranous colitis in the 1950s,[27] and in the past several years the use of FMT for CDI has increasingly gained acceptance as a safe and effective treatment. The optimal protocol for FMT is unknown; there are numerous published methods of stool preparation, infusion, and recipient and donor preparation. Diluents include tap water, normal saline, or even yogurt.[23, 28, 29] Sites of instillation of the stool include the stomach, small intestine, and large intestine.[23, 29, 30] Methods of recipient preparation for the infusion include cessation of antibiotic therapy for 24 to 48 hours prior to FMT, a bowel preparation or lavage, and use of antimotility agents, such as loperamide, to aid in retention of transplanted stool.[28] Donors may include friends or family members of the patients or 1 or more universal donors for an entire center. In both cases, screening for blood‐borne and fecal pathogens is performed before one can donate stool, though the tests performed vary between centers. FMT has been performed in both inpatient and outpatient settings, and a published study that instructed patients on self‐administration of fecal enema at home also demonstrated success.[30]
Although there are numerous variables to consider in designing a protocol, as discussed further below, it is encouraging that FMT appears to be highly effective regardless of the specific details of the protocol.[28] If the first procedure fails, evidence suggests a second or third treatment can be quite effective.[28] In a recent advance, successful FMT via administration of frozen stool oral capsules has been demonstrated,[31] which potentially removes many system‐ and patient‐level barriers to receipt of this treatment.
CLINICAL EVIDENCE FOR EFFICACY OF FMT IN TREATMENT OF CDI
Recurrent CDI
The clinical evidence for FMT is most robust for recurrent CDI, consisting of case reports and case series, recently aggregated by 2 large systematic reviews, as well as several clinical trials.[23, 29] Gough et al. published the larger of the 2 reviews with data from 317 patients treated via FMT for recurrent CDI,[23] including FMT via retention enema (35%), colonoscopic infusion (42%), and gastric infusion (23%). Though the authors noted differences in resolution proportions among routes of infusion, types of donors, and types of infusates, it is not possible to draw definite conclusions from these data given their anecdotal nature. Regardless of the specifics of the protocol, 92% of patients in the review had resolution of recurrent CDI after 1 or more treatments, with 89% improving after only 1 treatment. Another systematic review of FMT, both for CDI and non‐CDI indications, reinforced its efficacy in CDI and overall benign safety profile.[32] Other individual case series and reports of FMT for CDI not included in these reviews have been published; they too demonstrate an excellent resolution rate.[33, 34, 35, 36, 37, 38] As with any case reports/series, generalizing from these data to arrive at conclusions about the safety and efficacy of FMT for CDI is limited by potential confounding and publication bias; thus, there emerged a need for high‐quality prospective trials.
The first randomized, controlled clinical trial (RCT) of FMT for recurrent CDI was reported in 2013.[39] Three treatment groups were compared: vancomycin for 5 days followed by FMT (n=16), vancomycin alone for 14 days (n=13), or vancomycin for 14 days with bowel lavage (n=13). Despite a strict definition of cure (absence of diarrhea or persistent diarrhea from another cause with 3 consecutive negative stool tests for C difficile toxin), the study was stopped early after an interim analysis due to resolution of CDI in 94% of patients in the FMT arm (81% after just 1 infusion) versus 23% to 31% in the others. Off‐protocol FMT was offered to the patients in the other 2 groups and 83% of them were also cured.
Youngster et al. conducted a pilot RCT with 10 patients in each group, where patients were randomized to receive FMT via either colonoscopy or nasogastric tube from a frozen fecal suspension, and no difference in efficacy was seen between administration routes, with an overall cure rate of 90%.[40] Subsequently, Youngster et al. conducted an open‐label noncomparative study with frozen fecal capsules for FMT in 20 patients with recurrent CDI.[31] Resolution occurred in 14 (70%) patients after a single treatment, and 4 of the 6 nonresponders had resolution upon retreatment for an overall efficacy of 90%.
Finally, Cammarota et al. conducted an open‐label RCT on FMT for recurrent CDI,[41] comparing FMT to a standard course of vancomycin for 10 days, followed by pulsed dosing every 2 to 3 days for 3 weeks. The study was stopped after a 1‐year interim analysis as 18 of 20 patients (90%) treated by FMT exhibited resolution of CDI‐associated diarrhea compared to only 5 of 19 patients (26%) in the vancomycin‐treated group (P<0.001).
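The reported significance in the Cammarota trial can be checked directly from the resolution counts (18 of 20 with FMT vs 5 of 19 with vancomycin). The sketch below applies a two-sided Fisher exact test implemented with only the Python standard library; this is an illustrative re-analysis, not necessarily the test the trial authors used.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables (at fixed margins) that are no
    more probable than the observed table."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, row1)

    def p_table(x):  # hypergeometric probability of x successes in row 1
        return comb(col1, x) * comb(n - col1, row1 - x) / denom

    p_obs = p_table(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    probs = [p_table(x) for x in range(lo, hi + 1)]
    return sum(p for p in probs if p <= p_obs * (1 + 1e-12))

# Cammarota et al.: 18/20 resolved with FMT vs 5/19 with vancomycin.
p = fisher_exact_two_sided(18, 2, 5, 14)
print(p < 0.001)  # True, consistent with the reported P<0.001
```

The exact two-sided p-value on these counts comes out on the order of 10^-4, so the trial's reported P<0.001 is reproducible from the published counts alone.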
Primary and Severe CDI
There are few data on the use of FMT for primary, nonrecurrent CDI aside from a few case reports, which are included in the data presented above. A mathematical model of CDI in an intensive care unit assessed the role of FMT on primary CDI,[42] and predicted a decreased median incidence of recurrent CDI in patients treated with FMT for primary CDI. In addition to the general limitations inherent in any mathematical model, the study had specific assumptions for model parameters that limited generalizability, such as lack of incorporation of known risk factors for CDI and assumed immediate, persistent disruption of the microbiota after any antimicrobial exposure until FMT occurred.[43]
Lagier et al.[44] conducted a nonrandomized, open‐label, before‐and‐after prospective study comparing mortality between 2 intervention periods: conventional antibiotic treatment for CDI versus early FMT via nasogastric infusion. This shift happened due to clinical need, as their hospital in Marseille developed a ribotype 027 outbreak with a dramatic global mortality rate (50.8%). Mortality was significantly lower in the FMT group (18.8% vs 64.4% with conventional treatment, P<0.01). This was an older cohort (mean age 84 years), suggesting that in an epidemic setting with a high mortality rate, early FMT may be beneficial, but one cannot extrapolate these data to support a position of early FMT for primary CDI in a nonepidemic setting.
Similarly, the evidence for use of FMT in severe CDI (defined in Table 1) consists of published case reports, which suggest efficacy.[45, 46, 47, 48] The study by Lagier et al.[44] does not provide data on severity classification, but it had a high mortality rate and found a benefit of FMT versus conventional therapy, suggesting that at least some patients presented with severe CDI and benefited. However, 1 documented death (discussed further below) following FMT for severe CDI highlights the need for caution before this treatment is used in that setting.[49]
Patient and Provider Perceptions Regarding Acceptability of FMT as a Treatment Option for CDI
A commonly cited reason for a limited role of FMT is the aesthetics of the treatment. However, few studies exist on the perceptions of patients and providers regarding FMT. Zipursky et al. surveyed 192 outpatients on their attitudes toward FMT using hypothetical case scenarios.[50] Only 1 patient had a history of CDI. The results were largely positive, with 81% of respondents agreeing to FMT for CDI. However, the need to handle stool and the nasogastric route of administration were identified as the most unappealing aspects of FMT. More respondents (90%, P=0.002) agreed to FMT when offered as a pill.
The same group of investigators undertook an electronic survey to examine physician attitudes toward FMT,[51] and found that 83 of 135 physicians (65%) in their sample had not offered or referred a patient for FMT. Frequent reasons for this included institutional barriers, concern that patients would find it too unappealing, and uncertainty regarding indications for FMT. Only 8% of physicians believed that patients would choose FMT if given the option. As the role of FMT in CDI continues to grow, it is likely that patient and provider perceptions and attitudes regarding this treatment will evolve to better align.
SAFETY OF FMT
Short‐term Complications
Serious adverse effects directly attributable to FMT in patients with normal immune function are uncommon. Irritable bowel–like symptoms (constipation, diarrhea, cramping, bloating) are observed shortly after FMT and usually last less than 48 hours.[23] A recent case series of immunocompromised patients (excluding those with inflammatory bowel disease [IBD]) treated for CDI with FMT found few adverse events in this group.[35] However, patients with IBD may have a different risk profile; the same case series noted adverse events in 14% of IBD patients, who in some cases experienced disease flare requiring hospitalization.[35] No cases of septicemia or other infections were observed in this series. An increased risk of IBD flare, fever, and elevation in inflammatory markers following FMT has also been observed in other studies.[52, 53, 54] However, the interaction between IBD and the microbiome is complex, and a recent RCT of FMT for patients with ulcerative colitis (without CDI) did not show any significant adverse events.[55] FMT side effects may vary by administration method and may be related to complications of the method itself rather than of FMT (for example, misplacement of a nasogastric tube, perforation risk with colonoscopy).
Deaths following FMT are rare and often are not directly attributed to FMT. One reported death occurred as a result of aspiration pneumonia during sedation for colonoscopy for FMT.[35] In another case, a patient with severe CDI was treated with FMT, did not achieve cure, and developed toxic megacolon and shock, dying shortly after. The authors speculate that withdrawal of antibiotics with activity against CDI following FMT contributed to the outcome, rather than FMT itself.[49] FMT is largely untested in patients with severe CDI,[45, 46, 47, 48] and this fatal case of toxic megacolon warrants caution.
Long‐term Complications
The long‐term safety of FMT is unknown. There is an incomplete understanding of the interaction between the gut microbiome and the host, but this is a complex system, and associations with disease processes have been demonstrated. The gut microbiome may be associated with colon cancer, diabetes, obesity, and atopic disorders.[56] The role of FMT in contributing to these conditions is unknown. It is also not known whether targeted screening/selection of stool for infusion can mitigate these potential risks.
In the only study to capture long‐term outcomes after FMT, 77 patients were followed for 3 to 68 months (mean 17 months).[57] New conditions such as ovarian cancer, myocardial infarction, autoimmune disease, and stroke were observed. Although it is not possible to establish causality from this study or infer an increased risk of these conditions from FMT, the results underscore the need for long‐term follow‐up after FMT.
Regulatory Status
The increased use of FMT for CDI and interest in non‐CDI indications led the US Food and Drug Administration (FDA) in 2013 to publish an initial guidance statement regulating stool as a biologic agent.[58] However, the FDA subsequently issued guidance stating that it would exercise enforcement discretion for physicians administering FMT to treat patients with C difficile infection; thus, an investigational new drug (IND) application is not required, but appropriate informed consent from the patient, indicating that FMT is an investigational therapy, is needed. A revision to this guidance is in progress.[59]
Future Directions
Expansion of the indications for FMT and the use of synthetic and/or frozen stool are under active exploration. A number of clinical trials of FMT for CDI are underway,[60, 61, 62, 63, 64, 65] and these may shed light on the safety and efficacy of FMT for primary CDI and severe CDI, and on FMT as preemptive therapy for high‐risk patients on antibiotics. Frozen stool preparations, often from a known set of prescreened donors and recently in capsule form, have been used for FMT and are gaining popularity.[31, 33] A synthetic intestinal microbiota suspension for use in FMT is currently being tested.[62] There is also a nonprofit stool bank, OpenBiome, that supplies prescreened donor stool for FMT.
CONCLUSIONS
Based on several prospective trials and observational data, FMT appears to be a safe and effective treatment for recurrent CDI that is superior to conventional approaches. Despite recent pivotal advances in the field of FMT, there remain many unanswered questions, and further research is needed to examine the optimal parameters, indications, and outcomes with FMT.
Disclosures
K.R. is supported by grants from the Claude D. Pepper Older Americans Independence Center (grant number AG‐024824) and the Michigan Institute for Clinical and Health Research (grant number 2UL1TR000433). N.S. is supported by a VA MERIT award. The contents of this article do not necessarily represent the views of the Department of Veterans Affairs. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The authors report no conflicts of interest.
- Emergence of Clostridium difficile‐associated disease in North America and Europe. Clin Microbiol Infect. 2006;12:2–18.
- Antibiotic‐associated pseudomembranous colitis due to toxin‐producing clostridia. N Engl J Med. 1978;298(10):531–534.
- Clostridium difficile infection in Ohio hospitals and nursing homes during 2006. Infect Control Hosp Epidemiol. 2009;30(6):526–533.
- Attributable burden of hospital‐onset Clostridium difficile infection: a propensity score matching study. Infect Control Hosp Epidemiol. 2013;34(6):588–596.
- Centers for Disease Control and Prevention. Vital Signs. Making health care safer. Stopping C. difficile infections. Available at: http://www.cdc.gov/VitalSigns/Hai/StoppingCdifficile. Accessed January 15, 2015.
- Burden of Clostridium difficile infection in the United States. N Engl J Med. 2015;372(9):825–834.
- Emergence and global spread of epidemic healthcare‐associated Clostridium difficile. Nat Genet. 2013;45(1):109–113.
- Effect of age on treatment outcomes in Clostridium difficile infection. J Am Geriatr Soc. 2013;61(2):222–230.
- Current status of Clostridium difficile infection epidemiology. Clin Infect Dis. 2012;55(suppl 2):S65–S70.
- Risk factors for recurrence, complications and mortality in Clostridium difficile infection: a systematic review. PLoS One. 2014;9(6):e98400.
- Health care‐associated infections: a meta‐analysis of costs and financial impact on the US health care system. JAMA Intern Med. 2013;173(22):2039–2046.
- Human gut microbiome viewed across age and geography. Nature. 2012;486(7402):222–227.
- Colonization resistance of the digestive tract in conventional and antibiotic‐treated mice. Epidemiol Infect. 1971;69(3):405–411.
- Colonization resistance. Antimicrob Agents Chemother. 1994;38(3):409.
- Role of the intestinal microbiota in resistance to colonization by Clostridium difficile. Gastroenterology. 2014;146(6):1547–1553.
- Antibiotic‐induced shifts in the mouse gut microbiome and metabolome increase susceptibility to Clostridium difficile infection. Nat Commun. 2014;5:3114.
- Fecal bacteriotherapy for recurrent Clostridium difficile infection. Anaerobe. 2009;15(6):285–289.
- Treatment of recurrent Clostridium difficile diarrhea. Gastroenterol Hepatol. 2006;2(3):203–208.
- Bacteriotherapy using fecal flora: toying with human motions. J Clin Gastroenterol. 2004;38(6):475–483.
- Clinical practice guidelines for Clostridium difficile infection in adults: 2010 update by the Society for Healthcare Epidemiology of America (SHEA) and the Infectious Diseases Society of America (IDSA). Infect Control Hosp Epidemiol. 2010;31(5):431–455.
- Fidaxomicin versus vancomycin for Clostridium difficile infection: meta‐analysis of pivotal randomized controlled trials. Clin Infect Dis. 2012;55(suppl 2):S93–S103.
- Decreased diversity of the fecal microbiome in recurrent Clostridium difficile‐associated diarrhea. J Infect Dis. 2008;197(3):435–438.
- Systematic review of intestinal microbiota transplantation (fecal bacteriotherapy) for recurrent Clostridium difficile infection. Clin Infect Dis. 2011;53(10):994–1002.
- Association between antibody response to toxin A and protection against recurrent Clostridium difficile diarrhoea. Lancet. 2001;357(9251):189–193.
- Treatment approaches including fecal microbiota transplantation for recurrent Clostridium difficile infection (RCDI) among infectious disease physicians. Anaerobe. 2013;24:20–24.
- Should we standardize the 1,700‐year‐old fecal microbiota transplantation? Am J Gastroenterol. 2012;107(11):1755.
- Fecal enema as an adjunct in the treatment of pseudomembranous enterocolitis. Surgery. 1958;44(5):854–859.
- Treating Clostridium difficile infection with fecal microbiota transplantation. Clin Gastroenterol Hepatol. 2011;9(12):1044–1049.
- Fecal microbiota transplantation for Clostridium difficile infection: systematic review and meta‐analysis. Am J Gastroenterol. 2013;108(4):500–508.
- Success of self‐administered home fecal transplantation for chronic Clostridium difficile infection. Clin Gastroenterol Hepatol. 2010;8(5):471–473.
- Oral, capsulized, frozen fecal microbiota transplantation for relapsing Clostridium difficile infection. JAMA. 2014;312(17):1772–1778.
- Systematic review: faecal microbiota transplantation therapy for digestive and nondigestive disorders in adults and children. Aliment Pharmacol Ther. 2014;39(10):1003–1032.
- Standardized frozen preparation for transplantation of fecal microbiota for recurrent Clostridium difficile infection. Am J Gastroenterol. 2012;107(5):761–767.
- Fecal transplant via retention enema for refractory or recurrent Clostridium difficile infection. Arch Intern Med. 2012;172(2):191–193.
- Fecal microbiota transplant for treatment of Clostridium difficile infection in immunocompromised patients. Am J Gastroenterol. 2014;109(7):1065–1071.
- Efficacy of combined jejunal and colonic fecal microbiota transplantation for recurrent Clostridium difficile infection. Clin Gastroenterol Hepatol. 2014;12(9):1572–1576.
- Fecal microbiota transplantation for refractory Clostridium difficile colitis in solid organ transplant recipients. Am J Transplant. 2014;14(2):477–480.
- Faecal microbiota transplantation and bacteriotherapy for recurrent Clostridium difficile infection: a retrospective evaluation of 31 patients. Scand J Infect Dis. 2014;46(2):89–97.
- Duodenal infusion of donor feces for recurrent Clostridium difficile. N Engl J Med. 2013;368(5):407–415.
- Fecal microbiota transplant for relapsing Clostridium difficile infection using a frozen inoculum from unrelated donors: a randomized, open‐label, controlled pilot study. Clin Infect Dis. 2014;58(11):1515–1522.
- Randomised clinical trial: faecal microbiota transplantation by colonoscopy vs. vancomycin for the treatment of recurrent Clostridium difficile infection. Aliment Pharmacol Ther. 2015;41(9):835–843.
- A mathematical model to evaluate the routine use of fecal microbiota transplantation to prevent incident and recurrent Clostridium difficile infection. Infect Control Hosp Epidemiol. 2013;35(1):18–27.
- Commentary: fecal microbiota therapy: ready for prime time? Infect Control Hosp Epidemiol. 2014;35(1):28–30.
- Dramatic reduction in Clostridium difficile ribotype 027‐associated mortality with early fecal transplantation by the nasogastric route: a preliminary report. Eur J Clin Microbiol Infect Dis. 2015;34(8):1597–1601.
- Fecal microbiota transplantation for fulminant Clostridium difficile infection in an allogeneic stem cell transplant patient. Transplant Infect Dis. 2012;14(6):E161–E165.
- Faecal microbiota transplantation for severe Clostridium difficile infection in the intensive care unit. Eur J Gastroenterol Hepatol. 2013;25(2):255–257.
- Successful colonoscopic fecal transplant for severe acute Clostridium difficile pseudomembranous colitis. Rev Gastroenterol Mex. 2011;77(1):40–42.
- Successful treatment of fulminant Clostridium difficile infection with fecal bacteriotherapy. Ann Intern Med. 2008;148(8):632–633.
- Tempered enthusiasm for fecal transplant. Clin Infect Dis. 2014;59(2):319.
- Patient attitudes toward the use of fecal microbiota transplantation in the treatment of recurrent Clostridium difficile infection. Clin Infect Dis. 2012;55(12):1652–1658.
- Physician attitudes toward the use of fecal microbiota transplantation for the treatment of recurrent Clostridium difficile infection. Can J Gastroenterol Hepatol. 2014;28(6):319–324.
- Transient flare of ulcerative colitis after fecal microbiota transplantation for recurrent Clostridium difficile infection. Clin Gastroenterol Hepatol. 2013;11(8):1036–1038.
- Temporal bacterial community dynamics vary among ulcerative colitis patients after fecal microbiota transplantation. Am J Gastroenterol. 2013;108(10):1620–1630.
- Alteration of intestinal dysbiosis by fecal microbiota transplantation does not induce remission in patients with chronic active ulcerative colitis. Inflamm Bowel Dis. 2013;19(10):2155–2165.
- Findings from a randomized controlled trial of fecal transplantation for patients with ulcerative colitis. Gastroenterology. 2015;149(1):110–118.e4.
- Gut microbiota in health and disease. Physiol Rev. 2010;90(3):859–904.
- Long‐term follow‐up of colonoscopic fecal microbiota transplant for recurrent Clostridium difficile infection. Am J Gastroenterol. 2012;107(7):1079–1087.
- US Food and Drug Administration. Guidance for industry: enforcement policy regarding investigational new drug requirements for use of fecal microbiota for transplantation to treat Clostridium difficile infection not responsive to standard therapies. Available at: http://www.fda.gov/biologicsbloodvaccines/guidancecomplianceregulatoryinformation/guidances/vaccines/ucm361379.htm. Accessed July 1, 2014.
- US Food and Drug Administration. Draft guidance for industry: enforcement policy regarding investigational new drug requirements for use of fecal microbiota for transplantation to treat Clostridium difficile infection not responsive to standard therapies. Available at: http://www.fda.gov/biologicsbloodvaccines/guidancecomplianceregulatoryinformation/guidances/vaccines/ucm387023.htm. Accessed July 1, 2014.
- University Health Network Toronto. Oral vancomycin followed by fecal transplant versus tapering oral vancomycin. Bethesda, MD: National Library of Medicine; 2000. NLM identifier: NCT01226992. Available at: http://clinicaltrials.gov/ct2/show/NCT01226992. Accessed July 1, 2014.
- Tel‐Aviv Sourasky Medical Center. Transplantation of fecal microbiota for Clostridium difficile infection. Bethesda, MD: National Library of Medicine; 2000. NLM identifier: NCT01958463. Available at: http://clinicaltrials.gov/ct2/show/NCT01958463. Accessed July 1, 2014.
- Rebiotix Inc. Microbiota restoration therapy for recurrent Clostridium difficile‐associated diarrhea (PUNCH CD). Bethesda, MD: National Library of Medicine; 2000. NLM identifier: NCT01925417. Available at: http://clinicaltrials.gov/ct2/show/NCT01925417. Accessed July 1, 2014.
- Hadassah Medical Organization. Efficacy and safety of fecal microbiota transplantation for severe Clostridium difficile‐associated colitis. Bethesda, MD: National Library of Medicine; 2000. NLM identifier: NCT01959048. Available at: http://clinicaltrials.gov/ct2/show/NCT01959048. Accessed July 1, 2014.
- University Hospital Tuebingen. Fecal microbiota transplantation in recurrent or refractory Clostridium difficile colitis (TOCSIN). Bethesda, MD: National Library of Medicine; 2000. NLM identifier: NCT01942447. Available at: http://clinicaltrials.gov/ct2/show/NCT01942447. Accessed July 1, 2014.
- Duke University. Stool transplants to treat refractory Clostridium difficile colitis. Bethesda, MD: National Library of Medicine; 2000. NLM identifier: NCT02127398. Available at: http://clinicaltrials.gov/ct2/show/NCT02127398. Accessed July 1, 2014.
Management of Locally Advanced Rectal Adenocarcinoma
Colorectal cancers are among the most common cancers worldwide, and there is a high mortality rate for advanced-stage disease. Approximately 132,000 new cases of colorectal cancer will be diagnosed in the United States in 2015, and approximately 40,000 of these cases will be primary rectal cancers. The incidence and mortality rates have been steadily declining over the past two decades, largely through advances in screening and improvements in treatment. However, rectal cancer remains a significant cause of morbidity and mortality in the United States and worldwide.
Lactic acidosis: Clinical implications and management strategies
Physicians are paying more attention to serum lactate levels in hospitalized patients than in the past, especially with the advent of point-of-care testing. Elevated lactate levels are associated with tissue hypoxia and hypoperfusion but can also be found in a number of other conditions. Therefore, confusion can arise as to how to interpret elevated levels and subsequently manage these patients in a variety of settings.
In this review, we discuss the mechanisms underlying lactic acidosis, its prognostic implications, and its use as a therapeutic target in treating patients in septic shock and other serious disorders.
LACTATE IS A PRODUCT OF ANAEROBIC GLYCOLYSIS
Lactate, or lactic acid, is produced from pyruvate as an end product of glycolysis under anaerobic conditions (Figure 1). It is produced in most tissues in the body, but primarily in skeletal muscle, brain, intestine, and red blood cells. During times of stress, lactate is also produced in the lungs, white blood cells, and splanchnic organs.
Most lactate in the blood is cleared by the liver, where it is the substrate for gluconeogenesis, and a small amount is cleared by the kidneys.1,2 The entire pathway by which lactate is produced and converted back to glucose is called the Cori cycle.
NORMAL LEVELS ARE LESS THAN ABOUT 2.0 MMOL/L
In this review, we will present lactate levels in the SI units of mmol/L (1 mmol/L = 9 mg/dL).
Basal lactate production is approximately 0.8 mmol/kg body weight/hour. The average normal arterial blood lactate level is approximately 0.620 mmol/L and the venous level is slightly higher at 0.997 mmol/L,3 but overall, arterial and venous lactate levels correlate well.
Normal lactate levels are less than 2 mmol/L,4 intermediate levels range from 2 to less than 4 mmol/L, and high levels are 4 mmol/L or higher.5
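The conversion factor and the cutoffs above can be collected into a small helper. The sketch below is illustrative only; the function names and the three-band classification are ours, taken directly from the thresholds stated in the text:

```python
# Lactate unit conversion and banding, per the values above:
# 1 mmol/L = 9 mg/dL; normal < 2.0 mmol/L, intermediate 2.0 to < 4.0,
# high >= 4.0 (hypothetical helper names, not from the article).

MG_DL_PER_MMOL_L = 9.0

def mmol_to_mg_dl(lactate_mmol_l: float) -> float:
    """Convert a lactate level from mmol/L to mg/dL."""
    return lactate_mmol_l * MG_DL_PER_MMOL_L

def classify_lactate(lactate_mmol_l: float) -> str:
    """Band a lactate level using the cutoffs cited in the text."""
    if lactate_mmol_l < 2.0:
        return "normal"
    if lactate_mmol_l < 4.0:
        return "intermediate"
    return "high"

print(mmol_to_mg_dl(2.0))     # 18.0
print(classify_lactate(3.1))  # intermediate
```

Note that the bands are half-open at the cutoffs: a level of exactly 2.0 mmol/L is intermediate and exactly 4.0 mmol/L is high, matching the "less than 2" and "4 or higher" wording above.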
To minimize variations in measurement, blood samples should be drawn without a tourniquet into tubes containing fluoride, placed on ice, and processed quickly (ideally within 15 minutes).
INCREASED PRODUCTION, DECREASED CLEARANCE, OR BOTH
An elevated lactate level can be the result of increased production, decreased clearance, or both (as in liver dysfunction).
Type A lactic acidosis—due to hypoperfusion and hypoxia—occurs when there is a mismatch between oxygen delivery and consumption, with resultant anaerobic glycolysis.
The guidelines from the Surviving Sepsis Campaign6 emphasize using lactate levels to diagnose patients with sepsis-induced hypoperfusion. However, hyperlactatemia can indicate inadequate oxygen delivery due to any type of shock (Table 1).
Type B lactic acidosis—not due to hypoperfusion—occurs in a variety of conditions (Table 1), including liver disease, malignancy, use of certain medications (eg, metformin, epinephrine), total parenteral nutrition, human immunodeficiency virus infection, thiamine deficiency, mitochondrial myopathies, and congenital lactic acidosis.1–3,7 Yet other causes include trauma, excessive exercise, diabetic ketoacidosis, ethanol intoxication, dysfunction of the enzyme pyruvate dehydrogenase, and increased muscle degradation leading to increased production of pyruvate. In these latter scenarios, glucose metabolism exceeds the oxidation capacity of the mitochondria, and the rise in pyruvate concentration drives lactate production.8,9 Mitochondrial dysfunction and subsequent deficits in cellular oxygen use can also result in persistently high lactate levels.10
In some situations, patients with mildly elevated lactic acid levels in type B lactic acidosis can be monitored to ensure stability, rather than be treated aggressively.
HIGHER LEVELS AND LOWER CLEARANCE PREDICT DEATH
The higher the lactate level and the slower the rate of normalization (lactate clearance), the higher the risk of death.
Lactate levels and mortality rate
Shapiro et al11 showed that increases in lactate level are associated with proportional increases in the mortality rate. Mikkelsen et al12 showed that intermediate levels (2.0–3.9 mmol/L) and high levels (≥ 4 mmol/L) of serum lactate are associated with increased risk of death independent of organ failure and shock. Patients with mildly elevated and intermediate lactate levels and sepsis have higher rates of in-hospital and 30-day mortality, which correlate with the baseline lactate level.13
In a post hoc analysis of a randomized controlled trial, patients with septic shock who presented to the emergency department with hypotension and a lactate level higher than 2 mmol/L had a significantly higher in-hospital mortality rate than those who presented with hypotension and a lactate level of 2 mmol/L or less (26% vs 9%, P < .0001).14 These data suggest that elevated lactate levels may have a significant prognostic role, independent of blood pressure.
Slower clearance
The prognostic implications of lactate clearance (reductions in lactate levels over time, as opposed to a single value in time), have also been evaluated.
Lactate clearance of at least 10% at 6 hours after presentation has been associated with a lower mortality rate than nonclearance (19% vs 60%) in patients with sepsis or septic shock with elevated levels.15–17 Similar findings have been reported in a general intensive care unit population,18 as well as in a surgical intensive care population.19
Puskarich et al20 have also shown that lactate normalization to less than 2 mmol/L during early sepsis resuscitation is the strongest predictor of survival (odds ratio [OR] 5.2), followed by lactate clearance of 50% (OR 4.0) within the first 6 hours of presentation. Not only is lactate clearance associated with improved outcomes, but a faster rate of clearance after initial presentation is also beneficial.15,16,18
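Lactate clearance, as used in these studies, is the percentage fall from the initial value over the measurement interval. A minimal sketch under that definition (the function name and the 10% flag are ours, mirroring the 6-hour threshold above):

```python
def lactate_clearance(initial_mmol_l: float, followup_mmol_l: float) -> float:
    """Percentage fall in lactate from the initial measurement.

    clearance (%) = (initial - followup) / initial * 100
    A negative result means the lactate level rose.
    """
    if initial_mmol_l <= 0:
        raise ValueError("initial lactate must be positive")
    return (initial_mmol_l - followup_mmol_l) / initial_mmol_l * 100.0

# Example: a fall from 4.0 to 3.0 mmol/L at 6 hours is 25% clearance,
# meeting the >= 10% threshold associated with lower mortality.
c = lactate_clearance(4.0, 3.0)
print(c, c >= 10.0)  # 25.0 True
```

A single value cannot distinguish slow clearance from nonclearance, which is why serial measurements over the first hours of resuscitation carry the prognostic information.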
Lactate clearance over a longer period (> 6 hours) has not been studied in patients with septic shock. However, in the general intensive care unit population, therapy guided by lactate clearance for the first 8 hours after presentation has shown a reduction in mortality rate.18 There are no data available on outcomes of lactate-directed therapy beyond 8 hours, but lactate concentration and lactate clearance at 24 hours correlate with the 28-day mortality rate.21
Cryptic shock
Cryptic shock describes a state in a subgroup of patients who have elevated lactate levels and global tissue hypoxia despite being normotensive or even hypertensive. These patients have a higher mortality rate independent of blood pressure. Jansen et al18 found that patients with a lactate level higher than 4 mmol/L and preserved blood pressure had a mortality rate of 15%, while those without shock or hyperlactatemia had a mortality rate of 2.5%. In addition, patients with an elevated lactate level in the absence of hypotension have mortality rates similar to those in patients with high lactate levels and hypotension refractory to fluid boluses, suggesting the presence of tissue hypoxia even in these normotensive patients.6
HOW TO APPROACH AN ELEVATED LACTATE LEVEL
An elevated lactate level should prompt an evaluation for causes of decreased oxygen delivery, due either to a systemic low-flow state (as a result of decreased cardiac output) or severe anemia, or to regionally decreased perfusion (eg, limb or mesenteric ischemia). If tissue hypoxia is ruled out after an exhaustive workup, consideration should be given to causes of hyperlactatemia without concomitant tissue hypoxia (type B lactic acidosis).
Treatment differs depending on the underlying mechanism of the lactate elevation; nevertheless, treatment is mostly related to optimizing oxygen delivery by giving fluids, packed red blood cells, and vasopressors or inotropic agents, or both (Figure 2). The specific treatment differs based on the shock state, but there are similarities that can guide the clinician.
FLUID SUPPORT
Giving fluids, with a goal of improving cardiac output, remains a cornerstone of initial therapy for most shock states.22,23
How much fluid?
Fluids should be given until the patient is no longer preload-dependent, although there is much debate about which assessment strategy should be used to determine if cardiac output will improve with more fluid (ie, fluid-responsiveness).24 In many cases, fluid resuscitation alone may be enough to restore hemodynamic stability, improve tissue perfusion, and reduce elevated lactate concentrations.25
The decision to give more fluids should not be made lightly, though, as a more positive fluid balance early in the course of septic shock and over 4 days has been associated with a higher mortality rate.26 Additionally, pushing fluids in patients with cardiogenic shock due to impaired left ventricular systolic function may lead to or worsen pulmonary edema. Therefore, the indiscriminate use of fluids should be avoided.
Which fluids?
Despite years of research, controversy persists about whether crystalloids or colloids are better for resuscitation. Randomized trials in heterogeneous intensive care unit patients have not detected differences in 28-day mortality rates between those allocated to crystalloids or 4% albumin27 and those allocated to crystalloids or hydroxyethyl starch.28
Hydroxyethyl starch may not be best. In a study of patients with severe sepsis, those randomized to receive hydroxyethyl starch had a higher 90-day mortality rate than patients randomized to crystalloids (51% vs 43%, P = .03).29 A sequential prospective before-and-after study did not detect a difference in the time to normalization (< 2.2 mmol/L) of lactate (P = .68) or cessation of vasopressors (P = .11) in patients with severe sepsis who received fluid resuscitation with crystalloids, gelatin, or hydroxyethyl starch. More patients who received hydroxyethyl starch in these studies developed acute kidney injury than those receiving crystalloids.28–30
Taken together, these data strongly suggest hydroxyethyl starch should not be used for fluid resuscitation in the intensive care unit.
Normal saline or albumin? Although some data suggest that albumin may be preferable to 0.9% sodium chloride in patients with severe sepsis,31,32 these analyses should be viewed as hypothesis-generating. There do not seem to be differences between fluid types in terms of subsequent serum lactate concentrations or achievement of lactate clearance goals.28–30 Until further studies are completed, both albumin and crystalloids are reasonable for resuscitation.
Caironi et al33 performed an open-label study comparing albumin replacement (with a goal serum albumin concentration of 3 g/dL) plus a crystalloid solution vs a crystalloid solution alone in patients with severe sepsis or septic shock. They detected no difference between the albumin and crystalloid groups in mortality rates at 28 days (31.8% vs 32.0%, P = .94) or 90 days (41.1% vs 43.6%, P = .29). However, patients in the albumin group had a shorter time to cessation of vasoactive agents (median 3 vs 4 days, P = .007) and lower cardiovascular Sequential Organ Failure Assessment subscores (median 1.20 vs 1.42, P = .03), and more frequently achieved a mean arterial pressure of at least 65 mm Hg within 6 hours of randomization (86.0% vs 82.5%, P = .04).
Although serum lactate levels were lower in the albumin group at baseline (1.7 mmol/L vs 1.8 mmol/L, P = .05), inspection of the data appears to show a similar daily lactate clearance rate between groups over the first 7 study days (although these data were not analyzed by the authors). Achievement of a lactate level lower than 2 mmol/L on the first day of therapy was not significantly different between groups (73.4% vs 72.5%, P = .11).33
In a post hoc subgroup analysis, patients with septic shock at baseline randomized to albumin had a lower 90-day mortality rate than patients randomized to crystalloid solutions (RR 0.87, 95% CI 0.77–0.99). There was no difference in the 90-day mortality rate in patients without septic shock (RR 1.13, 95% CI 0.92–1.39, P = .03 for heterogeneity).33
These data suggest that albumin replacement may not improve outcomes in patients with severe sepsis, but may have advantages in terms of hemodynamic variables (and potentially mortality) in patients with septic shock. The role of albumin replacement in patients with septic shock warrants further study.
VASOPRESSORS
Vasopressors, inotropes, or both should be given to patients who have signs of hypoperfusion (including elevated lactate levels) despite preload optimization or ongoing fluid administration. The most appropriate drug depends on the goal: vasopressors are used to increase systemic vascular resistance, while inotropes are used to improve cardiac output and oxygen delivery.
Blood pressure target
The Surviving Sepsis Campaign guidelines recommend a mean arterial blood pressure target of at least 65 mm Hg during initial resuscitation and when vasopressors are used in patients with septic shock.22 This recommendation is based on small studies that did not show differences in serum lactate levels or regional blood flow when the mean arterial pressure was raised above 65 mm Hg with norepinephrine.34,35 However, the campaign guidelines note that the mean arterial pressure goal must be individualized in order to achieve optimal perfusion.
A large, open-label trial36 detected no difference in 28-day mortality rates in patients with septic shock between those allocated to a mean arterial pressure goal of 80 to 85 mm Hg or 65 to 70 mm Hg (36.6% vs 34.0%, P = .57). Although lactate levels did not differ between groups, the incidence of new-onset atrial fibrillation was higher in the higher-target group (6.7% vs 2.8%, P = .02). Fewer patients with chronic hypertension needed renal replacement therapy in the higher pressure group, further emphasizing the need to individualize the mean arterial pressure goal for patients in shock.36
Which vasopressor agent?
Dopamine and norepinephrine have traditionally been the preferred initial vasopressors for patients with shock. Until recently there were few data to guide selection between the two, but this is changing.
In a 2010 study of 1,679 patients with shock requiring vasopressors, there was no difference in the 28-day mortality rate between patients randomized to dopamine or norepinephrine (53% vs 49%, P = .10).37 Patients allocated to dopamine, though, had a higher incidence of arrhythmias (24% vs 12%, P < .001) and more frequently required open-label norepinephrine (26% vs 20%, P < .001). Although lactate levels and the time to achievement of a mean arterial pressure of 65 mm Hg were similar between groups, patients allocated to norepinephrine had more vasopressor-free days through day 28.
An a priori-planned subgroup analysis evaluated the influence of the type of shock on patient outcome. Patients with cardiogenic shock randomized to dopamine had a higher mortality rate than those randomized to norepinephrine (P = .03). However, the overall effect of treatment did not differ among the shock subgroups (interaction P = .87), suggesting that the reported differences in mortality according to subgroup may be spurious.
In a 2012 meta-analysis of patients with septic shock, dopamine use was associated with a higher mortality rate than norepinephrine use.38
In light of these data, norepinephrine should be preferred over dopamine as the initial vasopressor in most types of shock.
Epinephrine does not offer an outcome advantage over norepinephrine and may be associated with a higher incidence of adverse events.39–42 Indeed, in a study of patients with septic shock, lactate concentrations on the first day after randomization were significantly higher in patients allocated to epinephrine than in patients allocated to norepinephrine plus dobutamine.39 Similar effects on lactate concentrations with epinephrine were seen in patients with various types of shock40 and in those with cardiogenic shock.42
These differences in lactate concentrations may be directly attributable to epinephrine. Epinephrine can increase lactate concentrations through glycolysis and pyruvate dehydrogenase activation by stimulation of sodium-potassium ATPase activity via beta-2 adrenergic receptors in skeletal muscles,43 as well as decrease splanchnic perfusion.42,44,45 These effects may preclude using lactate clearance as a resuscitation goal in patients receiving epinephrine. Epinephrine is likely best reserved for patients with refractory shock,22 particularly those in whom cardiac output is known to be low.
Phenylephrine, essentially a pure vasoconstrictor, should be avoided in low cardiac output states and is best reserved for patients who develop a tachyarrhythmia on norepinephrine.22
Vasopressin, also a pure vasoconstrictor that should be avoided in low cardiac output states, has been best studied in patients with vasodilatory shock. Although controversy exists on the mortality benefits of vasopressin in vasodilatory shock, it is a relatively safe drug with consistent norepinephrine-sparing effects when added to existing norepinephrine therapy.46,47 In patients with less severe septic shock, including those with low lactate concentrations, adding vasopressin to norepinephrine instead of continuing norepinephrine alone may confer a mortality advantage.48
OTHER MEASURES TO OPTIMIZE OXYGEN DELIVERY
In circulatory shock from any cause, tissue oxygen demand exceeds oxygen delivery. Once arterial oxygenation and hemoglobin levels (by packed red blood cell transfusion) have been optimized, cardiac output is the critical determinant of oxygen delivery. Cardiac output may be augmented by ensuring adequate preload (by fluid resuscitation) or by giving inotropes or vasodilators.
The optimal cardiac output is difficult to define, and the exact marker for determining when cardiac output should be augmented is unclear. A strategy of increasing cardiac output to predefined “supranormal” levels was not associated with a lower mortality rate.49 Therefore, the decision to augment cardiac output must be individualized and will likely vary in the same patient over time.23
A reasonable approach to determining when augmentation of cardiac output is necessary was proposed in a study by Rivers et al.50 In that study, in patients randomized to early goal-directed therapy, inotropes were recommended when the central venous oxygen saturation (ScvO2) was below 70% despite adequate fluid resuscitation (central venous pressure ≥ 8 mm Hg) and a hematocrit higher than 30%.
When an inotrope is indicated to improve cardiac output, dobutamine is usually the preferred agent. Dobutamine has a shorter half-life (allowing for easier titration) and causes less hypotension (assuming preload has been optimized) than phosphodiesterase type III inhibitors such as milrinone.
Mechanical support devices, such as intra-aortic balloon counterpulsation, and vasodilators can also be used to improve tissue perfusion in selected patients with low cardiac output syndromes.
USING LACTATE LEVELS TO GUIDE THERAPY
Lactate levels above 4.0 mmol/L
Lactate may be a useful marker for determining whether organ dysfunction is present and, hence, what course of therapy should be given, especially in sepsis. A serum lactate level higher than 4.0 mmol/L has been used as the trigger to start aggressive resuscitation in patients with sepsis.50,51
Traditionally, as delineated by Rivers et al50 in their landmark study of early goal-directed therapy, this entailed placing an arterial line and a central line for hemodynamic monitoring, with specific interventions directed at increasing the central venous pressure, mean arterial pressure, and central venous oxygen saturation.50 However, a recent study in a similar population of patients with sepsis and elevated lactate levels found no significant advantage of protocol-based resuscitation over care provided according to physician judgment, and no significant benefit of central venous catheterization and hemodynamic monitoring in all patients.51
Lactate clearance: 10% or above at 8 hours?
Regardless of the approach chosen, decreasing lactate levels can be interpreted as an adequate response to the interventions provided. Indeed, several groups of investigators have demonstrated the merits of lactate clearance alone as a prognostic indicator in patients requiring hemodynamic support.
McNelis et al52 retrospectively evaluated 95 postsurgical patients who required hemodynamic monitoring and found that the slower the lactate clearance, the higher the mortality rate.
Given the prognostic implications of lactate clearance, investigators have evaluated whether lactate clearance could be used as a surrogate resuscitation goal for optimizing oxygen delivery. Using lactate clearance may have significant practical advantages over using central venous oxygen saturation, since it does not require a central venous catheter or continuous oximetric monitoring.
In a study comparing these two resuscitation end points, patients were randomized to a goal of either central venous oxygen saturation of 70% or more or lactate clearance of 10% or more within the first 6 hours after presentation as a marker of oxygen delivery.53 Mortality rates were similar with either strategy. Of note, only 10% of the patients actually required therapies to improve their oxygen delivery. Furthermore, there were no differences in the treatments given (including fluids, vasopressors, inotropes, packed red blood cells) throughout the treatment period.
These findings provide several insights. First, few patients admitted to the emergency department with severe sepsis and treated with an initial quantitative resuscitation protocol require additional therapy for augmenting oxygen delivery. Second, lactate clearance, in a setting where initial resuscitation with fluids and vasopressors restores adequate oxygen delivery for the majority of patients, is likely as good a target for resuscitation as central venous oxygen saturation.
This study, however, does not address the question of whether lactate clearance is useful as an additional marker of oxygen delivery (in conjunction with central venous oxygen saturation). Indeed, caution should be exercised when targeting central venous oxygen saturation goals alone, as patients with septic shock presenting with venous hyperoxia (central venous oxygen saturation > 89%) have been shown to have a higher mortality rate than patients with normoxia (central venous oxygen saturation 71%–89%).54
This was further demonstrated by Arnold et al in a study of patients presenting to the emergency department with severe sepsis.15 In this study, there was significant discordance between central venous oxygen saturation and lactate clearance: 79% of patients with less than 10% lactate clearance had a concomitant central venous oxygen saturation of 70% or greater.
Jansen et al18 evaluated the role of targeting lactate clearance in conjunction with central venous oxygen saturation monitoring. In this study, critically ill patients with elevated lactate and inadequate lactate clearance were randomized to usual care or to resuscitation to adequate lactate clearance (20% or more). The therapies to optimize oxygen delivery were given according to the central venous oxygen saturation. Overall, after adjustment for predefined risk factors, the in-hospital mortality rate was lower in the lactate clearance group. This may signify that patients with sepsis and central venous oxygen saturation of 70% or more may continue to have poor lactate clearance, warranting further treatment.
Taken together, serum lactate may be helpful for prognostication, for determining the course of therapy, and for quantifying tissue hypoperfusion for targeted therapies. Figure 2 presents our approach to an elevated lactate level. As in the study by Jansen et al,18 it seems reasonable to measure lactate levels every 2 hours for the first 8 hours of resuscitation in patients with type A lactic acidosis. These levels should be interpreted in the context of lactate clearance (at least 10%, but preferably 20%) and normalization, and treatment should follow an approach similar to the one outlined in Figure 2.
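The serial-measurement logic described above (levels drawn every 2 hours, interpreted against normalization and clearance targets) can be sketched as follows; the function and its return labels are illustrative, with the thresholds taken from the text:

```python
def assess_serial_lactate(levels_mmol_l: list[float],
                          min_clearance_pct: float = 10.0) -> str:
    """Interpret serial lactate measurements (eg, drawn every 2 hours)
    relative to the first (baseline) value."""
    baseline, latest = levels_mmol_l[0], levels_mmol_l[-1]
    if latest < 2.0:  # normalization: strongest predictor of survival (Puskarich et al)
        return "normalized"
    clearance = (baseline - latest) / baseline * 100.0
    return ("adequate clearance" if clearance >= min_clearance_pct
            else "inadequate clearance")

# Levels drawn every 2 hours during early resuscitation:
print(assess_serial_lactate([4.8, 4.1, 3.5, 3.0]))  # prints "adequate clearance"
```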
TREATING TYPE B LACTIC ACIDOSIS (NORMAL PERFUSION AND OXYGENATION)
Treating type B lactic acidosis is quite different because the goal is not to correct mismatches in oxygen consumption and delivery. Since most cases are due to underlying conditions such as malignancy or medications, treatment should be centered around eliminating the cause (eg, treat the malignancy, discontinue the offending medication). The main reason for treatment is to alleviate the harmful effects of acidosis. For example, acidosis can result in a negative inotropic effect.
Sodium bicarbonate, dichloroacetate, carbicarb, and tromethamine have all been studied in the management of type B lactic acidosis, with little success.55,56
Renal replacement therapy has had some success in drug-induced lactic acidosis.57,58
l-carnitine has had promising results in treating patients with human immunodeficiency virus infection, since these patients are carnitine-deficient and carnitine plays an important role in mitochondrial function.59
Thiamine and biotin deficiencies can occur in patients receiving total parenteral nutrition without vitamins and in patients who drink alcohol heavily and can cause lactic acidosis. These nutrients should be supplemented accordingly.
Treatment of mitochondrial disorders includes antioxidants (coenzyme Q10, vitamin C, vitamin E) and amino acids (l-arginine).60
- Andersen LW, Mackenhauer J, Roberts JC, Berg KM, Cocchi MN, Donnino MW. Etiology and therapeutic approach to elevated lactate levels. Mayo Clin Proc 2013; 88:1127–1140.
- Fuller BM, Dellinger RP. Lactate as a hemodynamic marker in the critically ill. Curr Opin Crit Care 2012; 18:267–272.
- Fall PJ, Szerlip HM. Lactic acidosis: from sour milk to septic shock. J Intensive Care Med 2005; 20:255–271.
- Kruse O, Grunnet N, Barfod C. Blood lactate as a predictor for in-hospital mortality in patients admitted acutely to hospital: a systematic review. Scand J Trauma Resusc Emerg Med 2011;19:74.
- Howell MD, Donnino M, Clardy P, Talmor D, Shapiro NI. Occult hypoperfusion and mortality in patients with suspected infection. Intensive Care Med 2007; 33:1892–1899.
- Puskarich MA, Trzeciak S, Shapiro NI, et al. Outcomes of patients undergoing early sepsis resuscitation for cryptic shock compared with overt shock. Resuscitation 2011; 82:1289–1293.
- Bakker J, Nijsten MW, Jansen TC. Clinical use of lactate monitoring in critically ill patients. Ann Intensive Care 2013; 3:12.
- Levy B, Gibot S, Franck P, Cravoisy A, Bollaert PE. Relation between muscle Na+K+ ATPase activity and raised lactate concentrations in septic shock: a prospective study. Lancet 2005; 365:871–875.
- Vary TC. Sepsis-induced alterations in pyruvate dehydrogenase complex activity in rat skeletal muscle: effects on plasma lactate. Shock 1996; 6:89–94.
- Brealey D, Brand M, Hargreaves I, et al. Association between mitochondrial dysfunction and severity and outcome of septic shock. Lancet 2002; 360:219–223.
- Shapiro NI, Howell MD, Talmor D, et al. Serum lactate as a predictor of mortality in emergency department patients with infection. Ann Emerg Med 2005; 45:524–528.
- Mikkelsen ME, Miltiades AN, Gaieski DF, et al. Serum lactate is associated with mortality in severe sepsis independent of organ failure and shock. Crit Care Med 2009; 37:1670–1677.
- Liu V, Morehouse JW, Soule J, Whippy A, Escobar GJ. Fluid volume, lactate values, and mortality in sepsis patients with intermediate lactate values. Ann Am Thorac Soc 2013; 10:466–473.
- Sterling SA, Puskarich MA, Shapiro NI, et al; Emergency Medicine Shock Research Network (EMShockNET). Characteristics and outcomes of patients with vasoplegic versus tissue dysoxic septic shock. Shock 2013; 40:11–14.
- Arnold RC, Shapiro NI, Jones AE, et al; Emergency Medicine Shock Research Network (EMShockNet) Investigators. Multicenter study of early lactate clearance as a determinant of survival in patients with presumed sepsis. Shock 2009; 32:35–39.
- Jones AE. Lactate clearance for assessing response to resuscitation in severe sepsis. Acad Emerg Med 2013; 20:844–847.
- Nguyen HB, Rivers EP, Knoblich BP, et al. Early lactate clearance is associated with improved outcome in severe sepsis and septic shock. Crit Care Med 2004; 32:1637–1642.
- Jansen TC, van Bommel J, Schoonderbeek FJ, et al; LACTATE study group. Early lactate-guided therapy in intensive care unit patients: a multicenter, open-label, randomized controlled trial. Am J Respir Crit Care Med 2010; 182:752–761.
- Husain FA, Martin MJ, Mullenix PS, Steele SR, Elliott DC. Serum lactate and base deficit as predictors of mortality and morbidity. Am J Surg 2003; 185:485–491.
- Puskarich MA, Trzeciak S, Shapiro NI, et al. Whole blood lactate kinetics in patients undergoing quantitative resuscitation for severe sepsis and septic shock. Chest 2013; 143:1548–1553.
- Marty P, Roquilly A, Vallee F, et al. Lactate clearance for death prediction in severe sepsis or septic shock patients during the first 24 hours in intensive care unit: an observational study. Ann Intensive Care 2013; 3:3.
- Dellinger RP, Levy MM, Rhodes A, et al; Surviving Sepsis Campaign Guidelines Committee including the Pediatric Subgroup. Surviving sepsis campaign: International guidelines for management of severe sepsis and septic shock: 2012. Crit Care Med 2013; 41:580–637.
- Vincent JL, De Backer D. Circulatory shock. N Engl J Med 2013; 369:1726–1734.
- Durairaj L, Schmidt GA. Fluid therapy in resuscitated sepsis: less is more. Chest 2008; 133:252–263.
- Vincent JL, Dufaye P, Berré J, Leeman M, Degaute JP, Kahn RJ. Serial lactate determinations during circulatory shock. Crit Care Med 1983; 11:449–451.
- Boyd JH, Forbes J, Nakada TA, Walley KR, Russell JA. Fluid resuscitation in septic shock: a positive fluid balance and elevated central venous pressure are associated with increased mortality. Crit Care Med 2011; 39:259–265.
- Finfer S, Bellomo R, Boyce N, French J, Myburgh J, Norton R; SAFE Study Investigators. A comparison of albumin and saline for fluid resuscitation in the intensive care unit. N Engl J Med 2004; 350:2247–2256.
- Myburgh JA, Finfer S, Bellomo R, et al; CHEST Investigators; Australian and New Zealand Intensive Care Society Clinical Trials Group. Hydroxyethyl starch or saline for fluid resuscitation in intensive care. N Engl J Med 2012; 367:1901–1911.
- Perner A, Haase N, Guttormsen AB, et al; 6S Trial Group; Scandinavian Critical Care Trials Group. Hydroxyethyl starch 130/0.42 versus Ringer’s acetate in severe sepsis. N Engl J Med 2012; 367:124–134.
- Bayer O, Reinhart K, Kohl M, et al. Effects of fluid resuscitation with synthetic colloids or crystalloids alone on shock reversal, fluid balance, and patient outcomes in patients with severe sepsis: a prospective sequential analysis. Crit Care Med 2012; 40:2543–2551.
- Delaney AP, Dan A, McCaffrey J, Finfer S. The role of albumin as a resuscitation fluid for patients with sepsis: a systematic review and meta-analysis. Crit Care Med 2011; 39:386–391.
- SAFE Study Investigators; Finfer S, McEvoy S, Bellomo R, McArthur C, Myburgh J, Norton R. Impact of albumin compared to saline on organ function and mortality of patients with severe sepsis. Intensive Care Med 2011; 37:86–96.
- Caironi P, Tognoni G, Masson S, et al; ALBIOS Study Investigators. Albumin replacement in patients with severe sepsis or septic shock. N Engl J Med 2014; 370:1412–1421.
- Bourgoin A, Leone M, Delmas A, Garnier F, Albanèse J, Martin C. Increasing mean arterial pressure in patients with septic shock: effects on oxygen variables and renal function. Crit Care Med 2005; 33:780–786.
- LeDoux D, Astiz ME, Carpati CM, Rackow EC. Effects of perfusion pressure on tissue perfusion in septic shock. Crit Care Med 2000; 28:2729–2732.
- Asfar P, Meziani F, Hamel JF, et al; SEPSISPAM Investigators. High versus low blood-pressure target in patients with septic shock. N Engl J Med 2014; 370:1583–1593.
- De Backer D, Biston P, Devriendt J, et al; SOAP II Investigators. Comparison of dopamine and norepinephrine in the treatment of shock. N Engl J Med 2010; 362:779–789.
- De Backer D, Aldecoa C, Njimi H, Vincent JL. Dopamine versus norepinephrine in the treatment of septic shock: a meta-analysis. Crit Care Med 2012; 40:725–730.
- Annane D, Vignon P, Renault A, et al; CATS Study Group. Norepinephrine plus dobutamine versus epinephrine alone for management of septic shock: a randomised trial. Lancet 2007; 370:676–684.
- Myburgh JA, Higgins A, Jovanovska A, Lipman J, Ramakrishnan N, Santamaria J; CAT Study investigators. A comparison of epinephrine and norepinephrine in critically ill patients. Intensive Care Med 2008; 34:2226–2234.
- Schmittinger CA, Torgersen C, Luckner G, Schröder DC, Lorenz I, Dünser MW. Adverse cardiac events during catecholamine vasopressor therapy: a prospective observational study. Intensive Care Med 2012; 38:950–958.
- Levy B, Perez P, Perny J, Thivilier C, Gerard A. Comparison of norepinephrine-dobutamine to epinephrine for hemodynamics, lactate metabolism, and organ function variables in cardiogenic shock. A prospective, randomized pilot study. Crit Care Med 2011; 39:450–455.
- Watt MJ, Howlett KF, Febbraio MA, Spriet LL, Hargreaves M. Adrenaline increases skeletal muscle glycogenolysis, pyruvate dehydrogenase activation and carbohydrate oxidation during moderate exercise in humans. J Physiol 2001; 534:269–278.
- De Backer D, Creteur J, Silva E, Vincent JL. Effects of dopamine, norepinephrine, and epinephrine on the splanchnic circulation in septic shock: which is best? Crit Care Med 2003; 31:1659–1667.
- Levy B, Bollaert PE, Charpentier C, et al. Comparison of norepinephrine and dobutamine to epinephrine for hemodynamics, lactate metabolism, and gastric tonometric variables in septic shock: a prospective, randomized study. Intensive Care Med 1997; 23:282–287.
- Polito A, Parisini E, Ricci Z, Picardo S, Annane D. Vasopressin for treatment of vasodilatory shock: an ESICM systematic review and meta-analysis. Intensive Care Med 2012; 38:9–19.
- Serpa Neto A, Nassar APJ, Cardoso SO, et al. Vasopressin and terlipressin in adult vasodilatory shock: a systematic review and meta-analysis of nine randomized controlled trials. Crit Care 2012; 16:R154.
- Russell JA, Walley KR, Singer J, et al; VASST Investigators. Vasopressin versus norepinephrine infusion in patients with septic shock. N Engl J Med 2008; 358:877–887.
- Gattinoni L, Brazzi L, Pelosi P, et al; for the SvO2 Collaborative Group. A trial of goal-oriented hemodynamic therapy in critically ill patients. N Engl J Med 1995; 333:1025–1032.
- Rivers E, Nguyen B, Havstad S, et al; Early Goal-Directed Therapy Collaborative Group. Early goal-directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med 2001; 345:1368–1377.
- ProCESS Investigators; Yealy DM, Kellum JA, Huang DT, et al. A randomized trial of protocol-based care for early septic shock. N Engl J Med 2014; 370:1683–1693.
- McNelis J, Marini CP, Jurkiewicz A, et al. Prolonged lactate clearance is associated with increased mortality in the surgical intensive care unit. Am J Surg 2001; 182:481–485.
- Jones AE, Shapiro NI, Trzeciak S, Arnold RC, Claremont HA, Kline JA; Emergency Medicine Shock Research Network (EMShockNet) Investigators. Lactate clearance vs central venous oxygen saturation as goals of early sepsis therapy: a randomized clinical trial. JAMA 2010; 303:739–746.
- Pope JV, Jones AE, Gaieski DF, Arnold RC, Trzeciak S, Shapiro NI; Emergency Medicine Shock Research Network (EMShockNet) Investigators. Multicenter study of central venous oxygen saturation (ScvO2) as a predictor of mortality in patients with sepsis. Ann Emerg Med 2010; 55:40–46.e1
- Kraut JA, Kurtz I. Use of base in the treatment of severe acidemic states. Am J Kidney Dis 2001; 38:703–727.
- Levraut J, Grimaud D. Treatment of metabolic acidosis. Curr Opin Crit Care 2003; 9:260–265.
- Orija AA, Jenks CL. Nucleoside analog reverse transcriptase inhibitor induced lactic acidosis treated with continuous renal replacement in the medical intensive care unit. Crit Care & Shock 2012; 15:9–11.
- Friesecke S, Abel P, Kraft M, Gerner A, Runge S. Combined renal replacement therapy for severe metformin-induced lactic acidosis. Nephrol Dial Transplant 2006; 21:2038–2039.
- Claessens YE, Cariou A, Monchi M, et al. Detecting life-threatening lactic acidosis related to nucleoside-analog treatment of human immunodeficiency virus-infected patients, and treatment with l-carnitine. Crit Care Med 2003; 31:1042–1047.
- Parikh S, Saneto R, Falk MJ, Anselm I, Cohen BH, Haas R; Mitochondrial Medicine Society. A modern approach to the treatment of mitochondrial disease. Curr Treat Options Neurol 2009; 11:414–430.
Physicians are paying more attention to serum lactate levels in hospitalized patients than in the past, especially with the advent of point-of-care testing. Elevated lactate levels are associated with tissue hypoxia and hypoperfusion but can also be found in a number of other conditions. Therefore, confusion can arise as to how to interpret elevated levels and subsequently manage these patients in a variety of settings.
In this review, we discuss the mechanisms underlying lactic acidosis, its prognostic implications, and its use as a therapeutic target in treating patients in septic shock and other serious disorders.
LACTATE IS A PRODUCT OF ANAEROBIC RESPIRATION
Lactate, or lactic acid, is produced from pyruvate as an end product of glycolysis under anaerobic conditions (Figure 1). It is produced in most tissues in the body, but primarily in skeletal muscle, brain, intestine, and red blood cells. During times of stress, lactate is also produced in the lungs, white blood cells, and splanchnic organs.
Most lactate in the blood is cleared by the liver, where it is the substrate for gluconeogenesis, and a small amount is cleared by the kidneys.1,2 The entire pathway by which lactate is produced and converted back to glucose is called the Cori cycle.
NORMAL LEVELS ARE LESS THAN ABOUT 2.0 MMOL/L
In this review, we will present lactate levels in the SI units of mmol/L (1 mmol/L = 9 mg/dL).
Basal lactate production is approximately 0.8 mmol/kg body weight/hour. The average normal arterial blood lactate level is approximately 0.620 mmol/L and the venous level is slightly higher at 0.997 mmol/L,3 but overall, arterial and venous lactate levels correlate well.
Normal lactate levels are less than 2 mmol/L,4 intermediate levels range from 2 to less than 4 mmol/L, and high levels are 4 mmol/L or higher.5
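The unit conversion and thresholds above can be expressed in a short sketch (a minimal illustration; the function names are ours, not from any clinical library):

```python
# Convert and classify a serum lactate level using the thresholds in the text.
# 1 mmol/L of lactate = 9 mg/dL (lactate molar mass is about 90 g/mol).

def mgdl_to_mmol(lactate_mgdl: float) -> float:
    """Convert a lactate level from mg/dL to SI units (mmol/L)."""
    return lactate_mgdl / 9.0

def classify_lactate(lactate_mmol: float) -> str:
    """Normal < 2 mmol/L; intermediate 2 to < 4 mmol/L; high >= 4 mmol/L."""
    if lactate_mmol < 2.0:
        return "normal"
    if lactate_mmol < 4.0:
        return "intermediate"
    return "high"

print(classify_lactate(mgdl_to_mmol(27)))  # 27 mg/dL = 3.0 mmol/L -> "intermediate"
```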
To minimize variations in measurement, blood samples should be drawn without a tourniquet into tubes containing fluoride, placed on ice, and processed quickly (ideally within 15 minutes).
INCREASED PRODUCTION, DECREASED CLEARANCE, OR BOTH
An elevated lactate level can be the result of increased production, decreased clearance, or both (as in liver dysfunction).
Type A lactic acidosis—due to hypoperfusion and hypoxia—occurs when there is a mismatch between oxygen delivery and consumption, with resultant anaerobic glycolysis.
The guidelines from the Surviving Sepsis Campaign6 emphasize using lactate levels to diagnose patients with sepsis-induced hypoperfusion. However, hyperlactatemia can indicate inadequate oxygen delivery due to any type of shock (Table 1).
Type B lactic acidosis—not due to hypoperfusion—occurs in a variety of conditions (Table 1), including liver disease, malignancy, use of certain medications (eg, metformin, epinephrine), total parenteral nutrition, human immunodeficiency virus infection, thiamine deficiency, mitochondrial myopathies, and congenital lactic acidosis.1–3,7 Yet other causes include trauma, excessive exercise, diabetic ketoacidosis, ethanol intoxication, dysfunction of the enzyme pyruvate dehydrogenase, and increased muscle degradation leading to increased production of pyruvate. In these latter scenarios, glucose metabolism exceeds the oxidation capacity of the mitochondria, and the rise in pyruvate concentration drives lactate production.8,9 Mitochondrial dysfunction and subsequent deficits in cellular oxygen use can also result in persistently high lactate levels.10
In some situations, patients with mildly elevated lactic acid levels in type B lactic acidosis can be monitored to ensure stability, rather than be treated aggressively.
HIGHER LEVELS AND LOWER CLEARANCE PREDICT DEATH
The higher the lactate level and the slower the rate of normalization (lactate clearance), the higher the risk of death.
Lactate levels and mortality rate
Shapiro et al11 showed that increases in lactate level are associated with proportional increases in the mortality rate. Mikkelsen et al12 showed that intermediate levels (2.0–3.9 mmol/L) and high levels (≥ 4 mmol/L) of serum lactate are associated with increased risk of death independent of organ failure and shock. Patients with mildly elevated and intermediate lactate levels and sepsis have higher rates of in-hospital and 30-day mortality, which correlate with the baseline lactate level.13
In a post hoc analysis of a randomized controlled trial, patients with septic shock who presented to the emergency department with hypotension and a lactate level higher than 2 mmol/L had a significantly higher in-hospital mortality rate than those who presented with hypotension and a lactate level of 2 mmol/L or less (26% vs 9%, P < .0001).14 These data suggest that elevated lactate levels may have a significant prognostic role, independent of blood pressure.
Slower clearance
The prognostic implications of lactate clearance (the reduction in lactate levels over time, as opposed to a single value at one point in time) have also been evaluated.
Lactate clearance of at least 10% at 6 hours after presentation has been associated with a lower mortality rate than nonclearance (19% vs 60%) in patients with sepsis or septic shock with elevated levels.15–17 Similar findings have been reported in a general intensive care unit population,18 as well as a surgical intensive care population.19
Puskarich et al20 have also shown that lactate normalization to less than 2 mmol/L during early sepsis resuscitation is the strongest predictor of survival (odds ratio [OR] 5.2), followed by lactate clearance of 50% (OR 4.0) within the first 6 hours of presentation. Not only is lactate clearance associated with improved outcomes, but a faster rate of clearance after initial presentation is also beneficial.15,16,18
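Lactate clearance, as used in these studies, is simply the percentage fall from the initial value; a minimal sketch (the function name is ours):

```python
def lactate_clearance(initial: float, repeat: float) -> float:
    """Percent reduction from the initial lactate level.
    A negative value indicates a rising lactate (nonclearance)."""
    return (initial - repeat) / initial * 100.0

# Example: 4.0 mmol/L at presentation, 3.2 mmol/L at 6 hours.
clearance = lactate_clearance(4.0, 3.2)
print(f"{clearance:.0f}% clearance")          # prints "20% clearance"
print("meets >= 10% goal:", clearance >= 10)  # True
print("normalized (< 2 mmol/L):", 3.2 < 2.0)  # False
```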
Lactate clearance over a longer period (> 6 hours) has not been studied in patients with septic shock. However, in the general intensive care unit population, therapy guided by lactate clearance for the first 8 hours after presentation has shown a reduction in mortality rate.18 There are no data available on outcomes of lactate-directed therapy beyond 8 hours, but lactate concentration and lactate clearance at 24 hours correlate with the 28-day mortality rate.21
Cryptic shock
Cryptic shock describes a state in a subgroup of patients who have elevated lactate levels and global tissue hypoxia despite being normotensive or even hypertensive. These patients have a higher mortality rate independent of blood pressure. Jansen et al18 found that patients with a lactate level higher than 4 mmol/L and preserved blood pressure had a mortality rate of 15%, while those without shock or hyperlactatemia had a mortality rate of 2.5%. In addition, patients with an elevated lactate level in the absence of hypotension have mortality rates similar to those in patients with high lactate levels and hypotension refractory to fluid boluses, suggesting the presence of tissue hypoxia even in these normotensive patients.6
HOW TO APPROACH AN ELEVATED LACTATE LEVEL
An elevated lactate level should prompt an evaluation for causes of decreased oxygen delivery, due either to a systemic low-flow state (as a result of decreased cardiac output) or severe anemia, or to regionally decreased perfusion (eg, limb or mesenteric ischemia). If tissue hypoxia is ruled out after an exhaustive workup, consideration should be given to causes of hyperlactatemia without concomitant tissue hypoxia (type B acidosis).
Treatment differs depending on the underlying mechanism of the lactate elevation; nevertheless, treatment is mostly related to optimizing oxygen delivery by giving fluids, packed red blood cells, and vasopressors or inotropic agents, or both (Figure 2). The specific treatment differs based on the shock state, but there are similarities that can guide the clinician.
FLUID SUPPORT
Giving fluids, with a goal of improving cardiac output, remains a cornerstone of initial therapy for most shock states.22,23
How much fluid?
Fluids should be given until the patient is no longer preload-dependent, although there is much debate about which assessment strategy should be used to determine if cardiac output will improve with more fluid (ie, fluid-responsiveness).24 In many cases, fluid resuscitation alone may be enough to restore hemodynamic stability, improve tissue perfusion, and reduce elevated lactate concentrations.25
The decision to give more fluids should not be made lightly, though, as a more positive fluid balance early in the course of septic shock and over 4 days has been associated with a higher mortality rate.26 Additionally, pushing fluids in patients with cardiogenic shock due to impaired left ventricular systolic function may lead to or worsen pulmonary edema. Therefore, the indiscriminate use of fluids should be avoided.
Which fluids?
Despite years of research, controversy persists about whether crystalloids or colloids are better for resuscitation. Randomized trials in heterogeneous intensive care unit patients have not detected differences in 28-day mortality rates between those allocated to crystalloids or 4% albumin27 and those allocated to crystalloids or hydroxyethyl starch.28
Hydroxyethyl starch may not be best. In a study of patients with severe sepsis, those randomized to receive hydroxyethyl starch had a higher 90-day mortality rate than patients randomized to crystalloids (51% vs 43%, P = .03).29 A sequential prospective before-and-after study did not detect a difference in the time to normalization (< 2.2 mmol/L) of lactate (P = .68) or cessation of vasopressors (P = .11) in patients with severe sepsis who received fluid resuscitation with crystalloids, gelatin, or hydroxyethyl starch. More patients who received hydroxyethyl starch in these studies developed acute kidney injury than those receiving crystalloids.28–30
Taken together, these data strongly suggest hydroxyethyl starch should not be used for fluid resuscitation in the intensive care unit.
Normal saline or albumin? Although some data suggest that albumin may be preferable to 0.9% sodium chloride in patients with severe sepsis,31,32 these analyses should be viewed as hypothesis-generating. There do not seem to be differences between fluid types in terms of subsequent serum lactate concentrations or achievement of lactate clearance goals.28–30 Until further studies are completed, both albumin and crystalloids are reasonable for resuscitation.
Caironi et al33 performed an open-label study comparing albumin replacement (with a goal serum albumin concentration of 3 g/dL) plus a crystalloid solution vs a crystalloid solution alone in patients with severe sepsis or septic shock. They detected no difference between the albumin and crystalloid groups in mortality rates at 28 days (31.8% vs 32.0%, P = .94) or 90 days (41.1% vs 43.6%, P = .29). However, patients in the albumin group had a shorter time to cessation of vasoactive agents (median 3 vs 4 days, P = .007) and lower cardiovascular Sequential Organ Failure Assessment subscores (median 1.20 vs 1.42, P = .03), and more frequently achieved a mean arterial pressure of at least 65 mm Hg within 6 hours of randomization (86.0% vs 82.5%, P = .04).
Although serum lactate levels were lower in the albumin group at baseline (1.7 mmol/L vs 1.8 mmol/L, P = .05), inspection of the data appears to show a similar daily lactate clearance rate between groups over the first 7 study days (although these data were not analyzed by the authors). Achievement of a lactate level lower than 2 mmol/L on the first day of therapy was not significantly different between groups (73.4% vs 72.5%, P = .11).33
In a post hoc subgroup analysis, patients with septic shock at baseline randomized to albumin had a lower 90-day mortality rate than patients randomized to crystalloid solutions (RR 0.87, 95% CI 0.77–0.99). There was no difference in the 90-day mortality rate in patients without septic shock (RR 1.13, 95% CI 0.92–1.39, P = .03 for heterogeneity).33
These data suggest that albumin replacement may not improve outcomes in patients with severe sepsis, but may have advantages in terms of hemodynamic variables (and potentially mortality) in patients with septic shock. The role of albumin replacement in patients with septic shock warrants further study.
VASOPRESSORS
Vasopressors, inotropes, or both should be given to patients who have signs of hypoperfusion (including elevated lactate levels) despite preload optimization or ongoing fluid administration. The most appropriate drug depends on the goal: vasopressors are used to increase systemic vascular resistance, while inotropes are used to improve cardiac output and oxygen delivery.
Blood pressure target
The Surviving Sepsis Campaign guidelines recommend a mean arterial blood pressure target of at least 65 mm Hg during initial resuscitation and when vasopressors are applied for patients with septic shock.22 This recommendation is based on small studies that did not show differences in serum lactate levels or regional blood flow when the mean arterial pressure was elevated above 65 mm Hg with norepinephrine.34,35 However, the campaign guidelines note that the mean arterial pressure goal must be individualized in order to achieve optimal perfusion.
A large, open-label trial36 detected no difference in 28-day mortality rates in patients with septic shock between those allocated to a mean arterial pressure goal of 80 to 85 mm Hg or 65 to 70 mm Hg (36.6% vs 34.0%, P = .57). Although lactate levels did not differ between groups, the incidence of new-onset atrial fibrillation was higher in the higher-target group (6.7% vs 2.8%, P = .02). Fewer patients with chronic hypertension needed renal replacement therapy in the higher pressure group, further emphasizing the need to individualize the mean arterial pressure goal for patients in shock.36
Which vasopressor agent?
Dopamine and norepinephrine have traditionally been the preferred initial vasopressors for patients with shock. Until recently there were few data to guide selection between the two, but this is changing.
In a 2010 study of 1,679 patients with shock requiring vasopressors, there was no difference in the 28-day mortality rate between patients randomized to dopamine or norepinephrine (53% vs 49%, P = .10).37 Patients allocated to dopamine, though, had a higher incidence of arrhythmias (24% vs 12%, P < .001) and more frequently required open-label norepinephrine (26% vs 20%, P < .001). Although lactate levels and the time to achievement of a mean arterial pressure of 65 mm Hg were similar between groups, patients allocated to norepinephrine had more vasopressor-free days through day 28.
An a priori-planned subgroup analysis evaluated the influence of the type of shock on patient outcome. Patients with cardiogenic shock randomized to dopamine had a higher mortality rate than those randomized to norepinephrine (P = .03). However, the overall effect of treatment did not differ among the shock subgroups (interaction P = .87), suggesting that the reported differences in mortality according to subgroup may be spurious.
In a 2012 meta-analysis of patients with septic shock, dopamine use was associated with a higher mortality rate than norepinephrine use.38
In light of these data, norepinephrine should be preferred over dopamine as the initial vasopressor in most types of shock.
Epinephrine does not offer an outcome advantage over norepinephrine and may be associated with a higher incidence of adverse events.39–42 Indeed, in a study of patients with septic shock, lactate concentrations on the first day after randomization were significantly higher in patients allocated to epinephrine than in patients allocated to norepinephrine plus dobutamine.39 Similar effects on lactate concentrations with epinephrine were seen in patients with various types of shock40 and in those with cardiogenic shock.42
These differences in lactate concentrations may be directly attributable to epinephrine. Epinephrine can increase lactate concentrations through glycolysis and pyruvate dehydrogenase activation by stimulation of sodium-potassium ATPase activity via beta-2 adrenergic receptors in skeletal muscles,43 as well as decrease splanchnic perfusion.42,44,45 These effects may preclude using lactate clearance as a resuscitation goal in patients receiving epinephrine. Epinephrine is likely best reserved for patients with refractory shock,22 particularly those in whom cardiac output is known to be low.
Phenylephrine, essentially a pure vasoconstrictor, should be avoided in low cardiac output states and is best reserved for patients who develop a tachyarrhythmia on norepinephrine.22
Vasopressin, also a pure vasoconstrictor that should be avoided in low cardiac output states, has been best studied in patients with vasodilatory shock. Although controversy exists on the mortality benefits of vasopressin in vasodilatory shock, it is a relatively safe drug with consistent norepinephrine-sparing effects when added to existing norepinephrine therapy.46,47 In patients with less severe septic shock, including those with low lactate concentrations, adding vasopressin to norepinephrine instead of continuing norepinephrine alone may confer a mortality advantage.48
OTHER MEASURES TO OPTIMIZE OXYGEN DELIVERY
In circulatory shock from any cause, tissue oxygen demand exceeds oxygen delivery. Once arterial oxygenation and hemoglobin levels (by packed red blood cell transfusion) have been optimized, cardiac output is the critical determinant of oxygen delivery. Cardiac output may be augmented by ensuring adequate preload (by fluid resuscitation) or by giving inotropes or vasodilators.
The optimal cardiac output is difficult to define, and the exact marker for determining when cardiac output should be augmented is unclear. A strategy of increasing cardiac output to predefined “supranormal” levels was not associated with a lower mortality rate.49 Therefore, the decision to augment cardiac output must be individualized and will likely vary in the same patient over time.23
A reasonable approach to determining when augmentation of cardiac output is necessary was proposed in a study by Rivers et al.50 In that study, in patients randomized to early goal-directed therapy, inotropes were recommended when the central venous oxygen saturation (Scvo2) was below 70% despite adequate fluid resuscitation (central venous pressure ≥ 8 mm Hg) and a hematocrit higher than 30%.
When an inotrope is indicated to improve cardiac output, dobutamine is usually the preferred agent. Dobutamine has a shorter half-life (allowing for easier titration) and causes less hypotension (assuming preload has been optimized) than phosphodiesterase type III inhibitors such as milrinone.
Mechanical support devices, such as intra-aortic balloon counterpulsation, and vasodilators can also be used to improve tissue perfusion in selected patients with low cardiac output syndromes.
USING LACTATE LEVELS TO GUIDE THERAPY
Lactate levels above 4.0 mmol/L
Lactate may be a useful marker for determining whether organ dysfunction is present and, hence, what course of therapy should be given, especially in sepsis. A serum lactate level higher than 4.0 mmol/L has been used as the trigger to start aggressive resuscitation in patients with sepsis.50,51
Traditionally, as delineated by Rivers et al50 in their landmark study of early goal-directed therapy, this entailed placing an arterial line and a central line for hemodynamic monitoring, with specific interventions directed at increasing the central venous pressure, mean arterial pressure, and central venous oxygen saturation.50 However, a recent study in a similar population of patients with sepsis with elevated lactate found no significant advantage of protocol-based resuscitation over care provided according to physician judgment, and no significant benefit of central venous catheterization and hemodynamic monitoring in all patients.51
Lactate clearance: 10% or above at 8 hours?
Regardless of the approach chosen, decreasing lactate levels can be interpreted as an adequate response to the interventions provided. Indeed, several groups of investigators have demonstrated the merits of lactate clearance alone as a prognostic indicator in patients requiring hemodynamic support.
McNelis et al52 retrospectively evaluated 95 postsurgical patients who required hemodynamic monitoring and found that the slower the lactate clearance, the higher the mortality rate.
Given the prognostic implications of lactate clearance, investigators have evaluated whether lactate clearance could be used as a surrogate resuscitation goal for optimizing oxygen delivery. Using lactate clearance may have significant practical advantages over using central venous oxygen saturation, since it does not require a central venous catheter or continuous oximetric monitoring.
In a study comparing these two resuscitation end points, patients were randomized to a goal of either central venous oxygen saturation of 70% or more or lactate clearance of 10% or more within the first 6 hours after presentation as a marker of oxygen delivery.53 Mortality rates were similar with either strategy. Of note, only 10% of the patients actually required therapies to improve their oxygen delivery. Furthermore, there were no differences in the treatments given (including fluids, vasopressors, inotropes, packed red blood cells) throughout the treatment period.
These findings provide several insights. First, few patients admitted to the emergency department with severe sepsis and treated with an initial quantitative resuscitation protocol require additional therapy for augmenting oxygen delivery. Second, lactate clearance, in a setting where initial resuscitation with fluids and vasopressors restores adequate oxygen delivery for the majority of patients, is likely as good a target for resuscitation as central venous oxygen saturation.
This study, however, does not address the question of whether lactate clearance is useful as an additional marker of oxygen delivery (in conjunction with central venous oxygen saturation). Indeed, caution should be taken in targeting central venous oxygen saturation goals alone, as patients with septic shock presenting with venous hyperoxia (central venous oxygen saturation > 89%) have been shown to have a higher mortality rate than patients with normoxia (central venous oxygen saturation 71%–89%).54
This was further demonstrated by Arnold et al in a study of patients presenting to the emergency department with severe sepsis.15 In this study, significant discordance between central venous oxygen saturation and lactate clearance was seen, where 79% of patients with less than 10% lactate clearance had concomitant central venous oxygen saturation of 70% or greater.
Jansen et al18 evaluated the role of targeting lactate clearance in conjunction with central venous oxygen saturation monitoring. In this study, critically ill patients with elevated lactate and inadequate lactate clearance were randomized to usual care or to resuscitation to adequate lactate clearance (20% or more). The therapies to optimize oxygen delivery were given according to the central venous oxygen saturation. Overall, after adjustment for predefined risk factors, the in-hospital mortality rate was lower in the lactate clearance group. This may signify that patients with sepsis and central venous oxygen saturation of 70% or more may continue to have poor lactate clearance, warranting further treatment.
Taken together, serum lactate may be helpful for prognostication, determination of the course of therapy, and quantification of tissue hypoperfusion for targeted therapies. Figure 2 presents our approach to an elevated lactate level. As performed in the study by Jansen et al,18 it seems reasonable to measure lactate levels every 2 hours for the first 8 hours of resuscitation in patients with type A lactic acidosis. These levels should be interpreted in the context of lactate clearance (at least 10%, but preferably 20%) and normalization, and should be treated with an approach similar to the one outlined in Figure 2.
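The serial-monitoring approach described above (a lactate level every 2 hours for the first 8 hours, interpreted against clearance and normalization goals) might be sketched as follows; the thresholds come from the text, but the function itself is only illustrative and is not a validated clinical tool:

```python
def assess_serial_lactate(levels_mmol: list[float]) -> dict:
    """Interpret serial lactate levels against the goals in the text:
    cumulative clearance of at least 10% (preferably 20%) from the first
    value, or normalization to below 2 mmol/L."""
    initial = levels_mmol[0]
    current = levels_mmol[-1]
    clearance = (initial - current) / initial * 100.0
    return {
        "clearance_pct": round(clearance, 1),
        "meets_10pct": clearance >= 10.0,
        "meets_20pct": clearance >= 20.0,
        "normalized": current < 2.0,
    }

# Hypothetical levels drawn every 2 h over the first 8 h of resuscitation.
print(assess_serial_lactate([4.8, 4.1, 3.4, 2.6, 1.9]))
```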
TREATING TYPE B LACTIC ACIDOSIS (NORMAL PERFUSION AND OXYGENATION)
Treating type B lactic acidosis is quite different because the goal is not to correct mismatches in oxygen consumption and delivery. Since most cases are due to underlying conditions such as malignancy or medications, treatment should be centered around eliminating the cause (eg, treat the malignancy, discontinue the offending medication). The main reason for treatment is to alleviate the harmful effects of acidosis. For example, acidosis can result in a negative inotropic effect.
Sodium bicarbonate, dichloroacetate, Carbicarb, and tromethamine have all been studied in the management of type B lactic acidosis, with little success.55,56
Renal replacement therapy has had some success in drug-induced lactic acidosis.57,58
l-carnitine has had promising results in treating patients with human immunodeficiency virus infection, since these patients are carnitine-deficient and carnitine plays an important role in mitochondrial function.59
Thiamine and biotin deficiencies, both of which can cause lactic acidosis, can occur in patients receiving total parenteral nutrition without vitamin supplementation and in patients who drink alcohol heavily. These nutrients should be supplemented accordingly.
Treatment of mitochondrial disorders includes antioxidants (coenzyme Q10, vitamin C, vitamin E) and amino acids (l-arginine).60
Taken together, these data strongly suggest hydroxyethyl starch should not be used for fluid resuscitation in the intensive care unit.
Normal saline or albumin? Although some data suggest that albumin may be preferable to 0.9% sodium chloride in patients with severe sepsis,31,32 these analyses should be viewed as hypothesis-generating. There do not seem to be differences between fluid types in terms of subsequent serum lactate concentrations or achievement of lactate clearance goals.28–30 Until further studies are completed, both albumin and crystalloids are reasonable for resuscitation.
Caironi et al33 performed an open-label study comparing albumin replacement (with a goal serum albumin concentration of 3 g/dL) plus a crystalloid solution vs a crystalloid solution alone in patients with severe sepsis or septic shock. They detected no difference between the albumin and crystalloid groups in mortality rates at 28 days (31.8% vs 32.0%, P = .94) or 90 days (41.1% vs 43.6%, P = .29). However, patients in the albumin group had a shorter time to cessation of vasoactive agents (median 3 vs 4 days, P = .007) and lower cardiovascular Sequential Organ Failure Assessment subscores (median 1.20 vs 1.42, P = .03), and more frequently achieved a mean arterial pressure of at least 65 mm Hg within 6 hours of randomization (86.0% vs 82.5%, P = .04).
Although serum lactate levels were lower in the albumin group at baseline (1.7 mmol/L vs 1.8 mmol/L, P = .05), inspection of the data appears to show a similar daily lactate clearance rate between groups over the first 7 study days (although these data were not analyzed by the authors). Achievement of a lactate level lower than 2 mmol/L on the first day of therapy was not significantly different between groups (73.4% vs 72.5%, P = .11).33
In a post hoc subgroup analysis, patients with septic shock at baseline randomized to albumin had a lower 90-day mortality rate than patients randomized to crystalloid solutions (RR 0.87, 95% CI 0.77–0.99). There was no difference in the 90-day mortality rate in patients without septic shock (RR 1.13, 95% CI 0.92–1.39, P = .03 for heterogeneity).33
These data suggest that albumin replacement may not improve outcomes in patients with severe sepsis, but may have advantages in terms of hemodynamic variables (and potentially mortality) in patients with septic shock. The role of albumin replacement in patients with septic shock warrants further study.
VASOPRESSORS
Vasopressors, inotropes, or both should be given to patients who have signs of hypoperfusion (including elevated lactate levels) despite preload optimization or ongoing fluid administration. The most appropriate drug depends on the goal: vasopressors are used to increase systemic vascular resistance, while inotropes are used to improve cardiac output and oxygen delivery.
Blood pressure target
The Surviving Sepsis Campaign guidelines recommend a mean arterial blood pressure target of at least 65 mm Hg during initial resuscitation and while vasopressors are being given for septic shock.22 This recommendation is based on small studies that found no differences in serum lactate levels or regional blood flow when the mean arterial pressure was raised above 65 mm Hg with norepinephrine.34,35 However, the campaign guidelines note that the mean arterial pressure goal must be individualized to achieve optimal perfusion.
A large, open-label trial36 detected no difference in 28-day mortality rates in patients with septic shock between those allocated to a mean arterial pressure goal of 80 to 85 mm Hg or 65 to 70 mm Hg (36.6% vs 34.0%, P = .57). Although lactate levels did not differ between groups, the incidence of new-onset atrial fibrillation was higher in the higher-target group (6.7% vs 2.8%, P = .02). Fewer patients with chronic hypertension needed renal replacement therapy in the higher pressure group, further emphasizing the need to individualize the mean arterial pressure goal for patients in shock.36
Which vasopressor agent?
Dopamine and norepinephrine have traditionally been the preferred initial vasopressors for patients with shock. Until recently there were few data to guide selection between the two, but this is changing.
In a 2010 study of 1,679 patients with shock requiring vasopressors, there was no difference in the 28-day mortality rate between patients randomized to dopamine or norepinephrine (53% vs 49%, P = .10).37 Patients allocated to dopamine, though, had a higher incidence of arrhythmias (24% vs 12%, P < .001) and more frequently required open-label norepinephrine (26% vs 20%, P < .001). Although lactate levels and the time to achievement of a mean arterial pressure of 65 mm Hg were similar between groups, patients allocated to norepinephrine had more vasopressor-free days through day 28.
An a priori-planned subgroup analysis evaluated the influence of the type of shock on patient outcome. Patients with cardiogenic shock randomized to dopamine had a higher mortality rate than those randomized to norepinephrine (P = .03). However, the overall effect of treatment did not differ among the shock subgroups (interaction P = .87), suggesting that the reported differences in mortality according to subgroup may be spurious.
In a 2012 meta-analysis of patients with septic shock, dopamine use was associated with a higher mortality rate than norepinephrine use.38
In light of these data, norepinephrine should be preferred over dopamine as the initial vasopressor in most types of shock.
Epinephrine does not offer an outcome advantage over norepinephrine and may be associated with a higher incidence of adverse events.39–42 Indeed, in a study of patients with septic shock, lactate concentrations on the first day after randomization were significantly higher in patients allocated to epinephrine than in patients allocated to norepinephrine plus dobutamine.39 Similar effects on lactate concentrations with epinephrine were seen in patients with various types of shock40 and in those with cardiogenic shock.42
These differences in lactate concentrations may be directly attributable to epinephrine. By stimulating beta-2 adrenergic receptors in skeletal muscle, epinephrine increases sodium-potassium ATPase activity, glycolysis, and pyruvate dehydrogenase activation, raising lactate concentrations43; it can also decrease splanchnic perfusion.42,44,45 These effects may preclude using lactate clearance as a resuscitation goal in patients receiving epinephrine. Epinephrine is likely best reserved for patients with refractory shock,22 particularly those in whom cardiac output is known to be low.
Phenylephrine, essentially a pure vasoconstrictor, should be avoided in low cardiac output states and is best reserved for patients who develop a tachyarrhythmia on norepinephrine.22
Vasopressin, also a pure vasoconstrictor that should be avoided in low cardiac output states, has been best studied in patients with vasodilatory shock. Although controversy exists on the mortality benefits of vasopressin in vasodilatory shock, it is a relatively safe drug with consistent norepinephrine-sparing effects when added to existing norepinephrine therapy.46,47 In patients with less severe septic shock, including those with low lactate concentrations, adding vasopressin to norepinephrine instead of continuing norepinephrine alone may confer a mortality advantage.48
OTHER MEASURES TO OPTIMIZE OXYGEN DELIVERY
In circulatory shock from any cause, tissue oxygen demand exceeds oxygen delivery. Once arterial oxygenation and hemoglobin levels (by packed red blood cell transfusion) have been optimized, cardiac output is the critical determinant of oxygen delivery. Cardiac output may be augmented by ensuring adequate preload (by fluid resuscitation) or by giving inotropes or vasodilators.
The optimal cardiac output is difficult to define, and the exact marker for determining when cardiac output should be augmented is unclear. A strategy of increasing cardiac output to predefined “supranormal” levels was not associated with a lower mortality rate.49 Therefore, the decision to augment cardiac output must be individualized and will likely vary in the same patient over time.23
A reasonable approach to determining when augmentation of cardiac output is necessary was proposed in the study by Rivers et al.50 In that study, in patients randomized to early goal-directed therapy, inotropes were recommended when the central venous oxygen saturation (Scvo2) remained below 70% despite adequate fluid resuscitation (central venous pressure ≥ 8 mm Hg) and a hematocrit higher than 30%.
When an inotrope is indicated to improve cardiac output, dobutamine is usually the preferred agent. Dobutamine has a shorter half-life (allowing for easier titration) and causes less hypotension (assuming preload has been optimized) than phosphodiesterase type III inhibitors such as milrinone.
Mechanical support devices, such as intra-aortic balloon counterpulsation, and vasodilators can also be used to improve tissue perfusion in selected patients with low cardiac output syndromes.
USING LACTATE LEVELS TO GUIDE THERAPY
Lactate levels above 4.0 mmol/L
Lactate may be a useful marker for determining whether organ dysfunction is present and, hence, what course of therapy should be given, especially in sepsis. A serum lactate level higher than 4.0 mmol/L has been used as the trigger to start aggressive resuscitation in patients with sepsis.50,51
Traditionally, as delineated by Rivers et al50 in their landmark study of early goal-directed therapy, this entailed placing an arterial line and a central line for hemodynamic monitoring, with specific interventions directed at increasing the central venous pressure, mean arterial pressure, and central venous oxygen saturation.50 However, a recent study in a similar population of patients with sepsis and elevated lactate found no significant advantage of protocol-based resuscitation over care provided according to physician judgment, and no significant benefit of mandatory central venous catheterization and hemodynamic monitoring in all patients.51
Lactate clearance: 10% or above at 8 hours?
Regardless of the approach chosen, decreasing lactate levels can be interpreted as an adequate response to the interventions provided. Indeed, several groups of investigators have demonstrated the merits of lactate clearance alone as a prognostic indicator in patients requiring hemodynamic support.
McNelis et al52 retrospectively evaluated 95 postsurgical patients who required hemodynamic monitoring and found that the slower the lactate clearance, the higher the mortality rate.
Given the prognostic implications of lactate clearance, investigators have evaluated whether lactate clearance could be used as a surrogate resuscitation goal for optimizing oxygen delivery. Using lactate clearance may have significant practical advantages over using central venous oxygen saturation, since it does not require a central venous catheter or continuous oximetric monitoring.
In a study comparing these two resuscitation end points, patients were randomized to a goal of either central venous oxygen saturation of 70% or more or lactate clearance of 10% or more within the first 6 hours after presentation as a marker of oxygen delivery.53 Mortality rates were similar with either strategy. Of note, only 10% of the patients actually required therapies to improve their oxygen delivery. Furthermore, there were no differences in the treatments given (including fluids, vasopressors, inotropes, packed red blood cells) throughout the treatment period.
These findings provide several insights. First, few patients admitted to the emergency department with severe sepsis and treated with an initial quantitative resuscitation protocol require additional therapy for augmenting oxygen delivery. Second, lactate clearance, in a setting where initial resuscitation with fluids and vasopressors restores adequate oxygen delivery for the majority of patients, is likely as good a target for resuscitation as central venous oxygen saturation.
This study, however, does not address whether lactate clearance is useful as an additional marker of oxygen delivery (in conjunction with central venous oxygen saturation). Indeed, caution should be exercised when targeting central venous oxygen saturation alone, as patients with septic shock presenting with venous hyperoxia (central venous oxygen saturation > 89%) have been shown to have a higher mortality rate than patients with normoxia (central venous oxygen saturation 71%–89%).54
This was further demonstrated by Arnold et al15 in a study of patients presenting to the emergency department with severe sepsis. There was significant discordance between central venous oxygen saturation and lactate clearance: 79% of patients with less than 10% lactate clearance had a concomitant central venous oxygen saturation of 70% or greater.
Jansen et al18 evaluated the role of targeting lactate clearance in conjunction with central venous oxygen saturation monitoring. In this study, critically ill patients with elevated lactate and inadequate lactate clearance were randomized to usual care or to resuscitation to adequate lactate clearance (20% or more). The therapies to optimize oxygen delivery were given according to the central venous oxygen saturation. Overall, after adjustment for predefined risk factors, the in-hospital mortality rate was lower in the lactate clearance group. This may signify that patients with sepsis and central venous oxygen saturation of 70% or more may continue to have poor lactate clearance, warranting further treatment.
Taken together, these data suggest that serum lactate can be helpful for prognostication, for determining the course of therapy, and for quantifying tissue hypoperfusion to guide targeted therapies. Figure 2 presents our approach to an elevated lactate level. As in the study by Jansen et al,18 it seems reasonable to measure lactate levels every 2 hours for the first 8 hours of resuscitation in patients with type A lactic acidosis. These levels should be interpreted in the context of lactate clearance (at least 10%, but preferably 20%) and normalization, and treatment should follow an approach similar to the one outlined in Figure 2.
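The clearance arithmetic behind these targets is simple; the following is a minimal sketch (the function names and the default goal are illustrative, not part of any validated protocol):

```python
def lactate_clearance_percent(initial: float, current: float) -> float:
    """Percent decrease from the initial serum lactate (mmol/L)."""
    return (initial - current) / initial * 100.0

def adequate_clearance(initial: float, current: float,
                       goal_percent: float = 10.0) -> bool:
    """True if the decrease meets the chosen clearance goal
    (>= 10% in Jones et al,53 >= 20% in Jansen et al18)."""
    return lactate_clearance_percent(initial, current) >= goal_percent

# Example: a fall from 4.0 to 3.0 mmol/L is a 25% clearance,
# adequate against either the 10% or the 20% goal
assert adequate_clearance(4.0, 3.0, goal_percent=20.0)
```

Serial values should still be interpreted alongside normalization and the clinical trajectory, as discussed above, rather than as a stand-alone endpoint.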
TREATING TYPE B LACTIC ACIDOSIS (NORMAL PERFUSION AND OXYGENATION)
Treating type B lactic acidosis is quite different because the goal is not to correct a mismatch between oxygen consumption and delivery. Since most cases are due to underlying conditions such as malignancy or medications, treatment should center on eliminating the cause (eg, treat the malignancy, discontinue the offending medication). The main reason for treatment is to alleviate the harmful effects of acidosis, which can, for example, exert a negative inotropic effect.
Sodium bicarbonate, dichloroacetate, carbicarb, and tromethamine have all been studied in the management of type B lactic acidosis, with little success.55,56
Renal replacement therapy has had some success in drug-induced lactic acidosis.57,58
l-carnitine has had promising results in treating patients with human immunodeficiency virus infection, since these patients are carnitine-deficient and carnitine plays an important role in mitochondrial function.59
Thiamine and biotin deficiencies, which can occur in patients receiving total parenteral nutrition without vitamin supplementation and in patients who drink alcohol heavily, can cause lactic acidosis; these nutrients should be supplemented accordingly.
Treatment of mitochondrial disorders includes antioxidants (coenzyme Q10, vitamin C, vitamin E) and amino acids (l-arginine).60
- Andersen LW, Mackenhauer J, Roberts JC, Berg KM, Cocchi MN, Donnino MW. Etiology and therapeutic approach to elevated lactate levels. Mayo Clin Proc 2013; 88:1127–1140.
- Fuller BM, Dellinger RP. Lactate as a hemodynamic marker in the critically ill. Curr Opin Crit Care 2012; 18:267–272.
- Fall PJ, Szerlip HM. Lactic acidosis: from sour milk to septic shock. J Intensive Care Med 2005; 20:255–271.
- Kruse O, Grunnet N, Barfod C. Blood lactate as a predictor for in-hospital mortality in patients admitted acutely to hospital: a systematic review. Scand J Trauma Resusc Emerg Med 2011;19:74.
- Howell MD, Donnino M, Clardy P, Talmor D, Shapiro NI. Occult hypoperfusion and mortality in patients with suspected infection. Intensive Care Med 2007; 33:1892–1899.
- Puskarich MA, Trzeciak S, Shapiro NI, et al. Outcomes of patients undergoing early sepsis resuscitation for cryptic shock compared with overt shock. Resuscitation 2011; 82:1289–1293.
- Bakker J, Nijsten MW, Jansen TC. Clinical use of lactate monitoring in critically ill patients. Ann Intensive Care 2013; 3:12.
- Levy B, Gibot S, Franck P, Cravoisy A, Bollaert PE. Relation between muscle Na+K+ ATPase activity and raised lactate concentrations in septic shock: a prospective study. Lancet 2005; 365:871–875.
- Vary TC. Sepsis-induced alterations in pyruvate dehydrogenase complex activity in rat skeletal muscle: effects on plasma lactate. Shock 1996; 6:89–94.
- Brealey D, Brand M, Hargreaves I, et al. Association between mitochondrial dysfunction and severity and outcome of septic shock. Lancet 2002; 360:219–223.
- Shapiro NI, Howell MD, Talmor D, et al. Serum lactate as a predictor of mortality in emergency department patients with infection. Ann Emerg Med 2005; 45:524–528.
- Mikkelsen ME, Miltiades AN, Gaieski DF, et al. Serum lactate is associated with mortality in severe sepsis independent of organ failure and shock. Crit Care Med 2009; 37:1670–1677.
- Liu V, Morehouse JW, Soule J, Whippy A, Escobar GJ. Fluid volume, lactate values, and mortality in sepsis patients with intermediate lactate values. Ann Am Thorac Soc 2013; 10:466–473.
- Sterling SA, Puskarich MA, Shapiro NI, et al; Emergency Medicine Shock Research Network (EMShockNET). Characteristics and outcomes of patients with vasoplegic versus tissue dysoxic septic shock. Shock 2013; 40:11–14.
- Arnold RC, Shapiro NI, Jones AE, et al; Emergency Medicine Shock Research Network (EMShockNet) Investigators. Multicenter study of early lactate clearance as a determinant of survival in patients with presumed sepsis. Shock 2009; 32:35–39.
- Jones AE. Lactate clearance for assessing response to resuscitation in severe sepsis. Acad Emerg Med 2013; 20:844–847.
- Nguyen HB, Rivers EP, Knoblich BP, et al. Early lactate clearance is associated with improved outcome in severe sepsis and septic shock. Crit Care Med 2004; 32:1637–1642.
- Jansen TC, van Bommel J, Schoonderbeek FJ, et al; LACTATE study group. Early lactate-guided therapy in intensive care unit patients: a multicenter, open-label, randomized controlled trial. Am J Respir Crit Care Med 2010; 182:752–761.
- Husain FA, Martin MJ, Mullenix PS, Steele SR, Elliott DC. Serum lactate and base deficit as predictors of mortality and morbidity. Am J Surg 2003; 185:485–491.
- Puskarich MA, Trzeciak S, Shapiro NI, et al. Whole blood lactate kinetics in patients undergoing quantitative resuscitation for severe sepsis and septic shock. Chest 2013; 143:1548–1553.
- Marty P, Roquilly A, Vallee F, et al. Lactate clearance for death prediction in severe sepsis or septic shock patients during the first 24 hours in intensive care unit: an observational study. Ann Intensive Care 2013; 3:3.
- Dellinger RP, Levy MM, Rhodes A, et al; Surviving Sepsis Campaign Guidelines Committee including the Pediatric Subgroup. Surviving sepsis campaign: International guidelines for management of severe sepsis and septic shock: 2012. Crit Care Med 2013; 41:580–637.
- Vincent JL, De Backer D. Circulatory shock. N Engl J Med 2013; 369:1726–1734.
- Durairaj L, Schmidt GA. Fluid therapy in resuscitated sepsis: less is more. Chest 2008; 133:252–263.
- Vincent JL, Dufaye P, Berré J, Leeman M, Degaute JP, Kahn RJ. Serial lactate determinations during circulatory shock. Crit Care Med 1983; 11:449–451.
- Boyd JH, Forbes J, Nakada TA, Walley KR, Russell JA. Fluid resuscitation in septic shock: a positive fluid balance and elevated central venous pressure are associated with increased mortality. Crit Care Med 2011; 39:259–265.
- Finfer S, Bellomo R, Boyce N, French J, Myburgh J, Norton R; SAFE Study Investigators. A comparison of albumin and saline for fluid resuscitation in the intensive care unit. N Engl J Med 2004; 350:2247–2256.
- Myburgh JA, Finfer S, Bellomo R, et al; CHEST Investigators; Australian and New Zealand Intensive Care Society Clinical Trials Group. Hydroxyethyl starch or saline for fluid resuscitation in intensive care. N Engl J Med 2012; 367:1901–1911.
- Perner A, Haase N, Guttormsen AB, et al; 6S Trial Group; Scandinavian Critical Care Trials Group. Hydroxyethyl starch 130/0.42 versus Ringer’s acetate in severe sepsis. N Engl J Med 2012; 367:124–134.
- Bayer O, Reinhart K, Kohl M, et al. Effects of fluid resuscitation with synthetic colloids or crystalloids alone on shock reversal, fluid balance, and patient outcomes in patients with severe sepsis: a prospective sequential analysis. Crit Care Med 2012; 40:2543–2551.
- Delaney AP, Dan A, McCaffrey J, Finfer S. The role of albumin as a resuscitation fluid for patients with sepsis: a systematic review and meta-analysis. Crit Care Med 2011; 39:386–391.
- SAFE Study Investigators; Finfer S, McEvoy S, Bellomo R, McArthur C, Myburgh J, Norton R. Impact of albumin compared to saline on organ function and mortality of patients with severe sepsis. Intensive Care Med 2011; 37:86–96.
- Caironi P, Tognoni G, Masson S, et al; ALBIOS Study Investigators. Albumin replacement in patients with severe sepsis or septic shock. N Engl J Med 2014; 370:1412–1421.
- Bourgoin A, Leone M, Delmas A, Garnier F, Albanèse J, Martin C. Increasing mean arterial pressure in patients with septic shock: effects on oxygen variables and renal function. Crit Care Med 2005; 33:780–786.
- LeDoux D, Astiz ME, Carpati CM, Rackow EC. Effects of perfusion pressure on tissue perfusion in septic shock. Crit Care Med 2000; 28:2729–2732.
- Asfar P, Meziani F, Hamel JF, et al; SEPSISPAM Investigators. High versus low blood-pressure target in patients with septic shock. N Engl J Med 2014; 370:1583–1593.
- De Backer D, Biston P, Devriendt J, et al; SOAP II Investigators. Comparison of dopamine and norepinephrine in the treatment of shock. N Engl J Med 2010; 362:779–789.
- De Backer D, Aldecoa C, Njimi H, Vincent JL. Dopamine versus norepinephrine in the treatment of septic shock: a meta-analysis. Crit Care Med 2012; 40:725–730.
- Annane D, Vignon P, Renault A, et al; CATS Study Group. Norepinephrine plus dobutamine versus epinephrine alone for management of septic shock: a randomised trial. Lancet 2007; 370:676–684.
- Myburgh JA, Higgins A, Jovanovska A, Lipman J, Ramakrishnan N, Santamaria J; CAT Study investigators. A comparison of epinephrine and norepinephrine in critically ill patients. Intensive Care Med 2008; 34:2226–2234.
- Schmittinger CA, Torgersen C, Luckner G, Schröder DC, Lorenz I, Dünser MW. Adverse cardiac events during catecholamine vasopressor therapy: a prospective observational study. Intensive Care Med 2012; 38:950–958.
- Levy B, Perez P, Perny J, Thivilier C, Gerard A. Comparison of norepinephrine-dobutamine to epinephrine for hemodynamics, lactate metabolism, and organ function variables in cardiogenic shock. A prospective, randomized pilot study. Crit Care Med 2011; 39:450–455.
- Watt MJ, Howlett KF, Febbraio MA, Spriet LL, Hargreaves M. Adrenaline increases skeletal muscle glycogenolysis, pyruvate dehydrogenase activation and carbohydrate oxidation during moderate exercise in humans. J Physiol 2001; 534:269–278.
- De Backer D, Creteur J, Silva E, Vincent JL. Effects of dopamine, norepinephrine, and epinephrine on the splanchnic circulation in septic shock: which is best? Crit Care Med 2003; 31:1659–1667.
- Levy B, Bollaert PE, Charpentier C, et al. Comparison of norepinephrine and dobutamine to epinephrine for hemodynamics, lactate metabolism, and gastric tonometric variables in septic shock: a prospective, randomized study. Intensive Care Med 1997; 23:282–287.
- Polito A, Parisini E, Ricci Z, Picardo S, Annane D. Vasopressin for treatment of vasodilatory shock: an ESICM systematic review and meta-analysis. Intensive Care Med 2012; 38:9–19.
- Serpa Neto A, Nassar APJ, Cardoso SO, et al. Vasopressin and terlipressin in adult vasodilatory shock: a systematic review and meta-analysis of nine randomized controlled trials. Crit Care 2012; 16:R154.
- Russell JA, Walley KR, Singer J, et al; VASST Investigators. Vasopressin versus norepinephrine infusion in patients with septic shock. N Engl J Med 2008; 358:877–887.
- Gattinoni L, Brazzi L, Pelosi P, et al; for the SvO2 Collaborative Group. A trial of goal-oriented hemodynamic therapy in critically ill patients. N Engl J Med 1995; 333:1025–1032.
- Rivers E, Nguyen B, Havstad S, et al; Early Goal-Directed Therapy Collaborative Group. Early goal-directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med 2001; 345:1368–1377.
- ProCESS Investigators; Yealy DM, Kellum JA, Huang DT, et al. A randomized trial of protocol-based care for early septic shock. N Engl J Med 2014; 370:1683–1693.
- McNelis J, Marini CP, Jurkiewicz A, et al. Prolonged lactate clearance is associated with increased mortality in the surgical intensive care unit. Am J Surg 2001; 182:481–485.
- Jones AE, Shapiro NI, Trzeciak S, Arnold RC, Claremont HA, Kline JA; Emergency Medicine Shock Research Network (EMShockNet) Investigators. Lactate clearance vs central venous oxygen saturation as goals of early sepsis therapy: a randomized clinical trial. JAMA 2010; 303:739–746.
- Pope JV, Jones AE, Gaieski DF, Arnold RC, Trzeciak S, Shapiro NI; Emergency Medicine Shock Research Network (EMShockNet) Investigators. Multicenter study of central venous oxygen saturation (ScvO2) as a predictor of mortality in patients with sepsis. Ann Emerg Med 2010; 55:40–46.e1
- Kraut JA, Kurtz I. Use of base in the treatment of severe acidemic states. Am J Kidney Dis 2001; 38:703–727.
- Levraut J, Grimaud D. Treatment of metabolic acidosis. Curr Opin Crit Care 2003; 9:260–265.
- Orija AA, Jenks CL. Nucleoside analog reverse transcriptase inhibitor induced lactic acidosis treated with continuous renal replacement in the medical intensive care unit. Crit Care & Shock 2012; 15:9–11.
- Friesecke S, Abel P, Kraft M, Gerner A, Runge S. Combined renal replacement therapy for severe metformin-induced lactic acidosis. Nephrol Dial Transplant 2006; 21:2038–2039.
- Claessens YE, Cariou A, Monchi M, et al. Detecting life-threatening lactic acidosis related to nucleoside-analog treatment of human immunodeficiency virus-infected patients, and treatment with l-carnitine. Crit Care Med 2003; 31:1042–1047.
- Parikh S, Saneto R, Falk MJ, Anselm I, Cohen BH, Haas R; Medicine Society TM. A modern approach to the treatment of mitochondrial disease. Curr Treat Options Neurol 2009; 11:414–430.
- Bayer O, Reinhart K, Kohl M, et al. Effects of fluid resuscitation with synthetic colloids or crystalloids alone on shock reversal, fluid balance, and patient outcomes in patients with severe sepsis: a prospective sequential analysis. Crit Care Med 2012; 40:2543–2551.
- Delaney AP, Dan A, McCaffrey J, Finfer S. The role of albumin as a resuscitation fluid for patients with sepsis: a systematic review and meta-analysis. Crit Care Med 2011; 39:386–391.
- SAFE Study Investigators; Finfer S, McEvoy S, Bellomo R, McArthur C, Myburgh J, Norton R. Impact of albumin compared to saline on organ function and mortality of patients with severe sepsis. Intensive Care Med 2011; 37:86–96.
- Caironi P, Tognoni G, Masson S, et al; ALBIOS Study Investigators. Albumin replacement in patients with severe sepsis or septic shock. N Engl J Med 2014; 370:1412–1421.
- Bourgoin A, Leone M, Delmas A, Garnier F, Albanèse J, Martin C. Increasing mean arterial pressure in patients with septic shock: effects on oxygen variables and renal function. Crit Care Med 2005; 33:780–786.
- LeDoux D, Astiz ME, Carpati CM, Rackow EC. Effects of perfusion pressure on tissue perfusion in septic shock. Crit Care Med 2000; 28:2729–2732.
- Asfar P, Meziani F, Hamel JF, et al; SEPSISPAM Investigators. High versus low blood-pressure target in patients with septic shock. N Engl J Med 2014; 370:1583–1593.
- De Backer D, Biston P, Devriendt J, et al; SOAP II Investigators. Comparison of dopamine and norepinephrine in the treatment of shock. N Engl J Med 2010; 362:779–789.
- De Backer D, Aldecoa C, Njimi H, Vincent JL. Dopamine versus norepinephrine in the treatment of septic shock: a meta-analysis. Crit Care Med 2012; 40:725–730.
- Annane D, Vignon P, Renault A, et al: CATS Study Group. Norepinephrine plus dobutamine versus epinephrine alone for management of septic shock: a randomised trial. Lancet 2007; 370:676–684.
- Myburgh JA, Higgins A, Jovanovska A, Lipman J, Ramakrishnan N, Santamaria J; CAT Study investigators. A comparison of epinephrine and norepinephrine in critically ill patients. Intensive Care Med 2008; 34:2226–2234.
- Schmittinger CA, Torgersen C, Luckner G, Schröder DC, Lorenz I, Dünser MW. Adverse cardiac events during catecholamine vasopressor therapy: a prospective observational study. Intensive Care Med 2012; 38:950–958.
- Levy B, Perez P, Perny J, Thivilier C, Gerard A. Comparison of norepinephrine-dobutamine to epinephrine for hemodynamics, lactate metabolism, and organ function variables in cardiogenic shock. A prospective, randomized pilot study. Crit Care Med 2011; 39:450–455.
- Watt MJ, Howlett KF, Febbraio MA, Spriet LL, Hargreaves M. Adrenaline increases skeletal muscle glycogenolysis, pyruvate dehydrogenase activation and carbohydrate oxidation during moderate exercise in humans. J Physiol 2001; 534:269–278.
- De Backer D, Creteur J, Silva E, Vincent JL. Effects of dopamine, norepinephrine, and epinephrine on the splanchnic circulation in septic shock: which is best? Crit Care Med 2003; 31:1659–1667.
- Levy B, Bollaert PE, Charpentier C, et al. Comparison of norepinephrine and dobutamine to epinephrine for hemodynamics, lactate metabolism, and gastric tonometric variables in septic shock: a prospective, randomized study. Intensive Care Med 1997; 23:282–287.
- Polito A, Parisini E, Ricci Z, Picardo S, Annane D. Vasopressin for treatment of vasodilatory shock: an ESICM systematic review and meta-analysis. Intensive Care Med 2012; 38:9–19.
- Serpa Neto A, Nassar APJ, Cardoso SO, et al. Vasopressin and terlipressin in adult vasodilatory shock: a systematic review and meta-analysis of nine randomized controlled trials. Crit Care 2012; 16:R154.
- Russell JA, Walley KR, Singer J, et al; VASST Investigators. Vasopressin versus norepinephrine infusion in patients with septic shock. N Engl J Med 2008; 358:877–887.
- Gattinoni L, Brazzi L, Pelosi P, et al; for the SvO2 Collaborative Group. A trial of goal-oriented hemodynamic therapy in critically ill patients. N Engl J Med 1995; 333:1025–1032.
- Rivers E, Nguyen B, Havstad S, et al; Early Goal-Directed Therapy Collaborative Group. Early goal-directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med 2001; 345:1368–1377.
- ProCESS Investigators; Yealy DM, Kellum JA, Huang DT, et al. A randomized trial of protocol-based care for early septic shock. N Engl J Med 2014; 370:1683–1693.
- McNelis J, Marini CP, Jurkiewicz A, et al. Prolonged lactate clearance is associated with increased mortality in the surgical intensive care unit. Am J Surg 2001; 182:481–485.
- Jones AE, Shapiro NI, Trzeciak S, Arnold RC, Claremont HA, Kline JA; Emergency Medicine Shock Research Network (EMShockNet) Investigators. Lactate clearance vs central venous oxygen saturation as goals of early sepsis therapy: a randomized clinical trial. JAMA 2010; 303:739–746.
- Pope JV, Jones AE, Gaieski DF, Arnold RC, Trzeciak S, Shapiro NI; Emergency Medicine Shock Research Network (EMShockNet) Investigators. Multicenter study of central venous oxygen saturation (ScvO2) as a predictor of mortality in patients with sepsis. Ann Emerg Med 2010; 55:40–46.e1
- Kraut JA, Kurtz I. Use of base in the treatment of severe acidemic states. Am J Kidney Dis 2001; 38:703–727.
- Levraut J, Grimaud D. Treatment of metabolic acidosis. Curr Opin Crit Care 2003; 9:260–265.
- Orija AA, Jenks CL. Nucleoside analog reverse transcriptase inhibitor induced lactic acidosis treated with continuous renal replacement in the medical intensive care unit. Crit Care & Shock 2012; 15:9–11.
- Friesecke S, Abel P, Kraft M, Gerner A, Runge S. Combined renal replacement therapy for severe metformin-induced lactic acidosis. Nephrol Dial Transplant 2006; 21:2038–2039.
- Claessens YE, Cariou A, Monchi M, et al. Detecting life-threatening lactic acidosis related to nucleoside-analog treatment of human immunodeficiency virus-infected patients, and treatment with l-carnitine. Crit Care Med 2003; 31:1042–1047.
- Parikh S, Saneto R, Falk MJ, Anselm I, Cohen BH, Haas R; Medicine Society TM. A modern approach to the treatment of mitochondrial disease. Curr Treat Options Neurol 2009; 11:414–430.
KEY POINTS
- Serum lactate levels can become elevated through a variety of underlying processes, categorized as increased production due to hypoperfusion and hypoxia (type A lactic acidosis), or as increased production or decreased clearance not attributable to hypoperfusion or hypoxia (type B).
- The higher the lactate level and the slower the rate of normalization (lactate clearance), the higher the risk of death.
- Treatments differ depending on the underlying mechanism of the lactate elevation. Thus, identifying the reason for hyperlactatemia and differentiating between type A and B lactic acidosis are of the utmost importance.
- Treatment of type A lactic acidosis aims to improve perfusion and match oxygen consumption with oxygen delivery by giving fluids, packed red blood cells, and vasopressors or inotropic agents, or both.
- Treatment of type B involves more specific management, such as discontinuing offending medications or supplementing key cofactors, such as thiamine, needed for aerobic metabolism of pyruvate.