Manage most SEGAs with rapamycin analogs, not surgery

SAN DIEGO – Medical management with sirolimus or everolimus for pediatric patients with tuberous sclerosis complex and subependymal giant cell astrocytomas is more effective and safer than surgery, researchers from the University of Cincinnati and University of California, Los Angeles, have found.

Although the benign tumors have traditionally been left to surgeons, it’s become clear in recent years that rapamycin analogs are effective, too. The question has been which approach is best. Medical management "is known to be pretty mild compared to the surgery," but it’s not curative, explained lead investigator Susanne Yoon, the University of Cincinnati medical student who presented the results at the annual meeting of the American Academy of Neurology.

The team compared outcomes for 23 SEGA (subependymal giant cell astrocytoma) patients who underwent surgery, 81 who took sirolimus or everolimus, and 9 who got both. The surgery patients were diagnosed when they were about 10 years old and were followed for a median of 8.9 years; the medical patients were about 7 years old when diagnosed, and were followed for a median of 2.8 years. Boys made up the majority of both groups.

None of the children who took a rapamycin analog needed surgery; tumors shrank by more than half in 61% (45). The drugs caused infections, weight change, or hyperlipidemia in some, but only 13% (11) needed to stop the drug or go to the hospital because of side effects.

Meanwhile, surgery cured just 39% (9) of the children who got it, sometimes after two or three operations; 61% (14) of those patients had prolonged hospitalizations or were hospitalized due to postoperative complications that included intracranial hemorrhage in 8, hydrocephalus/shunt malfunction in 6, neurologic impairment, and seizures.

"Not only does medical management win in efficacy, but it also wins in the safety issues. Rapalog [rapamycin] therapy, alone or in combination, is becoming a cornerstone of tumor management" in neurocutaneous disorders, said Dr. David H. Viskochil, professor of pediatrics at the University of Utah, Salt Lake City, commenting on the study.

"Of course, there are emergent situations where you’ve just got to go in and get the tumor out; you can’t wait 3 months to see" if drugs work. "But if a child is just starting to show some symptoms and not deteriorating, then you can start with medicine first and see what happens," he said.

"The question is if you got [SEGAs] really early, would surgical cure be much more likely? The studies aren’t quite there yet," he said in an interview.

Ms. Yoon and Dr. Viskochil said they have no disclosures.

aotto@frontlinemedcom.com

AT THE 2013 AAN ANNUAL MEETING

Vitals

Major finding: Rapamycin analogs shrink SEGA tumors by more than 50% in a majority of children and obviate the need for surgery.

Data source: Comparison of surgical and medical treatment of SEGA tumors in 113 children.

Disclosures: Ms. Yoon and Dr. Viskochil said they have no disclosures.

Answering questions on call: Pediatric resident physicians' use of handoffs and other resources

Hospital communication failures are a leading cause of serious errors and adverse events in the United States.[1, 2, 3, 4] With the implementation of duty‐hour restrictions for resident physicians,[5] there has been particular focus on the transfer of information during handoffs at change of shift.[6, 7] Many residency programs have sought to improve the processes of written and verbal handoffs through various initiatives, including: (1) automated linkage of handoff forms to electronic medical records (EMRs)[8, 9, 10]; (2) introduction of oral communication curricula, handoff simulation, or mnemonics[11, 12, 13]; and (3) faculty oversight of housestaff handoffs.[14, 15] Underlying each initiative has been the assumption that improving written and verbal handoff processes will ensure the availability of optimal patient information for on‐call housestaff. There has been little investigation, however, into what clinical questions are actually being asked of on‐call trainees, as well as what sources of information they are using to provide answers.

The aim of our study was to examine the extent to which written and verbal handoffs are utilized by pediatric trainees to derive answers to questions posed during overnight shifts. We also sought to describe both the frequency and types of on‐call questions being asked of trainees. Our primary outcome was trainee use of written handoffs to answer on‐call questions. Secondary outcomes included trainee use of verbal handoffs, as well as their use of alternative information resources to answer on‐call questions, including other clinical staff (ie, attending physicians, senior residents, nursing staff), patients and their families, the medical record, or the Internet. We then examined a variety of trainee, patient, and question characteristics to assess potential predictors of written and verbal handoff use.

METHODS

Institutional approval was granted to prospectively observe pediatric interns at the start of their overnight on‐call shifts on 2 inpatient wards at Boston Children's Hospital during 3 winter months (November through January). Our study was conducted during the postintervention period of a larger study that was designed to examine the effectiveness of a new resident handoff bundle on resident workflow and patient safety.[13] Interns rotating on study ward 1 used a structured, nonautomated tool (Microsoft Word version 2003; Microsoft Corp., Redmond, WA). Interns on study ward 2 used a handoff tool that was developed at the study hospital for use with the hospital's EMR, Cerner PowerChart version 2007.17 (Cerner Corp., Kansas City, MO). Interns on both wards received training on specific communication strategies, including verbal and written handoff processes.[13]

For our study, we recorded all questions being asked of on‐call interns by patients, parents, or other family members, as well as nurses or other clinical providers after completion of their evening handoff. We then directly observed all information resources used to derive answers to any questions asked pertaining to patients discussed in the evening handoff. We excluded any questions about new patient admissions or transfers, as well as nonpatient‐related questions.

Both study wards were staffed by separate day and night housestaff teams, who worked shifts of 12 to 14 hours in duration and had similar nursing schedules. The day team consisted of 3 interns and 1 senior resident per ward. The night team consisted of 1 intern on each ward, supervised by a senior resident covering both wards. Each day intern rotated for 1 week (Sunday through Thursday) during their month‐long ward rotation as part of the night team. We considered any intern on either of the 2 study wards to be eligible for enrollment in this study. Written consent was obtained from all participants.

The night intern received a verbal and written handoff at the shift change (usually performed between 5 and 7pm) from 1 of the departing day interns prior to the start of the observation period. This handoff was conducted face‐to‐face in a ward conference room typically with the on‐call night intern and supervising resident receiving the handoff together from the departing day intern/senior resident.

Observation Protocol

Data collection was conducted by an independent, board‐certified, pediatric physician observer on alternating weeknights immediately after the day‐to‐night evening handoff had taken place. A strict observation protocol was followed. When an eligible question was asked of the participating intern, the physician observer would record the question and the time. The question source, defined as a nurse, parent/patient, or other clinical staff (eg, pharmacist, consultant) was documented, as well as the mode of questioning, defined as face to face, text page, or phone call.

The observer would then note if and when the question was answered. Once the question was answered, the observer would ask the intern if he or she had used the written handoff to provide the answer (yes or no). Our primary outcome was reported use of the written handoff. In addition, the observer directly noted if the intern looked at the written handoff tool at any time when answering a question. The intern was also asked to name any and all additional information resources used, including verbal handoff, senior resident, nursing staff, other clinicians, a patient/parent or other family member, a patient's physical exam, the EMR, the Internet, or his or her own medical or clinical knowledge.

All question and answer information was tracked using a handheld digital timing device. In addition, the following patient data were recorded for each patient involved in a recorded question: the patient's admitting service, transfer status, and length of stay.

Data Categorization and Analysis

Recorded questions were categorized by content according to whether they involved: (1) medications (including drug allergies or levels), (2) diet or fluids, (3) laboratory values or diagnostic testing/procedures, (4) physical exam findings (eg, a distended abdomen, blood pressure, height/weight), or (5) general care‐plan questions. We also categorized the time used for generating an answer as immediate (<5 minutes), delayed (>5 minutes but <1.5 hours), or deferred (any question unanswered during the time of observation).
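To make the scheme concrete, the short sketch below shows one way a recorded question could be represented and assigned to these content and latency categories. It is an illustration only: the field names, keyword lists, and matching logic are hypothetical stand-ins for the study's coding process, which the text does not describe in implementation detail; only the <5-minute cutoff comes from the methods above.

```python
# Illustrative sketch only: one way to represent each recorded question and bin
# it into the content and latency categories described in the Methods. Field
# names and keyword lists are assumptions for this example, not the study's
# actual coding rules; only the <5-minute cutoff comes from the text.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional


@dataclass
class RecordedQuestion:
    text: str
    asked_at: datetime
    answered_at: Optional[datetime]  # None if unanswered during observation


# Hypothetical keyword lists; the first matching category wins, and anything
# else falls into the general care-plan category.
CATEGORY_KEYWORDS = {
    "medications": ["medication", "dose", "allergy", "drug level"],
    "diet_or_fluids": ["feeds", "npo", "fluid", "diet"],
    "labs_or_diagnostics": ["culture", "lab", "x-ray", "biopsy"],
    "physical_exam": ["blood pressure", "weight", "exam"],
}


def categorize_content(question: RecordedQuestion) -> str:
    text = question.text.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "general_care_plan"


def categorize_latency(question: RecordedQuestion) -> str:
    """Immediate (<5 min) or delayed for answered questions; deferred if unanswered."""
    if question.answered_at is None:
        return "deferred"
    latency = question.answered_at - question.asked_at
    return "immediate" if latency < timedelta(minutes=5) else "delayed"
```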

All data were entered into a database using SPSS 16.0 Data Builder software (SPSS Inc., Chicago, IL), and statistical analyses were performed with PASW 18 (SPSS Inc.) and SAS 9.2 (SAS Institute Inc., Cary, NC) software. Observed questions were summarized according to content categories. We also described trainee and patient characteristics relevant to the questions being studied. To study risk factors for written handoff use, the outcome was dichotomized as whether or not the intern reported using the written handoff as a resource to answer the question asked. We did not include observed use of the written handoff in these statistical analyses. To accommodate patient‐ or provider‐induced correlations among observed questions, we used a generalized estimating equations (GEE) approach (PROC GENMOD in SAS 9.2) to fit logistic regression models for written handoff use and permitted a nested correlation structure among the questions (ie, questions from the same patient were allowed to be correlated, and patients under the care of the same intern could have intern‐induced correlation). Univariate regression modeling was used to evaluate the effects of question, patient, and intern characteristics. Multivariate logistic regression models were used to identify independent risk factors for written handoff use. Any variable with a P value ≤0.1 in the univariate regression model was considered a candidate variable for the multivariate regression model. We then used a backward elimination approach to obtain the final model, which included only variables that remained significant at the P<0.05 level. Our analysis of verbal handoff use was carried out in a similar fashion.
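For readers who want to see the shape of such an analysis, the sketch below re-expresses a question-level GEE logistic regression in Python with statsmodels rather than the authors' SAS (PROC GENMOD) setup. It is an illustration only: the variable names (written_use, diet_question, consec_night, intern_id) and the input file are hypothetical, and clustering on intern with an exchangeable working correlation is a simplification of the nested questions-within-patients-within-interns structure described above.

```python
# Illustrative sketch only, not the authors' SAS analysis. Assumes a
# question-level table with one row per observed question and hypothetical
# columns: written_use (0/1), diet_question (0/1), consec_night (1-5),
# and intern_id (cluster identifier).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("questions.csv")  # hypothetical file name

# GEE logistic regression: questions are clustered, so an ordinary logistic
# model would understate the standard errors. Here clustering is on intern,
# with an exchangeable working correlation, as a simplification of the nested
# structure (questions within patients within interns) described in the paper.
model = smf.gee(
    "written_use ~ diet_question + consec_night",
    groups="intern_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())

# Report odds ratios and 95% confidence intervals, as in the article's results.
odds_ratios = np.exp(result.params)
conf_intervals = np.exp(result.conf_int())
print(pd.concat([odds_ratios.rename("OR"), conf_intervals], axis=1))
```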

RESULTS

Twenty‐eight observation nights (equivalent to 77 hours and 6 minutes of total direct observation time), consisting of 13 sessions on study ward 1 and 15 sessions on study ward 2, were completed. A total of 15 first‐year pediatric interns (5 male, 33%; 10 female, 66.7%), with a median age of 27.5 years (interquartile range [IQR]: 26-29 years), participated. Interns on the 2 study wards were comparable with regard to trainee week of service (P=0.43) and consecutive night of call at the time of observation (P=0.45). Each intern was observed for a mean of 2 sessions (range, 1-3 sessions), with a mean observation time per session of approximately 2 hours and 45 minutes (±23 minutes).

Questions

A total of 260 questions (ward 1: 136 questions, ward 2: 124 questions) met inclusion criteria and involved 101 different patients, with a median of 2 questions/patient (IQR: 1-3) and a range of 1 to 14 questions/patient. Overall, interns were asked 2.6 questions/hour (IQR: 1.4-4.7), with a range of 0 to 7 questions per hour; the great majority of questions (210 [82%]) were posed face to face. Types of questions recorded included medications 28% (73), diet/fluids 15% (39), laboratory or diagnostic/procedural related 22% (57), physical exam or other measurements 8.5% (22), and other general medical or patient care‐plan questions 26.5% (69) (Table 1). Examples of recorded questions are provided in Table 2.

Table 1. Patient, Question, and Answer Characteristics, No. (%)

Patients, n=101
  Admitting services
    General pediatrics: 49 (48)
    Pediatric subspecialty: 27 (27)
    CCS: 25 (25)
  Patients transferred from critical care unit
    Yes: 21 (21)
    No: 80 (79)
Questions, n=260
  Patients' length of stay at time of recorded question*
    ≤2 days: 142 (55)
    >2 days: 118 (45)
  Intern consecutive night shift (1-5)
    1st or 2nd night (early): 86 (33)
    3rd through 5th night (late): 174 (67)
  Intern week of service during a 4‐week rotation
    Weeks 1-2 (early): 119 (46)
    Weeks 3-4 (late): 141 (54)
  Question sources
    Clinical provider: 167 (64)
    Parent/patient or other family member: 93 (36)
  Question categories
    Medications: 73 (28)
    Diet and/or fluids: 39 (15)
    Labs or diagnostic imaging/procedures: 57 (22)
    Physical exam/vital signs/measurements: 22 (8.5)
    Other general medical or patient care plan questions: 69 (26.5)
Answers, n=233
  Resources reported
    Written sign‐out: 17 (7.3)
    Verbal sign‐out (excluding any written sign‐out use): 59 (25.3)
    Other resources: 157 (67.4)

NOTE: Abbreviations: CCS, complex care service. *Patients' inpatient length of stay means time (in days) between admission date and night of recorded question. Interns' week of service and consecutive night mean time (in weeks or days, respectively) between the intern's ward rotation start date and the night of observation. Clinical provider means nursing staff, referring pediatrician, pharmacist, or other clinical provider. Other resources include general medical/clinical knowledge, the electronic medical record, parents' report, other clinicians' report (ie, senior resident, nursing staff), and the Internet.
Table 2. Question Examples by Category

Medication questions (including medication allergy or drug level questions)
  "Could you clarify the Lasix orders?"
  "Pharmacy rejected the medication, what do you want to do?"
Dietary and fluid questions
  "Do you want to continue NG feeds at 10 mL/hr and advance?"
  "Is she going to need to be NPO for the biopsy in the AM?"
Laboratory or diagnostic tests/procedure questions
  "Do you want blood cultures on this patient?"
  "What was the result of her x‐ray?"
Physical exam questions (including height/weight or vital sign measurements)
  "What do you think of my back (site of biopsy)?"
  "Is my back okay, because it seems sore after the (renal) biopsy?"
Other (patient‐related) general medical or care plan questions
  "Did you talk with urology about their recommendations?"
  "Do you know the plan for tomorrow?"

NOTE: Abbreviations: AM, morning; NG, nasogastric; NPO, nothing by mouth.

Across the 2 study wards, 48% (49) of patients involved in questions were admitted to a general pediatric service; 27% (27) were admitted to a pediatric specialty service (including the genetics/metabolism, endocrinology, adolescent medicine, pulmonary, or toxicology admitting services); the remaining 25% (25) were admitted to a complex care service (CCS), specifically designed for patients with multisystem genetic, neurological, or congenital disorders (Table 1).[16, 17] Approximately 21% (21) of patients had been transferred to the floor from a critical care unit (Table 1).

Answers

Of the 260 recorded questions, 90% (233) had documented answers. For the 10% (27) of questions with undocumented answers, 21 were observed to be verbally deferred by the intern to the day team or another care provider (ie, other physician or nurse), and almost half (42.9% [9]) involved general care‐plan questions; the remainder involved medication (4), diet (2), diagnostic testing (5), or vital sign (1) questions. An additional 6 questions went unanswered during the observation period, and it is unknown if or when they were answered.

Of the answered questions, 90% (209) of answers were provided by trainees within 5 minutes and 9% (21) between 5 minutes and 1.5 hours. In all, interns reported using a single information resource to provide answers for 61% (142) of questions, 2 resources for 33% (76), and 3 or more resources for 6% (15).

Across both study wards, interns reported using information provided in written or verbal handoffs to answer 32.6% of questions. Interns reported using the written handoff, either alone or in combination with other information resources, to provide answers for 7.3% (17) of questions; verbal handoff, either alone or in combination with another resource (excluding written handoff), was reported as a resource for 25.3% (59) of questions. Of note, interns were directly observed to look at the written handoff when answering 21% (49) of questions.

A variety of other resources, including general medical/clinical knowledge, the EMR, and parents or other resources, were used to answer the remaining 67.4% (157) of questions. Intern general medical knowledge (ie, reports of simply knowing the answer to the question in their head[s]) was used to provide answers for 53.2% (124) of questions asked.

Unadjusted univariate regression analyses assessing predictors of written and verbal handoff use are shown in Figure 1. Multivariate logistic regression analyses showed that both dietary questions (odds ratio [OR]: 3.64, 95% confidence interval [CI]: 1.51-8.76; P=0.004) and interns' consecutive call night (OR: 0.29, 95% CI: 0.09-0.93; P=0.04) remained significant predictors of written handoff use. After adjusting for risk factors identified above, no differences in written handoff use were seen between the 2 wards.

Figure 1
Univariate predictors of written and verbal handoff use. Physical exam/measurement questions are not displayed in this graph as they were not associated with written or verbal handoff use. Abbreviations: CI, confidence interval; ICU, intensive care unit. *P < 0.05 = significant univariate predictor of written handoff use. **P < 0.05 = significant univariate predictor of verbal handoff use.

Multivariate logistic regression for predictors of verbal handoff use showed that questions regarding patients with longer lengths of stay (OR: 1.97, 95% CI: 1.02-3.8; P=0.04), those regarding general care plans (OR: 2.07, 95% CI: 1.13-3.78; P=0.02), and those asked by clinical staff (OR: 1.95, 95% CI: 1.04-3.66; P=0.04) remained significant predictors of reported verbal handoff use.

DISCUSSION

In light of the recent changes in duty hours implemented in July 2011, many pediatric training programs are having trainees work in day and night shifts.[18] Pediatric resident physicians frequently answer questions that pertain to patients handed off between day and night shifts. We found that on average, information provided in the verbal and written handoff was used almost once per hour. Housestaff in our study generally based their answers on information found in 1 or 2 resources, with almost one‐third of all questions involving some use of the written or verbal handoff. Prior research has documented widespread problems with resident handoff practices across programs and a high rate of medical errors due to miscommunications.[3, 4, 19, 20] Given how often information contained within the handoff was used as interns went about their nightly tasks, it is not difficult to understand how errors or omissions in the handoff process may potentially translate into frequent problems in direct patient care.

Trainees reported using written handoff tools to provide answers for 7.3% of questions. As we had suspected, they relied less frequently on their written handoffs as they completed more consecutive call nights. Interestingly, however, even when housestaff did not report using the written handoff, they were observed quite often to look at it before providing an answer. One explanation for this discrepancy between trainee reports and our observations is that the written handoff may serve as a memory tool, even if housestaff do not directly attribute their answers to its content. Our study also found that answers to questions concerning patients' diet and fluids were more likely to be ascribed to information contained in the written handoff. This finding supports the potential value of automated written handoff tools that are linked to the EMR, which can best ensure accuracy of this type of information.

Housestaff in our study also reported using information received during the verbal handoff to answer 1 out of every 4 on‐call questions. Although we did not specifically rate or monitor the quality of verbal handoffs, prior research has demonstrated that resident verbal handoff is often plagued with incomplete and inaccurate data.[3, 4, 19, 21] One investigation found that pediatric interns were prone to overestimating the effectiveness of their verbal handoffs, even as they failed to convey urgent information to their peers.[19] In light of such prior work, our finding that interns frequently rely on the verbal transfer of information supports specific residency training program handoff initiatives that target verbal exchanges.[11, 22, 23]

Although information obtained in the handoff was frequently required by on‐call housestaff, our study found that two‐thirds of all questions were answered using other resources, most often general medical or clinical knowledge. Clearly, background knowledge and experience are fundamental to trainees' ability to perform their jobs. Such reliance on general knowledge for problem solving may not be unique to interns. One recent observational study of senior pediatric cardiac subspecialists reported a high frequency of reliance on their own clinical experience, instinct, or prior training in making clinical decisions.[24] Further investigation may be useful to parse out the exact types of clinical knowledge being used, and may have important implications for how training programs plan for overnight supervision.[25, 26, 27]

Our study has several limitations. First, it was beyond the scope of this study to link housestaff answers to patient outcomes or medical errors. Given the frequency with which the handoff, a known source of vulnerability to medical error, was used by on‐call housestaff, our study suggests that future research evaluating the relationship between questions asked of on‐call housestaff, the answers provided, and downstream patient safety incidents may be merited. Second, our study was conducted in a single pediatric residency program, with 1 physician observer, midway through the interns' first year of training, and only in the early evening hours. This limits the generalizability of our findings, as the use of handoffs to answer on‐call questions may differ at other stages of the training process, within other specialties, or even at different times of the day. We also began our observations after the handoff had taken place; future studies may want to assess how variations in written and verbal handoff processes affect their use. As a final limitation, we note that although collecting information in real time using a direct observational method eliminated the problem of recall bias, there may have been attribution bias.

The results of our study demonstrate that on‐call pediatric housestaff are frequently asked a variety of clinical questions posed by hospital staff, patients, and their families. We found that trainees are apt to rely both on handoff information and other resources to provide answers. By better understanding what resources on‐call housestaff are accessing to answer questions overnight, we may be able to better target interventions needed to improve the availability of patient information, as well as the usefulness of written and verbal handoff tools.[11, 22, 23]

Acknowledgments

The authors thank Katharine Levinson, MD, and Melissa Atmadja, BA, for their help with the data review and guidance with database management. The authors also thank the housestaff from the Boston Combined Residency Program in Pediatrics for their participation in this study.

Disclosures: Maireade E. McSweeney, MD, as the responsible author certifies that all coauthors have seen and agree with the contents of this article, takes responsibility for the accuracy of these data, and certifies that this information is not under review by any other publication. All authors had no financial conflicts of interest or conflicts of interest relevant to this article to disclose. Dr. Landrigan is supported in part by the Children's Hospital Association for his work as an Executive Council member of the Pediatric Research in Inpatient Settings network. In addition, he has received honoraria from the Committee of Interns and Residents as well as multiple academic medical centers for lectures delivered on handoffs, sleep deprivation, and patient safety, and he has served as an expert witness in cases regarding patient safety and sleep deprivation.

References
  1. Improving America's hospitals: The Joint Commission's annual report on quality and safety. 2007. Available at: http://www.jointcommission.org/Improving_Americas_Hospitals_The_Joint_Commissions_Annual_Report_on_Quality_and_Safety_‐_2007. Accessed October 3, 2011.
  2. US Department of Health and Human Services, Office of Inspector General. Adverse events in hospitals: methods for identifying events. 2010. Available at: http://oig.hhs.gov/oei/reports/oei‐06‐08‐00221.pdf. Accessed October 3, 2011.
  3. Arora V, Johnson J, Lovinger D, Humphrey HJ, Meltzer DO. Communication failures in patient sign‐out and suggestions for improvement: a critical incident analysis. Qual Saf Health Care. 2005;14:401-407.
  4. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. Consequences of inadequate sign‐out for patient care. Arch Intern Med. 2008;168:1755-1760.
  5. Accreditation Council for Graduate Medical Education. Common program requirements. 2010. Available at: http://acgme‐2010standards.org/pdf/Common_Program_Requirements_07012011.pdf. Accessed January 25, 2011.
  6. Volpp KG, Landrigan CP. Building physician work hour regulations from first principles and best evidence. JAMA. 2008;300:1197-1199.
  7. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign‐out. J Hosp Med. 2006;1:257-266.
  8. Eaton EG, Horvath KD, Lober WB, Rossini AJ, Pellegrini CA. A randomized, controlled trial evaluating the impact of a computerized rounding and sign‐out system on continuity of care and resident work hours. J Am Coll Surg. 2005;200:538-545.
  9. Wayne J TR, Reinhardt G, Rooney D, Makoul G, Chopra S, DaRosa D. Simple standardized patient handoff system that increases accuracy and completeness. J Surg Educ. 2008;65:476-485.
  10. Li P, Ali S, Tang C, Ghali WA, Stelfox HT. Review of computerized physician handoff tools for improving the quality of patient care [published online ahead of print November 20, 2012]. J Hosp Med. doi: 10.1002/jhm.1988.
  11. Sectish TC, Starmer AJ, Landrigan CP, Spector ND. Establishing a multisite education and research project requires leadership, expertise, collaboration, and an important aim. Pediatrics. 2010;126:619-622.
  12. Farnan JM, Paro JA, Rodriguez RM, et al. Hand‐off education and evaluation: piloting the observed simulated hand‐off experience (OSHE). J Gen Intern Med. 2009;25:129-134.
  13. Starmer AJ, Spector ND, Srivastava R, Allen AD, Landrigan CP, Sectish TC. I‐pass, a mnemonic to standardize verbal handoffs. Pediatrics. 2012;129:201-204.
  14. Chu ES, Reid M, Schulz T, et al. A structured handoff program for interns. Acad Med. 2009;84:347-352.
  15. Nabors C, Peterson SJ, Lee WN, et al. Experience with faculty supervision of an electronic resident sign‐out system. Am J Med. 2010;123:376-381.
  16. Berry JG, Hall DE, Kuo DZ, et al. Hospital utilization and characteristics of patients experiencing recurrent readmissions within children's hospitals. JAMA. 2011;305:682-690.
  17. Simon TD, Berry J, Feudtner C, et al. Children with complex chronic conditions in inpatient hospital settings in the United States. Pediatrics. 2010;126:647-655.
  18. Chua KP, Gordon MB, Sectish T, Landrigan CP. Effects of a night‐team system on resident sleep and work hours. Pediatrics. 2011;128:1142-1147.
  19. Chang VY AV, Lev‐Ari S, D'Arcy M, Keysar B. Interns overestimate the effectiveness of their hand‐off communication. Pediatrics. 2010;125:491-496.
  20. McSweeney ME, Lightdale JR, Vinci RJ, Moses J. Patient handoffs: pediatric resident experiences and lessons learned. Clin Pediatr (Phila). 2011;50:57-63.
  21. Borowitz SM, Waggoner‐Fountain LA, Bass EJ, Sledd RM. Adequacy of information transferred at resident sign‐out (in‐hospital handover of care): a prospective survey. Qual Saf Health Care. 2008;17:6-10.
  22. Arora V, Johnson J. A model for building a standardized hand‐off protocol. Jt Comm J Qual Patient Saf. 2006;32:646-655.
  23. Horwitz LI, Moin T, Green ML. Development and implementation of an oral sign‐out skills curriculum. J Gen Intern Med. 2007;22:1470-1474.
  24. Darst JR, Newburger JW, Resch S, Rathod RH, Lock JE. Deciding without data. Congenit Heart Dis. 2010;5:339-342.
  25. Farnan JM, Petty LA, Georgitis E, et al. A systematic review: the effect of clinical supervision on patient and residency education outcomes. Acad Med. 2012;87:428-442.
  26. Haber LA, Lau CY, Sharpe BA, Arora VM, Farnan JM, Ranji SR. Effects of increased overnight supervision on resident education, decision‐making, and autonomy. J Hosp Med. 2012;7:606-610.
  27. Farnan JM, Burger A, Boonayasai RT, et al. Survey of overnight academic hospitalist supervision of trainees. J Hosp Med. 2012;7:521-523.
Journal of Hospital Medicine. 8(6):328-333.


METHODS

Institutional approval was granted to prospectively observe pediatric interns at the start of their overnight on‐call shifts on 2 inpatient wards at Boston Children's Hospital during 3 winter months (November through January). Our study was conducted during the postintervention period of a larger study that was designed to examine the effectiveness of a new resident handoff bundle on resident workflow and patient safety.[13] Interns rotating on study ward 1 used a structured, nonautomated tool (Microsoft Word version 2003; Microsoft Corp., Redmond, WA). Interns on study ward 2 used a handoff tool that was developed at the study hospital for use with the hospital's EMR, Cerner PowerChart version 2007.17 (Cerner Corp., Kansas City, MO). Interns on both wards received training on specific communication strategies, including verbal and written handoff processes.[13]

For our study, we recorded all questions being asked of on‐call interns by patients, parents, or other family members, as well as nurses or other clinical providers after completion of their evening handoff. We then directly observed all information resources used to derive answers to any questions asked pertaining to patients discussed in the evening handoff. We excluded any questions about new patient admissions or transfers, as well as nonpatient‐related questions.

Both study wards were staffed by separate day and night housestaff teams, who worked shifts of 12 to 14 hours in duration and had similar nursing schedules. The day team consisted of 3 interns and 1 senior resident per ward. The night team consisted of 1 intern on each ward, supervised by a senior resident covering both wards. Each day intern rotated for 1 week (Sunday through Thursday) during their month‐long ward rotation as part of the night team. We considered any intern on either of the 2 study wards to be eligible for enrollment in this study. Written consent was obtained from all participants.

The night intern received a verbal and written handoff at the shift change (usually performed between 5 and 7 pm) from 1 of the departing day interns prior to the start of the observation period. This handoff was conducted face‐to‐face in a ward conference room, typically with the on‐call night intern and supervising resident receiving the handoff together from the departing day intern/senior resident.

Observation Protocol

Data collection was conducted by an independent, board‐certified, pediatric physician observer on alternating weeknights immediately after the day‐to‐night evening handoff had taken place. A strict observation protocol was followed. When an eligible question was asked of the participating intern, the physician observer would record the question and the time. The question source, defined as a nurse, parent/patient, or other clinical staff (eg, pharmacist, consultant) was documented, as well as the mode of questioning, defined as face to face, text page, or phone call.

The observer would then note if and when the question was answered. Once the question was answered, the observer would ask the intern if he or she had used the written handoff to provide the answer (yes or no). Our primary outcome was reported use of the written handoff. In addition, the observer directly noted if the intern looked at the written handoff tool at any time when answering a question. The intern was also asked to name any and all additional information resources used, including verbal handoff, senior resident, nursing staff, other clinicians, a patient/parent or other family member, a patient's physical exam, the EMR, the Internet, or his or her own medical or clinical knowledge.

All question and answer information was tracked using a handheld digital timekeeping device. In addition, the following patient data were recorded for each patient involved in a recorded question: the patient's admitting service, transfer status, and length of stay.

Data Categorization and Analysis

Recorded questions were categorized by content according to whether they involved: (1) medications (including drug allergies or levels), (2) diet or fluids, (3) laboratory values or diagnostic testing/procedures, (4) physical exam findings (eg, a distended abdomen, blood pressure, height/weight), or (5) general care‐plan questions. We also categorized the time taken to generate an answer as immediate (<5 minutes), delayed (>5 minutes but <1.5 hours), or deferred (any question unanswered during the time of observation).
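
As a rough illustration (not part of the study's actual code), the time categories above can be expressed as a simple rule. The function below is a sketch; its names are hypothetical, and the handling of boundary cases (eg, an answer at exactly 5 minutes, or an answer documented after 1.5 hours) is an assumption the text does not specify.

```python
from datetime import timedelta

def categorize_answer_time(asked_at, answered_at):
    """Categorize time-to-answer using the study's definitions:
    immediate (<5 minutes), delayed (>5 minutes but <1.5 hours),
    deferred (unanswered during the observation period)."""
    if answered_at is None:
        return "deferred"
    elapsed = answered_at - asked_at
    if elapsed < timedelta(minutes=5):
        return "immediate"
    if elapsed < timedelta(hours=1, minutes=30):
        return "delayed"
    return "deferred"  # assumption: very late answers treated as deferred
```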

All data were entered into a database using SPSS 16.0 Data Builder software (SPSS Inc., Chicago, IL), and statistical analyses were performed with PASW 18 (SPSS Inc.) and SAS 9.2 (SAS Institute Inc., Cary, NC) software. Observed questions were summarized according to content categories. We also described trainee and patient characteristics relevant to the questions being studied. To study risk factors for written handoff use, the outcome was dichotomized as whether or not the intern reported using the written handoff as a resource to answer the question asked. We did not include observed use of the written handoff in these statistical analyses. To accommodate patient‐ or provider‐induced correlations among observed questions, we used a generalized estimating equations (GEE) approach (PROC GENMOD in SAS 9.2) to fit logistic regression models for written handoff use and permitted a nested correlation structure among the questions (ie, questions from the same patient were allowed to be correlated, and patients under the care of the same intern could have intern‐induced correlation). Univariate regression modeling was used to evaluate the effects of question, patient, and intern characteristics. Multivariate logistic regression models were used to identify independent risk factors for written handoff use. Any variable with a P value of 0.1 or less in the univariate regression model was considered a candidate variable for the multivariate regression model. We then used a backward elimination approach to obtain the final model, which included only variables that remained significant at the P<0.05 level. Our analysis of verbal handoff use was carried out in a similar fashion.
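
To make the modeling approach concrete, the sketch below fits an analogous GEE logistic regression in Python with statsmodels. It is not the authors' actual code: the data file and column names are hypothetical, and because statsmodels supports a single clustering level, questions are clustered on the intern as an approximation of the nested patient-within-intern structure fit in SAS.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per observed question.
# Assumed columns: used_written_handoff (0/1), question_category,
# length_of_stay_gt2d (0/1), consecutive_night_late (0/1), intern_id.
df = pd.read_csv("questions.csv")

# GEE logistic regression with an exchangeable working correlation,
# clustering questions on the intern (single-level approximation of the
# nested patient-within-intern correlation described in the text).
model = smf.gee(
    "used_written_handoff ~ C(question_category) + length_of_stay_gt2d + consecutive_night_late",
    groups="intern_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
```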

RESULTS

Twenty‐eight observation nights (equivalent to 77 hours and 6 minutes of total direct observation time), consisting of 13 sessions on study ward 1 and 15 sessions on study ward 2, were completed. A total of 15 first‐year pediatric interns (5 male, 33%; 10 female, 66.7%), with a median age of 27.5 years (interquartile range [IQR]: 26–29 years), participated. Interns on the 2 study wards were comparable with regard to trainee week of service (P=0.43) and consecutive night of call at the time of observation (P=0.45). Each intern was observed for a mean of 2 sessions (range, 1–3 sessions), with a mean observation time per session of approximately 2 hours and 45 minutes (±23 minutes).

Questions

A total of 260 questions (ward 1: 136 questions, ward 2: 124 questions) met inclusion criteria and involved 101 different patients, with a median of 2 questions/patient (IQR: 1–3) and a range of 1 to 14 questions/patient. Overall, interns were asked 2.6 questions/hour (IQR: 1.4–4.7), with a range of 0 to 7 questions per hour; the great majority of questions (210 [82%]) were posed face to face. Types of questions recorded included medications 28% (73), diet/fluids 15% (39), laboratory or diagnostic/procedural related 22% (57), physical exam or other measurements 8.5% (22), or other general medical or patient care‐plan questions 26.5% (69) (Table 1). Examples of recorded questions are provided in Table 2.

Patient, Question, and Answer Characteristics, No. (%)
  • NOTE: Abbreviations: CCS, complex care service. *Patients' inpatient length of stay means time (in days) between admission date and night of recorded question. Interns' week of service and consecutive night means time (in weeks or days, respectively) between interns' ward rotation start date and night of observation. Clinical provider means nursing staff, referring pediatrician, pharmacist, or other clinical provider. Other resources includes general medical/clinical knowledge, the electronic medical record, parents' report, other clinicians' report (ie, senior resident, nursing staff), and the Internet.

Patients, n=101
Admitting services
  General pediatrics: 49 (48)
  Pediatric subspecialty: 27 (27)
  CCS: 25 (25)
Patients transferred from critical care unit
  Yes: 21 (21)
  No: 80 (79)
Questions, n=260
Patients' length of stay at time of recorded question*
  ≤2 days: 142 (55)
  >2 days: 118 (45)
Intern consecutive night shift (1–5)
  1st or 2nd night (early): 86 (33)
  3rd through 5th night (late): 174 (67)
Intern week of service during a 4‐week rotation
  Weeks 1–2 (early): 119 (46)
  Weeks 3–4 (late): 141 (54)
Question sources
  Clinical provider: 167 (64)
  Parent/patient or other family member: 93 (36)
Question categories
  Medications: 73 (28)
  Diet and/or fluids: 39 (15)
  Labs or diagnostic imaging/procedures: 57 (22)
  Physical exam/vital signs/measurements: 22 (8.5)
  Other general medical or patient care plan questions: 69 (26.5)
Answers, n=233
Resources reported
  Written sign‐out: 17 (7.3)
  Verbal sign‐out (excluding any written sign‐out use): 59 (25.3)
  Other resources: 157 (67.4)
Question Examples by Category
  • NOTE: Abbreviations: AM, morning; NG, nasogastric; NPO, nothing by mouth.

Medication questions (including medication allergy or drug level questions)
  Could you clarify the Lasix orders?
  Pharmacy rejected the medication, what do you want to do?
Dietary and fluid questions
  Do you want to continue NG feeds at 10 mL/hr and advance?
  Is she going to need to be NPO for the biopsy in the AM?
Laboratory or diagnostic tests/procedure questions
  Do you want blood cultures on this patient?
  What was the result of her x‐ray?
Physical exam questions (including height/weight or vital sign measurements)
  What do you think of my back (site of biopsy)?
  Is my back okay, because it seems sore after the (renal) biopsy?
Other (patient related) general medical or care plan questions
  Did you talk with urology about their recommendations?
  Do you know the plan for tomorrow?

Across the 2 study wards, 48% (49) of patients involved in questions were admitted to a general pediatric service; 27% (27) were admitted to a pediatric specialty service (including the genetics/metabolism, endocrinology, adolescent medicine, pulmonary, or toxicology admitting services); the remaining 25% (25) were admitted to a complex care service (CCS), specifically designed for patients with multisystem genetic, neurological, or congenital disorders (Table 1).[16, 17] Approximately 21% (21) of patients had been transferred to the floor from a critical care unit (Table 1).

Answers

Of the 260 recorded questions, 90% (233) had documented answers. For the 10% (27) of questions with undocumented answers, 21 were observed to be verbally deferred by the intern to the day team or another care provider (ie, other physician or nurse), and almost half (42.9% [9]) involved general care‐plan questions; the remainder involved medication (4), diet (2), diagnostic testing (5), or vital sign (1) questions. An additional 6 questions went unanswered during the observation period, and it is unknown if or when they were answered.

Of the answered questions, 90% (209) of answers were provided by trainees within 5 minutes and 9% (21) within 1.5 hours. In all, interns reported using a single information resource to provide answers for 61% (142) of questions, 2 resources for 33% (76), and 3 resources for 6% (15).

Across both study wards, interns reported using information provided in written or verbal handoffs to answer 32.6% of questions. Interns reported using the written handoff, either alone or in combination with other information resources, to provide answers for 7.3% (17) of questions; verbal handoff, either alone or in combination with another resource (excluding written handoff), was reported as a resource for 25.3% (59) of questions. Of note, interns were directly observed to look at the written handoff when answering 21% (49) of questions.

A variety of other resources, including general medical/clinical knowledge, the EMR, and parents or other resources, were used to answer the remaining 67.4% (157) of questions. Intern general medical knowledge (ie, reports of simply knowing the answer to the question in their head[s]) was used to provide answers for 53.2% (124) of questions asked.

Unadjusted univariate regression analyses assessing predictors of written and verbal handoff use are shown in Figure 1. Multivariate logistic regression analyses showed that both dietary questions (odds ratio [OR]: 3.64, 95% confidence interval [CI]: 1.51–8.76; P=0.004) and interns' consecutive call night (OR: 0.29, 95% CI: 0.09–0.93; P=0.04) remained significant predictors of written handoff use. After adjusting for risk factors identified above, no differences in written handoff use were seen between the 2 wards.

Figure 1
Univariate predictors of written and verbal handoff use. Physical exam/measurement questions are not displayed in this graph as they were not associated with written or verbal handoff use. Abbreviations: CI, confidence interval; ICU, intensive care unit. *P < 0.05 = significant univariate predictor of written handoff use. **P < 0.05 = significant univariate predictor of verbal handoff use.

Multivariate logistic regression for predictors of verbal handoff use showed that questions regarding patients with longer lengths of stay (OR: 1.97, 95% CI: 1.02–3.8; P=0.04), those regarding general care plans (OR: 2.07, 95% CI: 1.13–3.78; P=0.02), as well as those asked by clinical staff (OR: 1.95, 95% CI: 1.04–3.66; P=0.04), remained significant predictors of reported verbal handoff use.

DISCUSSION

In light of the recent changes in duty hours implemented in July 2011, many pediatric training programs are having trainees work in day and night shifts.[18] Pediatric resident physicians frequently answer questions that pertain to patients handed off between day and night shifts. We found that on average, information provided in the verbal and written handoff was used almost once per hour. Housestaff in our study generally based their answers on information found in 1 or 2 resources, with almost one‐third of all questions involving some use of the written or verbal handoff. Prior research has documented widespread problems with resident handoff practices across programs and a high rate of medical errors due to miscommunications.[3, 4, 19, 20] Given how often information contained within the handoff was used as interns went about their nightly tasks, it is not difficult to understand how errors or omissions in the handoff process may potentially translate into frequent problems in direct patient care.

Trainees reported using written handoff tools to provide answers for 7.3% of questions. As we had suspected, they relied less frequently on their written handoffs as they completed more consecutive call nights. Interestingly, however, even when housestaff did not report using the written handoff, they were observed quite often to look at it before providing an answer. One explanation for this discrepancy between trainee reports and our observations is that the written handoff may serve as a memory tool, even if housestaff do not directly attribute their answers to its content. Our study also found that answers to questions concerning patients' diet and fluids were more likely to be ascribed to information contained in the written handoff. This finding supports the potential value of automated written handoff tools that are linked to the EMR, which can best ensure accuracy of this type of information.

Housestaff in our study also reported using information received during the verbal handoff to answer 1 out of every 4 on‐call questions. Although we did not specifically rate or monitor the quality of verbal handoffs, prior research has demonstrated that resident verbal handoff is often plagued with incomplete and inaccurate data.[3, 4, 19, 21] One investigation found that pediatric interns were prone to overestimating the effectiveness of their verbal handoffs, even as they failed to convey urgent information to their peers.[19] In light of such prior work, our finding that interns frequently rely on the verbal transfer of information supports specific residency training program handoff initiatives that target verbal exchanges.[11, 22, 23]

Although information obtained in the handoff was frequently required by on‐call housestaff, our study found that two‐thirds of all questions were answered using other resources, most often general medical or clinical knowledge. Clearly, background knowledge and experience are fundamental to trainees' ability to perform their jobs. Such reliance on general knowledge for problem solving may not be unique to interns. One recent observational study of senior pediatric cardiac subspecialists reported a high frequency of reliance on their own clinical experience, instinct, or prior training in making clinical decisions.[24] Further investigation may be useful to parse out the exact types of clinical knowledge being used, and may have important implications for how training programs plan for overnight supervision.[25, 26, 27]

Our study has several limitations. First, it was beyond the scope of this study to link housestaff answers to patient outcomes or medical errors. Given the frequency with which the handoff, a known source of vulnerability to medical error, was used by on‐call housestaff, our study suggests that future research evaluating the relationship between questions asked of on‐call housestaff, the answers provided, and downstream patient safety incidents may be merited. Second, our study was conducted in a single pediatric residency program with 1 physician observer midway through the first year of training and only in the early evening hours. This limits the generalizability of our findings, as the use of handoffs to answer on‐call questions may be different at other stages of the training process, within other specialties, or even at different times of the day. We also began our observations after the handoff had taken place; future studies may want to assess how variations in written and verbal handoff processes affect their use. As a final limitation, we note that although collecting information in real time using a direct observational method eliminated the problem of recall bias, there may have been attribution bias.

The results of our study demonstrate that on‐call pediatric housestaff are frequently asked a variety of clinical questions posed by hospital staff, patients, and their families. We found that trainees are apt to rely both on handoff information and other resources to provide answers. By better understanding what resources on‐call housestaff are accessing to answer questions overnight, we may be able to better target interventions needed to improve the availability of patient information, as well as the usefulness of written and verbal handoff tools.[11, 22, 23]

Acknowledgments

The authors thank Katharine Levinson, MD, and Melissa Atmadja, BA, for their help with the data review and guidance with database management. The authors also thank the housestaff from the Boston Combined Residency Program in Pediatrics for their participation in this study.

Disclosures: Maireade E. McSweeney, MD, as the responsible author certifies that all coauthors have seen and agree with the contents of this article, takes responsibility for the accuracy of these data, and certifies that this information is not under review by any other publication. All authors had no financial conflicts of interest or conflicts of interest relevant to this article to disclose. Dr. Landrigan is supported in part by the Children's Hospital Association for his work as an Executive Council member of the Pediatric Research in Inpatient Settings network. In addition, he has received honoraria from the Committee of Interns and Residents as well as multiple academic medical centers for lectures delivered on handoffs, sleep deprivation, and patient safety, and he has served as an expert witness in cases regarding patient safety and sleep deprivation.

References
  1. Improving America's hospitals: The Joint Commission's annual report on quality and safety. 2007. Available at: http://www.jointcommission.org/Improving_Americas_Hospitals_The_Joint_Commissions_Annual_Report_on_Quality_and_Safety_‐_2007. Accessed October 3, 2011.
  2. US Department of Health and Human Services, Office of Inspector General. Adverse events in hospitals: methods for identifying events. 2010. Available at: http://oig.hhs.gov/oei/reports/oei‐06‐08‐00221.pdf. Accessed October 3, 2011.
  3. Arora V, Johnson J, Lovinger D, Humphrey HJ, Meltzer DO. Communication failures in patient sign‐out and suggestions for improvement: a critical incident analysis. Qual Saf Health Care. 2005;14:401–407.
  4. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. Consequences of inadequate sign‐out for patient care. Arch Intern Med. 2008;168:1755–1760.
  5. Accreditation Council for Graduate Medical Education. Common program requirements. 2010. Available at: http://acgme‐2010standards.org/pdf/Common_Program_Requirements_07012011.pdf. Accessed January 25, 2011.
  6. Volpp KG, Landrigan CP. Building physician work hour regulations from first principles and best evidence. JAMA. 2008;300:1197–1199.
  7. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign‐out. J Hosp Med. 2006;1:257–266.
  8. Eaton EG, Horvath KD, Lober WB, Rossini AJ, Pellegrini CA. A randomized, controlled trial evaluating the impact of a computerized rounding and sign‐out system on continuity of care and resident work hours. J Am Coll Surg. 2005;200:538–545.
  9. Wayne J TR, Reinhardt G, Rooney D, Makoul G, Chopra S, DaRosa D. Simple standardized patient handoff system that increases accuracy and completeness. J Surg Educ. 2008;65:476–485.
  10. Li P, Ali S, Tang C, Ghali WA, Stelfox HT. Review of computerized physician handoff tools for improving the quality of patient care [published online ahead of print November 20, 2012]. J Hosp Med. doi: 10.1002/jhm.1988.
  11. Sectish TC, Starmer AJ, Landrigan CP, Spector ND. Establishing a multisite education and research project requires leadership, expertise, collaboration, and an important aim. Pediatrics. 2010;126:619–622.
  12. Farnan JM, Paro JA, Rodriguez RM, et al. Hand‐off education and evaluation: piloting the observed simulated hand‐off experience (OSHE). J Gen Intern Med. 2009;25:129–134.
  13. Starmer AJ, Spector ND, Srivastava R, Allen AD, Landrigan CP, Sectish TC. I‐PASS, a mnemonic to standardize verbal handoffs. Pediatrics. 2012;129:201–204.
  14. Chu ES, Reid M, Schulz T, et al. A structured handoff program for interns. Acad Med. 2009;84:347–352.
  15. Nabors C, Peterson SJ, Lee WN, et al. Experience with faculty supervision of an electronic resident sign‐out system. Am J Med. 2010;123:376–381.
  16. Berry JG, Hall DE, Kuo DZ, et al. Hospital utilization and characteristics of patients experiencing recurrent readmissions within children's hospitals. JAMA. 2011;305:682–690.
  17. Simon TD, Berry J, Feudtner C, et al. Children with complex chronic conditions in inpatient hospital settings in the United States. Pediatrics. 2010;126:647–655.
  18. Chua KP, Gordon MB, Sectish T, Landrigan CP. Effects of a night‐team system on resident sleep and work hours. Pediatrics. 2011;128:1142–1147.
  19. Chang VY, Arora VM, Lev‐Ari S, D'Arcy M, Keysar B. Interns overestimate the effectiveness of their hand‐off communication. Pediatrics. 2010;125:491–496.
  20. McSweeney ME, Lightdale JR, Vinci RJ, Moses J. Patient handoffs: pediatric resident experiences and lessons learned. Clin Pediatr (Phila). 2011;50:57–63.
  21. Borowitz SM, Waggoner‐Fountain LA, Bass EJ, Sledd RM. Adequacy of information transferred at resident sign‐out (in‐hospital handover of care): a prospective survey. Qual Saf Health Care. 2008;17:6–10.
  22. Arora V, Johnson J. A model for building a standardized hand‐off protocol. Jt Comm J Qual Patient Saf. 2006;32:646–655.
  23. Horwitz LI, Moin T, Green ML. Development and implementation of an oral sign‐out skills curriculum. J Gen Intern Med. 2007;22:1470–1474.
  24. Darst JR, Newburger JW, Resch S, Rathod RH, Lock JE. Deciding without data. Congenit Heart Dis. 2010;5:339–342.
  25. Farnan JM, Petty LA, Georgitis E, et al. A systematic review: the effect of clinical supervision on patient and residency education outcomes. Acad Med. 2012;87:428–442.
  26. Haber LA, Lau CY, Sharpe BA, Arora VM, Farnan JM, Ranji SR. Effects of increased overnight supervision on resident education, decision‐making, and autonomy. J Hosp Med. 2012;7:606–610.
  27. Farnan JM, Burger A, Boonayasai RT, et al. Survey of overnight academic hospitalist supervision of trainees. J Hosp Med. 2012;7:521–523.
Issue
Journal of Hospital Medicine - 8(6)
Page Number
328-333
Display Headline
Answering questions on call: Pediatric resident physicians' use of handoffs and other resources
Article Source
Copyright © 2013 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Maireade E. McSweeney, MD, Division of Gastroenterology and Nutrition, Boston Children's Hospital, Boston, MA 02115; Telephone: 617-355-7036; Fax: 617-730-0495; E-mail: maireade.mcsweeney@childrens.harvard.edu

Hospital Value‐Based Purchasing

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Hospital value‐based purchasing

The Centers for Medicare and Medicaid Services' (CMS) Hospital Inpatient Value‐Based Purchasing (VBP) Program, which was signed into law as part of the Patient Protection and Affordable Care Act of 2010, aims to incentivize inpatient providers to deliver high‐value, as opposed to high‐volume, healthcare.[1] Beginning on October 1, 2012, the start of the 2013 fiscal year (FY), hospitals participating in the VBP program became eligible for a variety of performance‐based incentive payments from CMS. These payments are based on an acute care hospital's ability to meet performance measurements in 6 care domains: (1) patient safety, (2) care coordination, (3) clinical processes and outcomes, (4) population or community health, (5) efficiency and cost reduction, and (6) patient‐ and caregiver‐centered experience.[2] The VBP program's ultimate purpose is to enable CMS to improve the health of Medicare beneficiaries by purchasing better care for them at a lower cost. These 3 characteristics of care (improved health, improved care, and lower costs) are the foundation of CMS' conception of value.[1, 2] They are closely related to an economic conception of value, which is the difference between an intervention's benefit and its cost.

Although the idea is not new in principle, the formal mandate that hospitals provide high‐value healthcare, enforced through financial incentives, marks an important change in Medicare and Medicaid policy. In this timely review of VBP, we first discuss the relevant historical changes in the reimbursement environment of US hospitals that have set the stage for VBP. We then describe the structure of CMS' VBP program, with a focus on which facilities are eligible to participate in the program, the specific outcomes measured and incentivized, how rewards and penalties are allocated, and how the program will be funded. In an effort to anticipate some of the issues that lie ahead, we then highlight a number of potential challenges to the success of VBP, and discuss how VBP will affect the delivery and reimbursement of inpatient care services. We conclude by examining how the VBP program is likely to evolve over time.

HISTORICAL CONTEXT FOR VBP

Over the last decade, CMS has embarked on a number of initiatives to incentivize the provision of higher‐quality and more cost‐effective care. For example, in 2003, CMS implemented a national pay‐for‐performance (P4P) pilot project called the Premier Hospital Quality Incentive Demonstration (HQID).[3, 4] HQID, which ran for 6 years, tracked and rewarded the performance of 216 hospitals in 6 healthcare service domains: (1) acute myocardial infarction (AMI), (2) congestive heart failure (CHF), (3) pneumonia, (4) coronary artery bypass graft surgery, (5) hip and knee replacement surgery, and (6) perioperative management of surgical patients (including prevention of surgical site infections).[4] CMS then introduced its Hospital Compare Web site in 2005 to facilitate public reporting of hospital‐level quality outcomes.[3, 5] This Web site provides the public with access to data on hospital performance across a wide array of measures of process quality, clinical outcomes, spending, and resource utilization.[5] Next, in October 2008, CMS stopped reimbursing hospitals for a number of costly and common hospital‐acquired complications, including hospital‐acquired bloodstream infections and urinary tract infections, patient falls, and pressure ulcers.[3, 6] VBP is the latest and most comprehensive step that CMS has taken in its decade‐long effort to shift from volume to value‐based compensation for inpatient care.

Although CMS appears fully invested in using performance incentives to increase healthcare value, existing evidence of the effects of P4P on patient outcomes remains quite mixed.[7] On one hand, an analysis of an inpatient P4P program sponsored by the United Kingdom's National Health Service (NHS) suggests that P4P may improve quality and save lives; indeed, hospitals that participated in the NHS P4P program significantly reduced inpatient mortality from pneumonia, saving an estimated 890 lives.[8] Additional empirical work suggests that the HQID was also associated with early improvements in healthcare quality.[9] However, a subsequent long‐term analysis found that participation in HQID had no discernible effect on 30‐day mortality rates.[10] Moreover, a meta‐analysis of P4P incentives for individual practitioners found few methodologically robust studies of P4P for clinicians and concluded that P4P's effects on individual practice patterns and outcomes remain largely uncertain.[11]

VBP: STRUCTURE AND DESIGN

This section reviews the structure of the VBP program. We describe current VBP eligibility criteria and sources of funding for the program, how hospitals participating in VBP are evaluated, and how VBP incentives for FY 2013 have been calculated.

Hospital Eligibility for VBP

All acute care hospitals in the United States (excluding Maryland) that are not psychiatric hospitals, rehabilitation hospitals, long‐term care facilities, children's hospitals, or cancer hospitals are eligible to participate in VBP in FY 2013 (full eligibility criteria are outlined in Table 1). For FY 2013, CMS chose to incentivize measures in just 2 care domains: (1) clinical processes of care and (2) patient experience of care. To be eligible for VBP in FY 2013, a hospital must report at least 10 cases each in at least 4 of 12 measures included in the clinical processes of care domain (Table 2), and/or must have at least 100 completed Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) surveys. Designed and validated by CMS, the HCAHPS survey provides hospitals with a standardized instrument for gathering information about patient satisfaction with, and perspectives on, their hospital care.[12] HCAHPS will be used to assess 8 patient experience of care measures (Table 3).

Inclusion and Exclusion Criteria for the Inpatient Value‐Based Purchasing Program in Fiscal Year 2013
  • NOTE: Abbreviations: HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems; HHS, US Department of Health and Human Services; VBP, Value‐Based Purchasing.

Inclusion criteria
Acute care hospital
Located in all 50 US states or District of Columbia (excluding Maryland)
Has at least 10 cases in at least 4 of 12 clinical process of care measures and/or at least 100 completed HCAHPS surveys
Exclusion criteria
Psychiatric, rehabilitation, long‐term care, children's or cancer hospital
Does not participate in Hospital Inpatient Quality Reporting Program during the VBP performance period
Cited by the Secretary of HHS for significant patient safety violations during performance period
Hospital does not meet minimum reporting requirements for number of cases, process measures, and surveys needed to participate in VBP
Clinical Process of Care Measures Evaluated by Value‐Based Purchasing in Fiscal Year 2013
Disease Process / Process of Care Measure
  • NOTE: Mortality measures to be added in fiscal year 2014: acute myocardial infarction, congestive heart failure, pneumonia.

Acute myocardial infarction: Fibrinolytic therapy received within 30 minutes of hospital arrival
  Primary percutaneous coronary intervention received within 90 minutes of hospital arrival
Heart failure: Discharge instructions provided
Pneumonia: Blood cultures performed in the emergency department prior to initial antibiotic received in hospital
  Initial antibiotic selection for community‐acquired pneumonia in immunocompetent patient
Healthcare‐associated infections: Prophylactic antibiotic received within 1 hour prior to surgical incision
  Prophylactic antibiotic selection for surgical patients
  Prophylactic antibiotics discontinued within 24 hours after surgery ends
  Cardiac surgery patients with controlled 6:00 am postoperative serum glucose
Surgeries: Surgery patients on β‐blocker prior to arrival that received β‐blocker during perioperative period
  Surgery patients with recommended venous thromboembolism prophylaxis ordered
  Surgery patients who received appropriate venous thromboembolism prophylaxis within 24 hours prior to surgery to 24 hours after surgery
Patient Experience of Care Measures Evaluated by Value‐Based Purchasing in Fiscal Year 2013
Communication with nurses
Communication with doctors
Responsiveness of hospital staff
Pain management
Communication about medicines
Cleanliness and quietness of hospital environment
Discharge information
Overall rating of hospital

Participation in the program is mandatory for eligible hospitals, and CMS estimates that more than 3000 facilities across the United States will participate in FY 2013. Roughly $850 million in VBP incentives will be paid out to these participating hospitals in FY 2013. The program is being financed through a 1% across‐the‐board reduction in FY 2013 diagnosis‐related group (DRG)‐based inpatient payments to participating hospitals. On December 20, 2012, CMS publicly announced FY 2013 VBP incentives for all participating hospitals. Each hospital's incentive is retroactive and based on its performance between July 1, 2011 and March 31, 2012.
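
To make the funding mechanism concrete, the arithmetic below uses purely hypothetical figures; only the 1% withhold rate comes from the program description.

```python
# Hypothetical hospital; only the 1% withhold rate is taken from the program description.
annual_drg_payments = 50_000_000        # Medicare DRG-based inpatient payments ($)
withhold = 0.01 * annual_drg_payments   # $500,000 redirected into the VBP incentive pool

# The hospital then earns incentive payments back out of the pool based on its
# total performance score; earning more than the withhold is a net gain, less is a net loss.
print(f"Contribution to VBP pool: ${withhold:,.0f}")
```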

All data used for calculating VBP incentives are reported to CMS through its Hospital Inpatient Quality Reporting (Hospital IQR) Program, a national program instituted in 2003 that rewards hospitals for reporting designated quality measures. As of 2007, approximately 95% of eligible US hospitals were using the Hospital IQR Program.[1] Measures evaluated via chart abstraction and surveys reflect a hospital's performance for its entire patient population, whereas measures assessed with claims data reflect hospital performance only for Medicare patients.

Evaluation of Hospitals

In FY 2013, hospital VBP incentive payments will be based entirely on performance in 2 domains: (1) clinical processes of care (weighted 70%) and (2) patient experience of care (weighted 30%). For each domain, CMS will evaluate each hospital's improvement over time as well as its achievement compared to other hospitals in the VBP program. By assessing and rewarding both achievement and improvement, CMS will ensure that lower‐performing hospitals will still be rewarded for making substantial improvements in quality. To evaluate the first metric, improvement over time, CMS will compare a hospital's performance during a given reporting period with its baseline performance 2 years prior to this block of time. A hospital receives improvement points for improving its performance over time. To assess the second metric, achievement compared to other hospitals in the VBP program, CMS will compare each hospital's performance during a reporting period with the baseline performance (eg, performance 2 years prior to the reporting period) of all other hospitals in the VBP program. A hospital is awarded achievement points if its performance exceeds the 50th percentile of all hospitals during the baseline performance period. Improvement scores range from 0 to 9, whereas achievement scores range from 0 to 10. The greater of a hospital's improvement and achievement scores on each VBP measure is used to calculate the hospital's total earned clinical care domain score and total earned HCAHPS base score. Hospitals that lack the baseline performance data required to assess improvement will be evaluated solely on the basis of achievement points.[1] The total earned clinical care domain score is multiplied by 70% to reach the clinical care domain's contribution to a hospital's total performance score.
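
A stripped-down sketch of this scoring logic is shown below. The point ranges (achievement 0–10, improvement 0–9) and the 70% domain weight come from the text; the per-measure point values and the normalization of earned points to a share of available points are illustrative simplifications, not CMS's exact formulas.

```python
def measure_score(achievement_points, improvement_points):
    """Per-measure score: the greater of achievement (0-10) and improvement (0-9)
    points; hospitals without baseline data are scored on achievement alone."""
    if improvement_points is None:        # no baseline performance data
        return achievement_points
    return max(achievement_points, improvement_points)

# Hypothetical points for one hospital across the 12 clinical process measures
per_measure = [measure_score(a, i) for a, i in [
    (7, 5), (9, None), (4, 8), (10, 9), (6, 6), (3, 7),
    (8, 2), (5, 9), (10, 4), (2, 6), (7, 7), (9, 3),
]]

# Express earned points as a share of the points available, then apply the 70% weight.
clinical_domain_score = sum(per_measure) / (10 * len(per_measure))
clinical_contribution = 0.70 * clinical_domain_score
print(round(clinical_domain_score, 3), round(clinical_contribution, 3))
```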

Each hospital's total patient experience domain score, or HCAHPS performance score, consists of 2 components: a total earned HCAHPS base score, as described above, and a consistency score. The consistency score evaluates the reliability of a hospital's performance across all 8 patient experience of care measures (Table 3). If a hospital is above the 50th percentile of all hospital scores during the baseline period on all 8 measures, then it receives 100% of its consistency points. If a hospital is at the 0 percentile for a given measure, then it receives 0 consistency points for all measures. This provision promotes consistency by harshly penalizing hospitals with extremely poor performance on any 1 specific measure. If 1 or more measures fall between the 0 and 50th percentiles, then the hospital receives a consistency score that takes into account how many measures were below the 50th percentile and their distance from this threshold. Each hospital's total HCAHPS performance score (the sum of total earned HCAHPS base points and consistency points) is then multiplied by 30% to arrive at the patient experience of care domain's contribution to a hospital's total performance score.
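
The consistency component can be sketched in the same spirit. The two boundary cases follow the text (full points when every HCAHPS dimension is at or above the 50th percentile of baseline scores, zero points when any dimension sits at the 0 percentile); the proportional rule used for intermediate cases is an assumption for illustration, not CMS's published formula.

```python
def consistency_fraction(percentiles):
    """Fraction of consistency points earned, given each of the 8 HCAHPS
    dimensions' percentile rank against baseline-period scores (0-100).
    The intermediate rule (scaling by the worst dimension's distance from
    the 50th percentile) is an assumed simplification."""
    worst = min(percentiles)
    if worst >= 50:
        return 1.0   # every dimension at or above the 50th percentile
    if worst <= 0:
        return 0.0   # any dimension at the 0 percentile forfeits all consistency points
    return worst / 50.0

# Example: 8 hypothetical HCAHPS dimension percentiles, one lagging at the 30th
print(consistency_fraction([72, 65, 58, 81, 30, 55, 90, 60]))  # 0.6 under this simplified rule
```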

Importantly, CMS excluded from its VBP initiative 10 clinical process measures reported in the Hospital IQR Program because they are topped out; that is, almost all hospitals already perform them at very high rates (Table 4). Examples of these topped out process measures include administration of aspirin to all patients with AMI on arrival at the hospital; counseling of patients with AMI, CHF, and pneumonia about smoking cessation; and prescribing angiotensin‐converting enzyme inhibitors or angiotensin receptor blockers to patients with CHF and left ventricular dysfunction.[1]

Topped Out Measures
Disease Process / Measure
  • NOTE: Abbreviations: ACEI, angiotensin‐converting enzyme inhibitor; ARB, angiotensin receptor blocker.

Acute myocardial infarction: Aspirin administered on arrival to the emergency department
  ACEI or ARB prescribed on discharge
  Patient counseled about smoking cessation
  β‐Blocker prescribed on discharge
  Aspirin prescribed at discharge
Heart failure: Patient counseled about smoking cessation
  Evaluation of left ventricular systolic function
  ACEI or ARB prescribed for left ventricular systolic dysfunction
Pneumonia: Patient counseled about smoking cessation
Surgical Care Improvement Project: Surgery patients with appropriate hair removal

Calculation of VBP Incentives and Public Reporting

A hospital's total performance score for FY 2013 is equal to the sum of 70% of its clinical care domain score and 30% of its total HCAHPS performance score. This total performance score is entered into a linear mathematical formula to calculate each hospital's incentive payment. CMS projects that VBP will lead to a net increase in Medicare payments for one‐half of hospitals and a net decrease in payments for the other half of participating facilities.[1]
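
Putting the pieces together, a hospital's FY 2013 total performance score and the resulting payment adjustment can be sketched as follows. The 70/30 weighting and the 1% withhold are taken from the text; the linear exchange function shown (break-even at a score of 0.5) is a hypothetical stand-in, since CMS sets the actual slope separately to keep the program budget neutral.

```python
def total_performance_score(clinical_domain_score, hcahps_performance_score):
    """FY 2013 weighting: 70% clinical process of care, 30% patient experience.
    Both inputs are expressed as fractions of available points (0.0-1.0)."""
    return 0.70 * clinical_domain_score + 0.30 * hcahps_performance_score

def payment_multiplier(tps, withhold=0.01, breakeven=0.5):
    """Hypothetical linear exchange function: the 1% withhold is returned in
    proportion to the total performance score, with break-even at `breakeven`."""
    return 1.0 - withhold + withhold * (tps / breakeven)

tps = total_performance_score(0.62, 0.48)        # hypothetical domain scores
print(round(tps, 3), round(payment_multiplier(tps), 5))
# A score above 0.5 yields a net bonus in this sketch; below 0.5, a net penalty.
```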

In December 2012, CMS publicly disclosed information about the initial performance of each hospital in the VBP program. Reported information included: (1) hospital performance on each applicable performance measure, (2) hospital performance by disease condition or procedure, and (3) the hospital's total performance score. Initial analyses of these performance data revealed that 1557 hospitals will receive bonus payments under VBP in FY 2013, whereas 1427 hospitals will lose money under this program. Treasure Valley Hospital, a 10‐bed physician‐owned hospital in Boise, Idaho, will receive a 0.83% increase in Medicare payments, the largest payment increase under VBP in 2013. Conversely, Auburn Community Hospital in upstate New York will suffer the most severe payment reduction: 0.9% per Medicare admission. The penalty will cost Auburn Community Hospital about $100,000, which is slightly more than 0.1% of its yearly $85 million operating budget.[13] For almost two‐thirds of participating hospitals, FY 2013 Medicare payments will change by <0.25%.[13] Additional information about VBP payments for FY 2013, including the number of hospitals that received VBP incentives and the size and range of these payments, is now accessible to the public through CMS' Hospital Compare Web site (http://www.hospitalcompare.hhs.gov).

CHALLENGES OF VBP

As the Medicare VBP program evolves, and hospitals confront ever‐larger financial incentives to deliver high‐value as opposed to high‐volume care, it will be important to recognize limitations of the VBP program as they arise. Here we briefly discuss several conceptual and implementation challenges that physicians and policymakers should consider when assessing the merits of VBP in promoting high‐quality healthcare.

Rigorous and Continuous Evaluation of VBP Programs

The main premise of using VBP to incentivize hospitals to deliver high‐quality cost‐effective care is that the process measures used to determine hospital quality do impact patient outcomes. However, it is already well established that improvements in measures of process quality are not always associated with improvements in patient outcomes.[14, 15, 16] Moreover, incentivizing specific process measures encourages hospitals to shift resources away from other aspects of care delivery, which may have ambiguous, or even deleterious, effects on patient outcomes. Although incentives ideally push hospitals to shift resources away from low‐quality care toward high‐quality care, in practice this is not always the case. Hospital resources may instead be drawn away from areas that are not yet incented by VBP, but for which improvements in quality of care are desperately needed. The same empirical focus behind using VBP to incentivize hospitals to improve patient outcomes efficiently should be used to evaluate whether VBP is continually meeting its stated goals: reducing overall patient morbidity and mortality and improving patient satisfaction at ideally lower cost. The experience of the US education system with public policies designed to improve student testing performance may serve as a cautionary example here. Such policies, which provide financial rewards to schools whose students perform well on standardized tests, can indeed raise testing performance. However, these policies also lead educators to teach to the test, and to neglect important topics that are not tested on standardized exams.[17]

Prioritization of Process Measures

As payment incentives for VBP currently stand, process measures are weighted equally regardless of the clinical benefits they generate and the resources required to achieve improvements in process quality. For instance, 2 process measures, continuing home β‐blocker medications for patients with coronary artery disease undergoing surgery and early percutaneous coronary intervention for patients with AMI, may be weighted equally as process measures even though both their clinical benefits and their costs of implementation are very different. Some hospitals responding to VBP incentives may choose to invest in areas where their ability to earn VBP incentive payments is high and the costs of improvement are low, even though those areas may not be where interventions are most needed, that is, where clinical outcomes could be most improved. Recognizing that process measures have heterogeneous benefits and costs of implementation is important when prioritizing their reimbursement in VBP.

Measuring Improvements in Hospital Quality

Tying hospital financial compensation to hospital quality implies that measures of hospital quality should be robust. To incentivize hospitals to improve quality not only relative to other hospitals but to themselves in the past, the VBP program has established a baseline performance for each hospital. Each hospital is compared to its baseline performance in subsequent evaluation periods. Thus, properly measuring a hospital's baseline performance is important. During a given baseline period, some hospitals may have better or worse outcomes than their steady state due to random variation alone. Some hospitals deemed to have a low baseline will experience improvements in quality that are not related to active efforts to improve quality but through chance alone. Similarly, some hospitals deemed to have a high baseline will experience reductions in quality through chance. Of course, neither of these changes should be subject to differences in reimbursement because they do not reflect actual organizational changes made by the hospitals. The VBP program has made significant efforts to address this issue by requiring participating hospitals to have a large enough sample of cases such that estimated rates of process quality adherence meet a reliability threshold (ie, are likely to be consistent over time rather than vary substantially through chance alone). However, not all process measures exhibit high reliability, particularly those for which adverse events are rare (eg, foreign objects retained after surgery, air embolisms, and blood incompatibility). Ultimately, CMS's decision to balance the need for statistically reliable data with the goal of including as many hospitals as possible in the VBP program will require ongoing reevaluation of this issue.
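
The role of chance in baseline measurement can be illustrated with a small simulation (not drawn from the article or from CMS data): even when a hospital's true adherence rate never changes, small per-period samples produce apparent swings between baseline and follow-up. All figures below are hypothetical.

```python
import random

random.seed(0)
TRUE_RATE = 0.85      # hypothetical constant underlying adherence rate
N_CASES = 30          # small per-period sample, near a plausible reporting minimum
N_HOSPITALS = 1000    # simulated hospitals, all with the same true rate

apparent_changes = []
for _ in range(N_HOSPITALS):
    baseline = sum(random.random() < TRUE_RATE for _ in range(N_CASES)) / N_CASES
    followup = sum(random.random() < TRUE_RATE for _ in range(N_CASES)) / N_CASES
    apparent_changes.append(followup - baseline)

# Share of hospitals whose measured rate "moves" by 10 percentage points or more
big_swings = sum(abs(d) >= 0.10 for d in apparent_changes) / N_HOSPITALS
print(f"Apparent change of >=10 points by chance alone: {big_swings:.0%} of hospitals")
```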

Choosing Hospital Comparators Appropriately

In the current VBP program, hospitals will be evaluated in part by how they compare to hospitals nationally. However, studies of regional variation in healthcare have demonstrated large variations in practice patterns across the United States,[18, 19, 20] raising the question of whether hospitals should, at least initially, be compared to hospitals in the same geographic area. Although the ultimate goal of VBP should be to hold hospitals to a national standard, local practice patterns are not easily modified within 1‐ to 2‐year timeframes. Initially comparing hospitals to a national rather than local standard may unfairly penalize hospitals that are relative underperformers nationally but overperformers regionally. Although CMS's policy to reward improvement within hospitals over time mitigates issues arising from a cross‐sectional comparison of hospitals, the issue still remains if many hospitals within a region not only underperform relative to other hospitals nationally but also fail to demonstrate improvement. More broadly, this issue extends to differences across hospitals in factors that impact their ability to meet VBP goals. These factors may include, for example, hospital size, profitability, patient case and insurance mix, and presence of an electronic medical record. Comparing hospitals with vastly different abilities to achieve VBP goals and improve quickly may amount to inequitable policy.

Continual Evaluation of Topped‐Out Measures

Process measures that are met at high rates at nearly all hospitals are not used in evaluations by CMS for VBP. An assumption underlying CMS' decision to not reward hospitals for achieving these topped‐out measures is that once physicians and hospitals make cognitive and system‐level improvements that improve process quality, these gains will persist after the incentive is removed. Thus, CMS hopes and anticipates that although performance incentives will make it easier for well‐meaning physicians to learn to do the right thing, doctors will continue to do the right things for patients after these incentives are removed.[21, 22] Although this assumption may generally be accurate, it is important to continue to evaluate whether measures that are currently topped out continue to remain adequately performed, because rewarding new quality measures will necessarily lead hospitals to reallocate resources away from other clinical activities. Although we hope that the continued public reporting of topped‐out measures will prevent declines in performance on these measures, policy makers and clinicians should be aware that the lack of financial incentives for topped‐out measures may result in declines in quality. To this point, an analysis of 35 Kaiser Permanente facilities from 1997 to 2007 demonstrated that the removal of financial incentives for diabetic retinopathy and cervical cancer screening was associated with subsequent declines in performance of 3% and 1.6% per year, respectively.[23]

Will VBP Incentives Be Large Enough to Change Practice Patterns?

The VBP Program's ability to influence change depends, at least in part, on how the incentives offered under this program compare to the magnitude of the investments that hospitals must make to achieve a given reward. In general, larger incentives are necessary to motivate more significant changes in behavior or to influence organizations to invest the resources needed to achieve change. The incentives offered under VBP in FY 2013 are quite modest. Almost two‐thirds of participating hospitals will see their FY 2013 Medicare revenues change by <0.25%, roughly $125,000 at most.[13, 24] Although these incentives may motivate hospitals that can improve performance and achievement with very modest investments, they may have little impact on organizations that need to make significant upfront investments in care processes to achieve sustainable improvements in care quality. As CMS increases the size of VBP incentives over the next 2 to 4 years, it will also hold hospitals accountable for a broader and increasingly complex set of outcomes. Improving these outcomes may require investments in areas such as information technology and process improvement that far surpass the VBP incentive reward.
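
A back-of-the-envelope calculation makes the scale of the problem concrete. Both dollar figures below are assumptions chosen only to roughly match the magnitudes cited above; they do not describe any specific hospital.

```python
# Back-of-the-envelope only; both dollar figures below are assumptions.
medicare_inpatient_revenue = 50_000_000   # hypothetical annual Medicare DRG-based revenue
max_swing = 0.0025                        # ~0.25% payment change seen by most hospitals in FY 2013

potential_incentive = medicare_inpatient_revenue * max_swing
print(f"Potential FY 2013 payment swing: ${potential_incentive:,.0f}")   # about $125,000

# Compare against a hypothetical one-time investment in IT and process redesign.
upfront_investment = 1_500_000
print(f"Years of maximum bonus needed to recoup it: {upfront_investment / potential_incentive:.0f}")
```

Even at the maximum swing most hospitals faced in FY 2013, it could take a decade or more of bonuses to recoup a modest one-time investment in information technology or process redesign.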

Moreover, prior research suggests that financial incentives like those available under VBP may contribute only modestly to performance improvements when public reporting already exists. For example, in a 2‐year study of 613 US hospitals implementing either pay for performance plus public reporting or public reporting alone, the addition of pay for performance was associated with only a 2.6% to 4.1% greater improvement in a composite measure of quality compared with public reporting alone.[9] Similarly, a study of 54 hospitals participating in the CMS pay‐for‐performance pilot initiative found no significant improvement in quality of care or outcomes for AMI when compared to 446 control hospitals.[25] A long‐term analysis of pay for performance in the Medicare Premier Hospital Quality Incentive Demonstration found that participation in the program had no discernible effect on 30‐day mortality rates.[10] Finally, a study of physician medical groups contracting with a large network health maintenance organization found that implementation of pay for performance did not produce meaningful before‐and‐after improvements in clinical quality relative to a control group of medical groups.[26]

High‐Value Care Is Not Always Low‐Cost Care

Not surprisingly, the clinical process measures included in CMS' hospital VBP program evaluate a select and relatively small group of high‐value, low‐cost interventions (eg, appropriate administration of antibiotics and tight control of serum glucose in surgical patients). However, an important body of work has demonstrated that high‐cost care (eg, intensive inpatient hospital care for common acute medical conditions) may also be highly valuable in terms of improving survival.[20, 27, 28, 29, 30] As the hospital VBP program evolves, its overseers will need to consider whether to include additional incentives for high‐value, high‐cost healthcare services. Such considerations will likely become increasingly salient as healthcare delivery organizations move toward capitated delivery models. In particular, the VBP program's Medicare Spending Per Beneficiary measure, which quantifies inpatient and subsequent outpatient spending per beneficiary for a given hospitalization episode, will need to distinguish between higher‐spending hospitals that provide highly effective care (eg, care that reduces mortality and readmissions) and those that provide less effective care.
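
The distinction the Medicare Spending Per Beneficiary measure would need to make can be sketched as a simple two-way classification. The hospital names, spending ratios, and mortality ratios below are invented purely for illustration.

```python
# Illustrative classification only; hospital names and ratios are invented.
# spending_ratio: spending per beneficiary relative to a national benchmark
# mortality_ratio: observed vs expected 30-day mortality (lower is better)
hospitals = [
    ("Hospital A", 1.15, 0.90),
    ("Hospital B", 1.15, 1.10),
    ("Hospital C", 0.90, 0.95),
]

for name, spending_ratio, mortality_ratio in hospitals:
    spending = "higher-spending" if spending_ratio > 1 else "lower-spending"
    effectiveness = "more effective than expected" if mortality_ratio < 1 else "less effective than expected"
    print(f"{name}: {spending}, {effectiveness}")
```

A spending measure that treats Hospital A and Hospital B identically would miss exactly the distinction between effective and ineffective high-cost care described above.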

FUTURE OF VBP

Although the future of VBP is unknown, CMS is likely to modify the program in a number of ways over the next 3 to 5 years. First, CMS will likely expand the breadth and focus of incentivized measures in the VBP program. In FY 2014, for example, CMS is adding three 30‐day mortality outcome measures to VBP: 30‐day risk‐adjusted mortality for AMI, CHF, and pneumonia.[1] A hospital's performance on these outcomes will represent 25% of its total performance score in 2014, whereas the clinical process of care and patient experience of care domains will account for 45% and 30% of this score, respectively. In 2015, patient experience and outcome measures will each account for 30% of a hospital's performance score, whereas process and efficiency measures will each account for 20%. The composition of this performance score reflects a shift away from rewarding process‐based measures and toward incentivizing measures of clinical outcomes and patient satisfaction, the latter of which may be highly subjective and more representative of a hospital's catchment population than of the care the hospital provides.[31] Additional measures in the domains of patient safety, care coordination, population and community health, emergency room wait times, and cost control may be added to the VBP program in FY 2015 to FY 2017. Furthermore, CMS will continue to reevaluate the appropriateness of measures already included in VBP and will stop incentivizing measures that have become topped out or are no longer supported by the National Quality Forum.[1, 13]
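
For concreteness, the sketch below combines hypothetical domain scores using the FY 2014 weights described above (45% clinical process, 30% patient experience, 25% outcomes). The domain scores themselves are invented; only the weights come from the text.

```python
# Hypothetical domain scores (0-100); the weights are the FY 2014 proportions cited above.
domain_scores = {"clinical_process": 80.0, "patient_experience": 65.0, "outcomes": 70.0}
fy2014_weights = {"clinical_process": 0.45, "patient_experience": 0.30, "outcomes": 0.25}

total_performance_score = sum(domain_scores[d] * w for d, w in fy2014_weights.items())
print(f"FY 2014 total performance score: {total_performance_score:.1f}")  # 73.0
```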

Second, CMS has established a gradual annual increase of 0.25 percentage points in the share of each hospital's inpatient DRG‐based payment that is at stake under VBP. In FY 2014, for example, participating hospitals will be required to contribute 1.25% of inpatient DRG payments to the VBP program, and this percentage is likely to reach 2% or more by 2017.[1, 32]
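
The sketch below translates that schedule into dollars at risk for a hypothetical hospital. The FY 2015 and FY 2016 percentages are extrapolated from the 0.25-point annual increase described above, and the revenue figure is an assumption used only for illustration.

```python
# FY 2015 and FY 2016 percentages are extrapolated from the 0.25-point annual increase;
# the revenue figure is an assumption.
medicare_drg_revenue = 50_000_000
withhold_pct = {2013: 0.0100, 2014: 0.0125, 2015: 0.0150, 2016: 0.0175, 2017: 0.0200}

for year, pct in withhold_pct.items():
    print(f"FY {year}: {pct:.2%} withheld = ${medicare_drg_revenue * pct:,.0f} at stake")
```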

Third, expansions of the VBP program complement a number of other quality improvement efforts overseen by CMS, including the Hospital Readmissions Reduction Program. Effective for discharges beginning on October 1, 2012, hospitals with excess readmissions for AMI, CHF, and pneumonia are at risk for reimbursement reductions for all Medicare admissions in proportion to the rate of excess rehospitalizations. Some of the concerns raised above about the hospital VBP program apply to this program as well: whether readmission penalties will be large enough to change hospital behavior, whether readmissions are even preventable,[33, 34] and whether changes in hospital‐level policies can reduce readmissions, which are known to be heavily influenced by patient economic and social factors outside a hospital's control.[35, 36] Despite the limitations of VBP and the challenges that lie ahead, there is optimism that rewarding hospitals that provide high‐value rather than high‐volume care will not only improve outcomes of hospitalized patients in the United States but may also do so at lower cost. Encouraging hospitals to improve their quality of care may also have important spillover effects on other healthcare domains. For example, hospitals that adopt systems to ensure prompt delivery of antibiotics to patients with pneumonia may also see more prompt antibiotic management of other acute infectious illnesses that are not covered by VBP. VBP may likewise have spillover effects on medical malpractice liability and defensive medicine: financial incentives to practice higher‐quality, evidence‐based care may reduce both.

The government's ultimate goal in implementing VBP is to identify a broad and clinically relevant set of outcome measures that can be used to incentivize hospitals to deliver high‐quality as opposed to high‐volume healthcare. The first wave of outcome measures has already been instituted. It remains to be seen whether the incentive rewards of Medicare's hospital VBP program will be large enough that hospitals feel compelled to improve and compete for them.

References
  1. Centers for Medicare and Medicaid Services. Hospital Value-Based Purchasing Web site. 2013. Available at: http://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/hospital-value-based-purchasing/index.html. Accessed March 4, 2013.
  2. VanLare JM, Conway PH. Value-based purchasing—national programs to move from volume to value. N Engl J Med. 2012;367:292-295.
  3. Joynt KE, Rosenthal MB. Hospital value-based purchasing: will Medicare's new policy exacerbate disparities? Circ Cardiovasc Qual Outcomes. 2012;5:148-149.
  4. Centers for Medicare and Medicaid Services. CMS/Premier Hospital Quality Incentive Demonstration (HQID). 2013. Available at: https://www.premierinc.com/quality-safety/tools-services/p4p/hqi/faqs.jsp. Accessed March 5, 2013.
  5. Centers for Medicare and Medicaid Services. Hospital Compare Web site. 2013. Available at: http://www.medicare.gov/hospitalcompare. Accessed March 4, 2013.
  6. Brown J, Doloresco F, Mylotte JM. "Never events": not every hospital-acquired infection is preventable. Clin Infect Dis. 2009;49:743-746.
  7. Epstein AM. Will pay for performance improve quality of care? The answer is in the details. N Engl J Med. 2012;367:1852-1853.
  8. Sutton M, Nikolova S, Boaden R, Lester H, McDonald R, Roland M. Reduced mortality with hospital pay for performance in England. N Engl J Med. 2012;367:1821-1828.
  9. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356:486-496.
  10. Jha AK, Joynt KE, Orav EJ, Epstein AM. The long-term effect of Premier pay for performance on patient outcomes. N Engl J Med. 2012;366:1606-1615.
  11. Houle SK, McAlister FA, Jackevicius CA, Chuck AW, Tsuyuki RT. Does performance-based remuneration for individual health care practitioners affect patient care? A systematic review. Ann Intern Med. 2012;157:889-899.
  12. Centers for Medicare and Medicaid Services. Hospital Consumer Assessment of Healthcare Providers and Systems Web site. 2013. Available at: http://www.hcahpsonline.org. Accessed March 5, 2013.
  13. Rau J. Medicare discloses hospitals' bonuses, penalties based on quality. Kaiser Health News. December 20, 2012. Available at: http://www.kaiserhealthnews.org/stories/2012/december/21/medicare-hospitals-value-based-purchasing.aspx?referrer=search. Accessed March 26, 2013.
  14. Yasaitis L, Fisher ES, Skinner JS, Chandra A. Hospital quality and intensity of spending: is there an association? Health Aff (Millwood). 2009;28:w566-w572.
  15. Fonarow GC, Abraham WT, Albert NM, et al. Association between performance measures and clinical outcomes for patients hospitalized with heart failure. JAMA. 2007;297:61-70.
  16. Rubin HR, Pronovost P, Diette GB. The advantages and disadvantages of process-based measures of health care quality. Int J Qual Health Care. 2001;13:469-474.
  17. Jacob BA. Accountability, incentives and behavior: the impact of high-stakes testing in the Chicago public schools. J Public Econ. 2005;89:761-796.
  18. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med. 2003;138:273-287.
  19. Fisher ES. Medical care—is more always better? N Engl J Med. 2003;349:1665-1667.
  20. Romley JA, Jena AB, Goldman DP. Hospital spending and inpatient mortality: evidence from California: an observational study. Ann Intern Med. 2011;154:160-167.
  21. James BC. Making it easy to do it right. N Engl J Med. 2001;345:991-993.
  22. Christensen RD, Henry E, Ilstrup S, Baer VL. A high rate of compliance with neonatal intensive care unit transfusion guidelines persists even after a program to improve transfusion guideline compliance ended. Transfusion. 2011;51:2519-2520.
  23. Lester H, Schmittdiel J, Selby J, et al. The impact of removing financial incentives from clinical quality indicators: longitudinal analysis of four Kaiser Permanente indicators. BMJ. 2010;340:c1898.
  24. Werner RM, Dudley RA. Medicare's new hospital value-based purchasing program is likely to have only a small impact on hospital payments. Health Aff (Millwood). 2012;31:1932-1940.
  25. Glickman SW, Ou FS, DeLong ER, et al. Pay for performance, quality of care, and outcomes in acute myocardial infarction. JAMA. 2007;297:2373-2380.
  26. Mullen KJ, Frank RG, Rosenthal MB. Can you get what you pay for? Pay-for-performance and the quality of healthcare providers. Rand J Econ. 2010;41:64-91.
  27. Romley JA, Jena AB, O'Leary JF, Goldman DP. Spending and mortality in US acute care hospitals. Am J Manag Care. 2013;19:e46-e54.
  28. Barnato AE, Farrell MH, Chang CC, Lave JR, Roberts MS, Angus DC. Development and validation of hospital "end-of-life" treatment intensity measures. Med Care. 2009;47:1098-1105.
  29. Ong MK, Mangione CM, Romano PS, et al. Looking forward, looking back: assessing variations in hospital resource use and outcomes for elderly patients with heart failure. Circ Cardiovasc Qual Outcomes. 2009;2:548-557.
  30. Stukel TA, Fisher ES, Alter DA, et al. Association of hospital spending intensity with mortality and readmission rates in Ontario hospitals. JAMA. 2012;307:1037-1045.
  31. Young GJ, Meterko M, Desai KR. Patient satisfaction with hospital care: effects of demographic and institutional characteristics. Med Care. 2000;38:325-334.
  32. VanLare JM, Blum JD, Conway PH. Linking performance with payment: implementing the Physician Value-Based Payment Modifier. JAMA. 2012;308:2089-2090.
  33. Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183:E391-E402.
  34. Walraven C, Jennings A, Taljaard M, et al. Incidence of potentially avoidable urgent readmissions and their relation to all-cause urgent readmissions. CMAJ. 2011;183:E1067-E1072.
  35. Joynt KE, Jha AK. Thirty-day readmissions—truth and consequences. N Engl J Med. 2012;366:1366-1369.
  36. Joynt KE, Orav EJ, Jha AK. Thirty-day readmission rates for Medicare beneficiaries by race and site of care. JAMA. 2011;305:675-681.
Tying hospital financial compensation to hospital quality implies that measures of hospital quality should be robust. To incentivize hospitals to improve quality not only relative to other hospitals but to themselves in the past, the VBP program has established a baseline performance for each hospital. Each hospital is compared to its baseline performance in subsequent evaluation periods. Thus, properly measuring a hospital's baseline performance is important. During a given baseline period, some hospitals may have better or worse outcomes than their steady state due to random variation alone. Some hospitals deemed to have a low baseline will experience improvements in quality that are not related to active efforts to improve quality but through chance alone. Similarly, some hospitals deemed to have a high baseline will experience reductions in quality through chance. Of course, neither of these changes should be subject to differences in reimbursement because they do not reflect actual organizational changes made by the hospitals. The VBP program has made significant efforts to address this issue by requiring participating hospitals to have a large enough sample of cases such that estimated rates of process quality adherence meet a reliability threshold (ie, are likely to be consistent over time rather than vary substantially through chance alone). However, not all process measures exhibit high reliability, particularly those for which adverse events are rare (eg, foreign objects retained after surgery, air embolisms, and blood incompatibility). Ultimately, CMS's decision to balance the need for statistically reliable data with the goal of including as many hospitals as possible in the VBP program will require ongoing reevaluation of this issue.

Choosing Hospital Comparators Appropriately

In the current VBP program, hospitals will be evaluated in part by how they compare to hospitals nationally. However, studies of regional variation in healthcare have demonstrated large variations in practice patterns across the United States,[18, 19, 20] raising the question of whether hospitals should, at least initially, be compared to hospitals in the same geographic area. Although the ultimate goal of VBP should be to hold hospitals to a national standard, local practice patterns are not easily modified within 1‐ to 2‐year timeframes. Initially comparing hospitals to a national rather than local standard may unfairly penalize hospitals that are relative underperformers nationally but overperformers regionally. Although CMS's policy to reward improvement within hospitals over time mitigates issues arising from a cross‐sectional comparison of hospitals, the issue still remains if many hospitals within a region not only underperform relative to other hospitals nationally but also fail to demonstrate improvement. More broadly, this issue extends to differences across hospitals in factors that impact their ability to meet VBP goals. These factors may include, for example, hospital size, profitability, patient case and insurance mix, and presence of an electronic medical record. Comparing hospitals with vastly different abilities to achieve VBP goals and improve quickly may amount to inequitable policy.

Continual Evaluation of Topped‐Out Measures

Process measures that are met at high rates at nearly all hospitals are not used in evaluations by CMS for VBP. An assumption underlying CMS' decision to not reward hospitals for achieving these topped‐out measures is that once physicians and hospitals make cognitive and system‐level improvements that improve process quality, these gains will persist after the incentive is removed. Thus, CMS hopes and anticipates that although performance incentives will make it easier for well‐meaning physicians to learn to do the right thing, doctors will continue to do the right things for patients after these incentives are removed.[21, 22] Although this assumption may generally be accurate, it is important to continue to evaluate whether measures that are currently topped out continue to remain adequately performed, because rewarding new quality measures will necessarily lead hospitals to reallocate resources away from other clinical activities. Although we hope that the continued public reporting of topped‐out measures will prevent declines in performance on these measures, policy makers and clinicians should be aware that the lack of financial incentives for topped‐out measures may result in declines in quality. To this point, an analysis of 35 Kaiser Permanente facilities from 1997 to 2007 demonstrated that the removal of financial incentives for diabetic retinopathy and cervical cancer screening was associated with subsequent declines in performance of 3% and 1.6% per year, respectively.[23]

Will VBP Incentives Be Large Enough to Change Practice Patterns?

The VBP Program's ability to influence change depends, at least in part, on how the incentives offered under this program compare to the magnitude of the investments that hospitals must make to achieve a given reward. In general, larger incentives are necessary to motivate more significant changes in behavior or to influence organizations to invest the resources needed to achieve change. The incentives offered under VBP in FY 2013 are quite modest. Almost two‐thirds of participating hospitals will see their FY 2013 Medicare revenues change by <0.25%, roughly $125,000 at most.[13, 24] Although these incentives may motivate hospitals that can improve performance and achievement with very modest investments, they may have little impact on organizations that need to make significant upfront investments in care processes to achieve sustainable improvements in care quality. As CMS increases the size of VBP incentives over the next 2 to 4 years, it will also hold hospitals accountable for a broader and increasingly complex set of outcomes. Improving these outcomes may require investments in areas such as information technology and process improvement that far surpass the VBP incentive reward.

Moreover, prior research suggests that financial incentives like those available under VBP may contribute only slightly to performance improvements when public reporting already exists. For example, in a 2‐year study of 613 US hospitals implementing pay‐for‐performance plus public reporting or public reporting only, pay for performance plus public reporting was associated with only a 2.6% to 4.1% increase in a composite measure of quality when compared to hospitals with public reporting only.[9] Similarly, a study of 54 hospitals participating in the CMS pay for performance pilot initiative found no significant improvement in quality of care or outcomes for AMI when compared to 446 control hospitals.[25] A long‐term analysis of pay for performance in the Medicare Premier Hospital Quality Incentive Demonstration found that participation in the program had no discernible effect on 30‐day mortality rates.[10] Finally, a study of physician medical groups contracting with a large network healthcare maintenance organization found that the implementation of pay for performance did not result in major before and after improvements in clinical quality compared to a control group of medical groups.[26]

High‐Value Care Is Not Always Low‐Cost Care

Not surprisingly, the clinical process measures included in CMS' hospital VBP program evaluate a select and relatively small group of high‐value and low‐cost interventions (eg, appropriate administration of antibiotics and tight control of serum glucose in surgical patients). However, an important body of work has demonstrated that high‐cost care (eg, intensive inpatient hospital care for common acute medical conditions) may also be highly valuable in terms of improving survival.[20, 27, 28, 29, 30] As the hospital VBP program evolves, its overseers will need to consider whether to include additional incentives for high‐value high‐cost healthcare services. Such considerations will likely become increasingly salient as healthcare delivery organizations move toward capitated delivery models. In particular, the VBP program's Medicare Spending Per Beneficiary measure, which quantifies inpatient and subsequent outpatient spending per beneficiary after a given hospitalization episode, will need to distinguish between higher‐spending hospitals that provide highly effective care (eg, care that reduces mortality and readmissions) and facilities that provide less‐effective care.

FUTURE OF VBP

Although the future of VBP is unknown, CMS is likely to modify the program in a number of ways over the next 3 to 5 years. First, CMS will likely expand the breadth and focus of incentivized measures in the VBP program. In FY 2014, for example, CMS is adding a set of 3, 30‐day mortality outcome measures to VBP: 30‐day risk‐adjusted mortality for AMI, CHF, and pneumonia.[1] A hospital's performance with respect to these outcomes will represent 25% of its total performance score in 2014, whereas the clinical process of care and patient experience of care domains will account for 45% and 30% of this score, respectively. In 2015, patient experience and outcome measures will account for 30% each in a hospital's performance score, whereas process and efficiency measures will each account for 20% of this score, respectively. The composition of this performance score evidences a shift away from rewarding process‐based measures and toward incentivizing measures of clinical outcomes and patient satisfaction, the latter of which may be highly subjective and more representative of a hospital's catchment population than of a hospital's care itself.[31] Additional measures in the domains of patient safety, care coordination, population and community health, emergency room wait times, and cost control may also be added to the VBP program in FY 2015 to FY 2017. Furthermore, CMS will continue to reevaluate the appropriateness of measures that are already included in VBP and will stop incentivizing measures that have become topped out, or are no longer supported by the National Quality Forum.[1, 13]

Second, CMS has established an annual gradual increase of 0.25% in the percentage of each hospital's inpatient DRG‐based payment that is at stake under VBP. In FY 2014, for example, participating hospitals will be required to contribute 1.25% of inpatient DRG payments to the VBP program. This percentage is likely to increase to 2% or more by 2017.[1, 32]

Third, expansions of the VBP program complement a number of other quality improvement efforts overseen by CMS, including the Hospital Readmissions Reduction Program. Effective for discharges beginning on October 1, 2012, hospitals with excess readmissions for AMI, CHF, and pneumonia are at risk for reimbursement reductions for all Medicare admissions in proportion to the rate of excess rehospitalizations. Some of the same concerns about the hospital VBP program outlined above have also been raised for this program, namely, whether readmission penalties will be large enough to impact hospital behavior, whether readmissions are even preventable,[33, 34] and whether adjustments in hospital‐level policies will reduce admissions that are known to be heavily influenced by patient economic and social factors that are outside of a hospital's control.[35, 36] Despite the limitations of VBP and the challenges that lie ahead, there is optimism that rewarding hospitals that provide high‐value rather than high‐volume care will not only improve outcomes of hospitalized patients in the United States, but will potentially be able to do so at a lower cost. Encouraging hospitals to improve their quality of care may also have important spillover effects on other healthcare domains. For example, hospitals that adopt systems to ensure prompt delivery of antibiotics to patients with pneumonia may also observe positive spillover effects with the prompt antibiotic management of other acute infectious illnesses that are not covered by VBP. VBP may have spillover effects on medical malpractice liability and defensive medicine as well. Indeed, financial incentives to practice higher‐quality evidenced‐based care may reduce medical malpractice liability and defensive medicine.

The government's ultimate goal in implementing VBP is to identify a broad and clinically relevant set of outcome measures that can be used to incentivize hospitals to deliver high‐quality as opposed to high‐volume healthcare. The first wave of outcome measures has already been instituted. It remains to be seen whether the incentive rewards of Medicare's hospital VBP program will be large enough that hospitals feel compelled to improve and compete for them.

The Centers for Medicaid and Medicare Services' (CMS) Hospital Inpatient Value‐Based Purchasing (VBP) Program, which was signed into law as part of the Patient Protection and Affordable Care Act of 2010, aims to incentivize inpatient providers to deliver high‐value, as opposed to high‐volume, healthcare.[1] Beginning on October 1, 2012, the start of the 2013 fiscal year (FY), hospitals participating in the VBP program became eligible for a variety of performance‐based incentive payments from CMS. These payments are based on an acute care hospital's ability to meet performance measurements in 6 care domains: (1) patient safety, (2) care coordination, (3) clinical processes and outcomes, (4) population or community health, (5) efficiency and cost reduction, and (6) patient‐ and caregiver‐centered experience.[2] The VBP program's ultimate purpose is to enable CMS to improve the health of Medicare beneficiaries by purchasing better care for them at a lower cost. These 3 characteristics of careimproved health, improved care, and lower costsare the foundation of CMS' conception of value.[1, 2] They are closely related to an economic conception of value, which is the difference between an intervention's benefit and its cost.

Although in principle not a new idea, the formal mandate of hospitals to provide high‐value healthcare through financial incentives marks an important change in Medicare and Medicaid policy. In this opportune review of VBP, we first discuss the relevant historical changes in the reimbursement environment of US hospitals that have set the stage for VBP. We then describe the structure of CMS' VBP program, with a focus on which facilities are eligible to participate in the program, the specific outcomes measured and incentivized, how rewards and penalties are allocated, and how the program will be funded. In an effort to anticipate some of the issues that lie ahead, we then highlight a number of potential challenges to the success of VBP, and discuss how VBP will impact the delivery and reimbursement of inpatient care services. We conclude by examining how the VBP program is likely to evolve over time.

HISTORICAL CONTEXT FOR VBP

Over the last decade, CMS has embarked on a number of initiatives to incentivize the provision of higher‐quality and more cost‐effective care. For example, in 2003, CMS implemented a national pay‐for‐performance (P4P) pilot project called the Premier Hospital Quality Incentive Demonstration (HQID).[3, 4] HQID, which ran for 6 years, tracked and rewarded the performance of 216 hospitals in 6 healthcare service domains: (1) acute myocardial infarction (AMI), (2) congestive heart failure (CHF), (3) pneumonia, (4) coronary artery bypass graft surgery, (5) hip and knee replacement surgery, and (6) perioperative management of surgical patients (including prevention of surgical site infections).[4] CMS then introduced its Hospital Compare Web site in 2005 to facilitate public reporting of hospital‐level quality outcomes.[3, 5] This Web site provides the public with access to data on hospital performance across a wide array of measures of process quality, clinical outcomes, spending, and resource utilization.[5] Next, in October 2008, CMS stopped reimbursing hospitals for a number of costly and common hospital‐acquired complications, including hospital‐acquired bloodstream infections and urinary tract infections, patient falls, and pressure ulcers.[3, 6] VBP is the latest and most comprehensive step that CMS has taken in its decade‐long effort to shift from volume to value‐based compensation for inpatient care.

Although CMS appears fully invested in using performance incentives to increase healthcare value, existing evidence of the effects of P4P on patient outcomes remains quite mixed.[7] On one hand, an analysis of an inpatient P4P program sponsored by the United Kingdom's National Health Service's (NHS) suggests that P4P may improve quality and save lives; indeed, hospitals that participated in the NHS P4P program significantly reduced inpatient mortality from pneumonia, saving an estimated 890 lives.[8] Additional empirical work suggests that the HQID was also associated with early improvements in healthcare quality.[9] However, a subsequent long‐term analysis found that participation in HQID had no discernible effect on 30‐day mortality rates.[10] Moreover, a meta‐analysis of P4P incentives for individual practitioners found few methodologically robust studies of P4P for clinicians and concluded that P4P's effects on individual practice patterns and outcomes remain largely uncertain.[11]

VBP: STRUCTURE AND DESIGN

This section reviews the structure of the VBP program. We describe current VBP eligibility criteria and sources of funding for the program, how hospitals participating in VBP are evaluated, and how VBP incentives for FY 2013 have been calculated.

Hospital Eligibility for VBP

All acute care hospitals in the United States (excluding Maryland) that are not psychiatric hospitals, rehabilitation hospitals, long‐term care facilities, children's hospitals, or cancer hospitals are eligible to participate in VBP in FY 2013 (full eligibility criteria is outlined in Table 1). For FY 2013, CMS chose to incentivize measures in just 2 care domains: (1) clinical processes of care and (2) patient experience of care. To be eligible for VBP in FY 2013, a hospital must report at least 10 cases each in at least 4 of 12 measures included in the clinical processes of care domain (Table 2), and/or must have at least 100 completed Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS). Designed and validated by CMS, the HCAHPS survey provides hospitals with a standardized instrument for gathering information about patient satisfaction with, and perspectives on, their hospital care.[12] HCAHPS will be used to assess 8 patient experience of care measures (Table 3).

Inclusion and Exclusion Criteria for the Inpatient Value‐Based Purchasing Program in Fiscal Year 2013
  • NOTE: Abbreviations: HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems; HHS, US Department of Health and Human Services; VBP, Value‐Based Purchasing.

Inclusion criteria
Acute care hospital
Located in all 50 US states or District of Columbia (excluding Maryland)
Has at least 10 cases in at least 4 of 12 clinical process of care measures and/or at least 100 completed HCAHPS surveys
Exclusion criteria
Psychiatric, rehabilitation, long‐term care, children's or cancer hospital
Does not participate in Hospital Inpatient Quality Reporting Program during the VBP performance period
Cited by the Secretary of HHS for significant patient safety violations during performance period
Hospital does not meet minimum reporting requirements for number of cases, process measures, and surveys needed to participate in VBP
Clinical Process of Care Measures Evaluated by Value‐Based Purchasing in Fiscal Year 2013
Disease Process Process of Care Measure
  • NOTE: Mortality measures to be added in fiscal year 2014: acute myocardial infarction, congestive heart failure, pneumonia.

Acute myocardial infarction Fibrinolytic therapy received within 30 minutes of hospital arrival
Primary percutaneous coronary intervention received within 90 minutes of hospital arrival
Heart failure Discharge instructions provided
Pneumonia Blood cultures performed in the emergency department prior to initial antibiotic received in hospital
Initial antibiotic selection for community‐acquired pneumonia in immunocompetent patient
Healthcare‐associated infections Prophylactic antibiotic received within 1 hour prior to surgical incision
Prophylactic antibiotic selection for surgical patients
Prophylactic antibiotics discontinued within 24 hours after surgery ends
Cardiac surgery patients with controlled 6:00 am postoperative serum glucose
Surgeries Surgery patients on ‐blocker prior to arrival that received ‐blocker during perioperative period
Surgery patients with recommended venous thromboembolism prophylaxis ordered
Surgery patients who received appropriate venous thromboembolism prophylaxis within 24 hours prior to surgery to 24 hours after surgery
Patient Experience of Care Measures Evaluated by Value‐Based Purchasing in Fiscal Year 2013
Communication with nurses
Communication with doctors
Responsiveness of hospital staff
Pain management
Communication about medicines
Cleanliness and quietness of hospital environment
Discharge information
Overall rating of hospital

Participation in the program is mandatory for eligible hospitals, and CMS estimates that more than 3000 facilities across the United States will participate in FY 2013. Roughly $850 million dollars in VBP incentives will be paid out to these participating hospitals in FY 2013. The program is being financed through a 1% across‐the‐board reduction in FY 2013 diagnosis‐related group (DRG)‐based inpatient payments to participating hospitals. On December 20, 2012, CMS publically announced FY 2013 VBP incentives for all participating hospitals. Each hospital's incentive is retroactive and based on its performance between July 1, 2011 and March 31, 2012.

All data used for calculating VBP incentives is reported to CMS through its Hospital Inpatient Quality Reporting (Hospital IQR) Program, a national program instituted in 2003 that rewards hospitals for reporting designated quality measures. As of 2007, approximately 95% of eligible US hospitals were using the Hospital IQR program.[1] Measures evaluated via chart abstracts and surveys reflect a hospital's performance for its entire patient population, whereas measures assessed with claims data reflect hospital performance only for Medicare patients.

Evaluation of Hospitals

In FY 2013, hospital VBP incentive payments will be based entirely on performance in 2 domains: (1) clinical processes of care (weighted 70%) and (2) patient experience of care (weighted 30%). For each domain, CMS will evaluate each hospital's improvement over time as well as achievement compared to other hospitals in the VBP program. By assessing and rewarding both achievement and improvement, CMS will ensure that lower‐performing hospitals will still be rewarded for making substantial improvements in quality. To evaluate the first metricimprovement over timeCMS will compare a hospital's performance during a given reporting period with its baseline performance 2 years prior to this block of time. A hospital receives improvement points for improving its performance over time. To assess the second metricachievement compared to other hospitals in the VBP programCMS will compare each hospital's performance during a reporting period with the baseline performance (eg, performance 2 years prior to reporting period) of all other hospitals in the VBP program. A hospital is awarded achievement points if its performance exceeds the 50th percentile of all hospitals during the baseline performance period. Improvement scores range from 0 to 9, whereas achievement scores range from 0 to 10. The greater of a hospital's improvement and achievement scores on each VBP measure are used to calculate each hospital's total earned clinical care domain score and total earned HCAHPS base score. Hospitals that lack baseline performance data, which is required to assess improvement, will be evaluated solely on the basis of achievement points.[1] The total earned clinical care domain score is multiplied by 70% to reach the clinical care domain's contribution to a hospital's total performance score.

Each hospital's total patient experience domain score, or HCAHPS performance score, consists of 2 components: a total earned HCAHPS base score, as described above, and a consistency score. The consistency score evaluates the reliability of a hospital's performance across all 8 patient experience of care measures (Table 3). If a hospital is above the 50th percentile of all hospitals' baseline-period scores on all 8 measures, it receives 100% of its consistency points. If a hospital is at the 0 percentile for any single measure, it receives 0 consistency points for all measures; this provision promotes consistency by harshly penalizing hospitals with extremely poor performance on any 1 measure. If 1 or more measures fall between the 0 and 50th percentiles, the hospital receives a consistency score that takes into account how many measures were below the 50th percentile and their distance from this threshold. Each hospital's total HCAHPS performance score (the sum of total earned HCAHPS base points and consistency points) is then multiplied by 30% to arrive at the patient experience of care domain's contribution to its total performance score.
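
For readers who want to see the arithmetic, the following is a minimal Python sketch of how a total performance score could be assembled from the rules above. The per-measure point values, the 80/20 split between HCAHPS base and consistency points, and the example inputs are illustrative assumptions, not CMS's published scoring tables.

```python
# Minimal sketch of the FY 2013 total performance score (TPS) mechanics.
# Point values and example inputs are hypothetical; CMS's actual scoring
# tables, thresholds, and rounding rules are more detailed.

def domain_fraction(measures, points_available_per_measure=10):
    """For each measure, credit the greater of improvement (0-9) and
    achievement (0-10) points, then express the total as a fraction of
    the points available."""
    earned = sum(max(m["improvement"], m["achievement"]) for m in measures)
    return earned / (points_available_per_measure * len(measures))

# Hypothetical per-measure scores for one hospital.
clinical_measures = [
    {"improvement": 6, "achievement": 4},
    {"improvement": 2, "achievement": 9},
    {"improvement": 0, "achievement": 10},
]
hcahps_measures = [{"improvement": 5, "achievement": 7},
                   {"improvement": 8, "achievement": 3}]

clinical_fraction = domain_fraction(clinical_measures)

# Assume HCAHPS points split 80/20 between the base score and the
# consistency score; here the hospital clears the 50th percentile on all
# 8 patient-experience measures, so it earns all consistency points.
hcahps_fraction = 0.8 * domain_fraction(hcahps_measures) + 0.2 * 1.0

total_performance_score = 0.70 * clinical_fraction + 0.30 * hcahps_fraction
print(f"TPS = {total_performance_score:.3f}")
```

Taking the greater of the improvement and achievement points is what lets a historically low-performing but rapidly improving hospital still earn credit toward its domain scores.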

Importantly, CMS excluded from its VBP initiative 10 clinical process measures reported in the Hospital IQR Program because they are topped out; that is, almost all hospitals already perform them at very high rates (Table 4). Examples of these topped out process measures include administration of aspirin to all patients with AMI on arrival at the hospital; counseling of patients with AMI, CHF, and pneumonia about smoking cessation; and prescribing angiotensin‐converting enzyme inhibitors or angiotensin receptor blockers to patients with CHF and left ventricular dysfunction.[1]

Topped-Out Measures
Disease: Process Measure(s)
  • NOTE: Abbreviations: ACEI, angiotensin-converting enzyme inhibitor; ARB, angiotensin receptor blocker.

Acute myocardial infarction: Aspirin administered on arrival to the emergency department; ACEI or ARB prescribed on discharge; Patient counseled about smoking cessation; β-Blocker prescribed on discharge; Aspirin prescribed at discharge
Heart failure: Patient counseled about smoking cessation; Evaluation of left ventricular systolic function; ACEI or ARB prescribed for left ventricular systolic dysfunction
Pneumonia: Patient counseled about smoking cessation
Surgical Care Improvement Project: Surgery patients with appropriate hair removal

Calculation of VBP Incentives and Public Reporting

A hospital's total performance score for FY 2013 is equal to the sum of 70% of its clinical care domain score and 30% of its total HCAHPS performance score. This total performance score is entered into a linear mathematical formula to calculate each hospital's incentive payment. CMS projects that VBP will lead to a net increase in Medicare payments for one‐half of hospitals and a net decrease in payments for the other half of participating facilities.[1]
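
The article does not reproduce the exchange formula itself; the sketch below, with invented hospitals and a slope solved for budget neutrality, illustrates one way a linear, budget-neutral redistribution of the 1% withhold could work. It is offered only to show the mechanics, not CMS's actual formula.

```python
# Illustrative budget-neutral linear exchange: each hospital earns back a
# share of the 1% withhold in proportion to its total performance score (TPS).
# Hospital data are made up; CMS's published formula and slope differ.

hospitals = {
    # name: (base DRG payments in $, total performance score on a 0-1 scale)
    "A": (200_000_000, 0.35),
    "B": (80_000_000, 0.62),
    "C": (40_000_000, 0.90),
}

withhold_rate = 0.01  # FY 2013: 1% of DRG-based payments funds the pool
pool = sum(pay * withhold_rate for pay, _ in hospitals.values())

# Solve for the slope that pays the whole pool back out:
# incentive_i = slope * TPS_i * payments_i, with sum(incentive_i) = pool.
slope = pool / sum(pay * tps for pay, tps in hospitals.values())

for name, (pay, tps) in hospitals.items():
    incentive = slope * tps * pay
    net = incentive - pay * withhold_rate  # net gain or loss vs. the withhold
    print(f"{name}: net change {100 * net / pay:+.2f}% of DRG payments")
```

Because the pool is fixed by the withhold, hospitals below the average performance level end up net losers and those above it net winners, which is consistent with CMS's projection that roughly half of facilities will gain and half will lose.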

In December 2012, CMS publicly disclosed information about the initial performance of each hospital in the VBP program. Reported information included: (1) hospital performance for each applicable performance measure, (2) hospital performance by disease condition or procedure, and (3) each hospital's total performance score. Initial analyses of these performance data revealed that 1557 hospitals will receive bonus payments under VBP in FY 2013, whereas 1427 hospitals will lose money under the program. Treasure Valley Hospital, a 10-bed physician-owned hospital in Boise, Idaho, will receive a 0.83% increase in Medicare payments, the largest payment increase under VBP in 2013. Conversely, Auburn Community Hospital in upstate New York will suffer the most severe payment reduction: 0.9% per Medicare admission. The penalty will cost Auburn Hospital about $100,000, which is slightly more than 0.1% of its yearly $85 million operating budget.[13] For almost two-thirds of participating hospitals, FY 2013 Medicare payments will change by <0.25%.[13] Additional information about VBP payments for FY 2013, including the number of hospitals that received VBP incentives and the size and range of these payments, is now accessible to the public through CMS' Hospital Compare Web site (http://www.hospitalcompare.hhs.gov).

CHALLENGES OF VBP

As the Medicare VBP program evolves, and hospitals confront ever‐larger financial incentives to deliver high‐value as opposed to high‐volume care, it will be important to recognize limitations of the VBP program as they arise. Here we briefly discuss several conceptual and implementation challenges that physicians and policymakers should consider when assessing the merits of VBP in promoting high‐quality healthcare.

Rigorous and Continuous Evaluation of VBP Programs

The main premise of using VBP to incentivize hospitals to deliver high‐quality cost‐effective care is that the process measures used to determine hospital quality do impact patient outcomes. However, it is already well established that improvements in measures of process quality are not always associated with improvements in patient outcomes.[14, 15, 16] Moreover, incentivizing specific process measures encourages hospitals to shift resources away from other aspects of care delivery, which may have ambiguous, or even deleterious, effects on patient outcomes. Although incentives ideally push hospitals to shift resources away from low‐quality care toward high‐quality care, in practice this is not always the case. Hospital resources may instead be drawn away from areas that are not yet incented by VBP, but for which improvements in quality of care are desperately needed. The same empirical focus behind using VBP to incentivize hospitals to improve patient outcomes efficiently should be used to evaluate whether VBP is continually meeting its stated goals: reducing overall patient morbidity and mortality and improving patient satisfaction at ideally lower cost. The experience of the US education system with public policies designed to improve student testing performance may serve as a cautionary example here. Such policies, which provide financial rewards to schools whose students perform well on standardized tests, can indeed raise testing performance. However, these policies also lead educators to teach to the test, and to neglect important topics that are not tested on standardized exams.[17]

Prioritization of Process Measures

As payment incentives for VBP currently stand, process measures are weighted equally regardless of the clinical benefits they generate and the resources required to achieve improvements in process quality. For instance, 2 process measures, continuing home β-blocker medications for patients with coronary artery disease undergoing surgery and early percutaneous coronary intervention for patients with AMI, may be weighted equally although both their clinical benefits and their costs of implementation are very different. Some hospitals responding to VBP incentives may choose to invest in areas where their ability to earn VBP incentive payments is high and the costs of improvement are low, even though those areas may not be where interventions are most needed, that is, where clinical outcomes could be most improved. Recognizing that process measures have heterogeneous benefits and implementation costs is important when prioritizing their reimbursement in VBP.

Measuring Improvements in Hospital Quality

Tying hospital financial compensation to hospital quality implies that measures of hospital quality should be robust. To incentivize hospitals to improve quality not only relative to other hospitals but to themselves in the past, the VBP program has established a baseline performance for each hospital. Each hospital is compared to its baseline performance in subsequent evaluation periods. Thus, properly measuring a hospital's baseline performance is important. During a given baseline period, some hospitals may have better or worse outcomes than their steady state due to random variation alone. Some hospitals deemed to have a low baseline will experience improvements in quality that are not related to active efforts to improve quality but through chance alone. Similarly, some hospitals deemed to have a high baseline will experience reductions in quality through chance. Of course, neither of these changes should be subject to differences in reimbursement because they do not reflect actual organizational changes made by the hospitals. The VBP program has made significant efforts to address this issue by requiring participating hospitals to have a large enough sample of cases such that estimated rates of process quality adherence meet a reliability threshold (ie, are likely to be consistent over time rather than vary substantially through chance alone). However, not all process measures exhibit high reliability, particularly those for which adverse events are rare (eg, foreign objects retained after surgery, air embolisms, and blood incompatibility). Ultimately, CMS's decision to balance the need for statistically reliable data with the goal of including as many hospitals as possible in the VBP program will require ongoing reevaluation of this issue.
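
The concern about chance variation is easy to demonstrate with a short simulation; the adherence rate, case count, and swing threshold below are arbitrary, and the code is meant only to show how noisy small denominators can be.

```python
# Simulation: a hospital with a fixed, unchanging true adherence rate can
# still appear to "improve" or "decline" between a baseline period and a
# performance period purely by chance when the number of eligible cases is
# small. All numbers here are arbitrary illustrations.
import random

random.seed(0)
true_rate = 0.90          # steady-state adherence that never actually changes
cases_per_period = 30     # small denominator, as for uncommon measures

def observed_rate():
    """Adherence measured from a random sample of cases in one period."""
    return sum(random.random() < true_rate for _ in range(cases_per_period)) / cases_per_period

apparent_changes = [observed_rate() - observed_rate() for _ in range(10_000)]
big_swings = sum(abs(change) >= 0.10 for change in apparent_changes) / len(apparent_changes)
print(f"Share of period-to-period 'changes' of >=10 points with no real change: {big_swings:.0%}")
```

Reliability thresholds of the kind CMS imposes are one way to keep this sort of noise from being mistaken for real improvement or decline.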

Choosing Hospital Comparators Appropriately

In the current VBP program, hospitals will be evaluated in part by how they compare to hospitals nationally. However, studies of regional variation in healthcare have demonstrated large variations in practice patterns across the United States,[18, 19, 20] raising the question of whether hospitals should, at least initially, be compared to hospitals in the same geographic area. Although the ultimate goal of VBP should be to hold hospitals to a national standard, local practice patterns are not easily modified within 1‐ to 2‐year timeframes. Initially comparing hospitals to a national rather than local standard may unfairly penalize hospitals that are relative underperformers nationally but overperformers regionally. Although CMS's policy to reward improvement within hospitals over time mitigates issues arising from a cross‐sectional comparison of hospitals, the issue still remains if many hospitals within a region not only underperform relative to other hospitals nationally but also fail to demonstrate improvement. More broadly, this issue extends to differences across hospitals in factors that impact their ability to meet VBP goals. These factors may include, for example, hospital size, profitability, patient case and insurance mix, and presence of an electronic medical record. Comparing hospitals with vastly different abilities to achieve VBP goals and improve quickly may amount to inequitable policy.

Continual Evaluation of Topped‐Out Measures

Process measures that are met at high rates at nearly all hospitals are not used in evaluations by CMS for VBP. An assumption underlying CMS' decision to not reward hospitals for achieving these topped‐out measures is that once physicians and hospitals make cognitive and system‐level improvements that improve process quality, these gains will persist after the incentive is removed. Thus, CMS hopes and anticipates that although performance incentives will make it easier for well‐meaning physicians to learn to do the right thing, doctors will continue to do the right things for patients after these incentives are removed.[21, 22] Although this assumption may generally be accurate, it is important to continue to evaluate whether measures that are currently topped out continue to remain adequately performed, because rewarding new quality measures will necessarily lead hospitals to reallocate resources away from other clinical activities. Although we hope that the continued public reporting of topped‐out measures will prevent declines in performance on these measures, policy makers and clinicians should be aware that the lack of financial incentives for topped‐out measures may result in declines in quality. To this point, an analysis of 35 Kaiser Permanente facilities from 1997 to 2007 demonstrated that the removal of financial incentives for diabetic retinopathy and cervical cancer screening was associated with subsequent declines in performance of 3% and 1.6% per year, respectively.[23]

Will VBP Incentives Be Large Enough to Change Practice Patterns?

The VBP Program's ability to influence change depends, at least in part, on how the incentives offered under this program compare to the magnitude of the investments that hospitals must make to achieve a given reward. In general, larger incentives are necessary to motivate more significant changes in behavior or to influence organizations to invest the resources needed to achieve change. The incentives offered under VBP in FY 2013 are quite modest. Almost two‐thirds of participating hospitals will see their FY 2013 Medicare revenues change by <0.25%, roughly $125,000 at most.[13, 24] Although these incentives may motivate hospitals that can improve performance and achievement with very modest investments, they may have little impact on organizations that need to make significant upfront investments in care processes to achieve sustainable improvements in care quality. As CMS increases the size of VBP incentives over the next 2 to 4 years, it will also hold hospitals accountable for a broader and increasingly complex set of outcomes. Improving these outcomes may require investments in areas such as information technology and process improvement that far surpass the VBP incentive reward.

Moreover, prior research suggests that financial incentives like those available under VBP may contribute only slightly to performance improvements when public reporting already exists. For example, in a 2‐year study of 613 US hospitals implementing pay‐for‐performance plus public reporting or public reporting only, pay for performance plus public reporting was associated with only a 2.6% to 4.1% increase in a composite measure of quality when compared to hospitals with public reporting only.[9] Similarly, a study of 54 hospitals participating in the CMS pay for performance pilot initiative found no significant improvement in quality of care or outcomes for AMI when compared to 446 control hospitals.[25] A long‐term analysis of pay for performance in the Medicare Premier Hospital Quality Incentive Demonstration found that participation in the program had no discernible effect on 30‐day mortality rates.[10] Finally, a study of physician medical groups contracting with a large network healthcare maintenance organization found that the implementation of pay for performance did not result in major before and after improvements in clinical quality compared to a control group of medical groups.[26]

High‐Value Care Is Not Always Low‐Cost Care

Not surprisingly, the clinical process measures included in CMS' hospital VBP program evaluate a select and relatively small group of high‐value and low‐cost interventions (eg, appropriate administration of antibiotics and tight control of serum glucose in surgical patients). However, an important body of work has demonstrated that high‐cost care (eg, intensive inpatient hospital care for common acute medical conditions) may also be highly valuable in terms of improving survival.[20, 27, 28, 29, 30] As the hospital VBP program evolves, its overseers will need to consider whether to include additional incentives for high‐value high‐cost healthcare services. Such considerations will likely become increasingly salient as healthcare delivery organizations move toward capitated delivery models. In particular, the VBP program's Medicare Spending Per Beneficiary measure, which quantifies inpatient and subsequent outpatient spending per beneficiary after a given hospitalization episode, will need to distinguish between higher‐spending hospitals that provide highly effective care (eg, care that reduces mortality and readmissions) and facilities that provide less‐effective care.

FUTURE OF VBP

Although the future of VBP is unknown, CMS is likely to modify the program in a number of ways over the next 3 to 5 years. First, CMS will likely expand the breadth and focus of incentivized measures in the VBP program. In FY 2014, for example, CMS is adding a set of three 30-day mortality outcome measures to VBP: 30-day risk-adjusted mortality for AMI, CHF, and pneumonia.[1] A hospital's performance on these outcomes will represent 25% of its total performance score in 2014, whereas the clinical process of care and patient experience of care domains will account for 45% and 30% of this score, respectively. In 2015, patient experience and outcome measures will each account for 30% of a hospital's performance score, whereas process and efficiency measures will each account for 20%. The composition of this performance score reflects a shift away from rewarding process-based measures and toward incentivizing measures of clinical outcomes and patient satisfaction, the latter of which may be highly subjective and more representative of a hospital's catchment population than of the care the hospital provides.[31] Additional measures in the domains of patient safety, care coordination, population and community health, emergency room wait times, and cost control may also be added to the VBP program in FY 2015 to FY 2017. Furthermore, CMS will continue to reevaluate the appropriateness of measures that are already included in VBP and will stop incentivizing measures that have become topped out or are no longer supported by the National Quality Forum.[1, 13]

Second, CMS has established a gradual annual increase of 0.25 percentage points in the share of each hospital's inpatient DRG-based payments that is at stake under VBP. In FY 2014, for example, participating hospitals will be required to contribute 1.25% of inpatient DRG payments to the VBP program. This percentage is likely to increase to 2% or more by 2017.[1, 32]

Third, expansions of the VBP program complement a number of other quality improvement efforts overseen by CMS, including the Hospital Readmissions Reduction Program. Effective for discharges beginning on October 1, 2012, hospitals with excess readmissions for AMI, CHF, and pneumonia are at risk for reimbursement reductions for all Medicare admissions in proportion to the rate of excess rehospitalizations. Some of the same concerns about the hospital VBP program outlined above have also been raised for this program, namely, whether readmission penalties will be large enough to impact hospital behavior, whether readmissions are even preventable,[33, 34] and whether adjustments in hospital-level policies will reduce readmissions that are known to be heavily influenced by patient economic and social factors outside of a hospital's control.[35, 36] Despite the limitations of VBP and the challenges that lie ahead, there is optimism that rewarding hospitals that provide high-value rather than high-volume care will not only improve outcomes of hospitalized patients in the United States but may do so at lower cost. Encouraging hospitals to improve their quality of care may also have important spillover effects on other healthcare domains. For example, hospitals that adopt systems to ensure prompt delivery of antibiotics to patients with pneumonia may also observe positive spillover effects on the prompt antibiotic management of other acute infectious illnesses that are not covered by VBP. VBP may also have spillover effects on medical malpractice liability and defensive medicine: financial incentives to practice higher-quality, evidence-based care may reduce both.

The government's ultimate goal in implementing VBP is to identify a broad and clinically relevant set of outcome measures that can be used to incentivize hospitals to deliver high‐quality as opposed to high‐volume healthcare. The first wave of outcome measures has already been instituted. It remains to be seen whether the incentive rewards of Medicare's hospital VBP program will be large enough that hospitals feel compelled to improve and compete for them.

References
  1. Centers for Medicare and Medicaid Services. Hospital Value-Based Purchasing Web site. 2013. Available at: http://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/hospital-value-based-purchasing/index.html. Accessed March 4, 2013.
  2. VanLare JM, Conway PH. Value-based purchasing—national programs to move from volume to value. N Engl J Med. 2012;367:292-295.
  3. Joynt KE, Rosenthal MB. Hospital value-based purchasing: will Medicare's new policy exacerbate disparities? Circ Cardiovasc Qual Outcomes. 2012;5:148-149.
  4. Centers for Medicare and Medicaid Services. CMS/premier hospital quality incentive demonstration (QHID). 2013. Available at: https://www.premierinc.com/quality-safety/tools-services/p4p/hqi/faqs.jsp. Accessed March 5, 2013.
  5. Centers for Medicare and Medicaid Services. Hospital Compare Web site. 2013. Available at: http://www.medicare.gov/hospitalcompare. Accessed March 4, 2013.
  6. Brown J, Doloresco F, Mylotte JM. “Never events”: not every hospital-acquired infection is preventable. Clin Infect Dis. 2009;49:743-746.
  7. Epstein AM. Will pay for performance improve quality of care? The answer is in the details. N Engl J Med. 2012;367:1852-1853.
  8. Sutton M, Nikolova S, Boaden R, Lester H, McDonald R, Roland M. Reduced mortality with hospital pay for performance in England. N Engl J Med. 2012;367:1821-1828.
  9. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356:486-496.
  10. Jha AK, Joynt KE, Orav EJ, Epstein AM. The long-term effect of premier pay for performance on patient outcomes. N Engl J Med. 2012;366:1606-1615.
  11. Houle SK, McAlister FA, Jackevicius CA, Chuck AW, Tsuyuki RT. Does performance-based remuneration for individual health care practitioners affect patient care?: a systematic review. Ann Intern Med. 2012;157:889-899.
  12. Centers for Medicare and Medicaid Services. Hospital Consumer Assessment Of Healthcare Providers and Systems Web site. 2013. Available at: http://www.hcahpsonline.org. Accessed March 5, 2013.
  13. Rau J. Medicare discloses hospitals' bonuses, penalties based on quality. Kaiser Health News. December 20, 2012. Available at: http://www.kaiserhealthnews.org/stories/2012/december/21/medicare-hospitals-value-based-purchasing.aspx?referrer=search. Accessed March 26, 2013.
  14. Yasaitis L, Fisher ES, Skinner JS, Chandra A. Hospital quality and intensity of spending: is there an association? Health Aff (Millwood). 2009;28:w566-w572.
  15. Fonarow GC, Abraham WT, Albert NM, et al. Association between performance measures and clinical outcomes for patients hospitalized with heart failure. JAMA. 2007;297:61-70.
  16. Rubin HR, Pronovost P, Diette GB. The advantages and disadvantages of process-based measures of health care quality. Int J Qual Health Care. 2001;13:469-474.
  17. Jacob BA. Accountability, incentives and behavior: the impact of high-stakes testing in the Chicago public schools. J Public Econ. 2005;89:761-796.
  18. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med. 2003;138:273-287.
  19. Fisher ES. Medical care—is more always better? N Engl J Med. 2003;349:1665-1667.
  20. Romley JA, Jena AB, Goldman DP. Hospital spending and inpatient mortality: evidence from California: an observational study. Ann Intern Med. 2011;154:160-167.
  21. James BC. Making it easy to do it right. N Engl J Med. 2001;345:991-993.
  22. Christensen RD, Henry E, Ilstrup S, Baer VL. A high rate of compliance with neonatal intensive care unit transfusion guidelines persists even after a program to improve transfusion guideline compliance ended. Transfusion. 2011;51:2519-2520.
  23. Lester H, Schmittdiel J, Selby J, et al. The impact of removing financial incentives from clinical quality indicators: longitudinal analysis of four Kaiser Permanente indicators. BMJ. 2010;340:c1898.
  24. Werner RM, Dudley RA. Medicare's new hospital value-based purchasing program is likely to have only a small impact on hospital payments. Health Aff (Millwood). 2012;31:1932-1940.
  25. Glickman SW, Ou FS, DeLong ER, et al. Pay for performance, quality of care, and outcomes in acute myocardial infarction. JAMA. 2007;297:2373-2380.
  26. Mullen KJ, Frank RG, Rosenthal MB. Can you get what you pay for? Pay-for-performance and the quality of healthcare providers. Rand J Econ. 2010;41:64-91.
  27. Romley JA, Jena AB, O'Leary JF, Goldman DP. Spending and mortality in US acute care hospitals. Am J Manag Care. 2013;19:e46-e54.
  28. Barnato AE, Farrell MH, Chang CC, Lave JR, Roberts MS, Angus DC. Development and validation of hospital “end-of-life” treatment intensity measures. Med Care. 2009;47:1098-1105.
  29. Ong MK, Mangione CM, Romano PS, et al. Looking forward, looking back: assessing variations in hospital resource use and outcomes for elderly patients with heart failure. Circ Cardiovasc Qual Outcomes. 2009;2:548-557.
  30. Stukel TA, Fisher ES, Alter DA, et al. Association of hospital spending intensity with mortality and readmission rates in Ontario hospitals. JAMA. 2012;307:1037-1045.
  31. Young GJ, Meterko M, Desai KR. Patient satisfaction with hospital care: effects of demographic and institutional characteristics. Med Care. 2000;38:325-334.
  32. VanLare JM, Blum JD, Conway PH. Linking performance with payment: implementing the Physician Value-Based Payment Modifier. JAMA. 2012;308:2089-2090.
  33. Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183:E391-E402.
  34. Walraven C, Jennings A, Taljaard M, et al. Incidence of potentially avoidable urgent readmissions and their relation to all-cause urgent readmissions. CMAJ. 2011;183:E1067-E1072.
  35. Joynt KE, Jha AK. Thirty-day readmissions—truth and consequences. N Engl J Med. 2012;366:1366-1369.
  36. Joynt KE, Orav EJ, Jha AK. Thirty-day readmission rates for Medicare beneficiaries by race and site of care. JAMA. 2011;305:675-681.
Display Headline
Hospital value-based purchasing
Issue
Journal of Hospital Medicine - 8(5)
Page Number
271-277
Article Source
Copyright © 2013 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Anupam B. Jena, MD, PhD, Department of Health Care Policy, Harvard Medical School, 180 Longwood Avenue, Boston, MA 02115; Telephone: 617-432-8322; Fax: 617-432-0173. E-mail: jena@hcp.med.harvard.edu

Medications and Pediatric Deterioration

Article Type
Changed
Sun, 05/21/2017 - 18:20
Display Headline
Medications associated with clinical deterioration in hospitalized children

In recent years, many hospitals have implemented rapid response systems (RRSs) in efforts to reduce mortality outside the intensive care unit (ICU). Rapid response systems include 2 clinical components (efferent and afferent limbs) and 2 organizational components (process improvement and administrative limbs).[1, 2] The efferent limb includes medical emergency teams (METs) that can be summoned to hospital wards to rescue deteriorating patients. The afferent limb identifies patients at risk of deterioration using tools such as early warning scores and triggers a MET response when appropriate.[2] The process‐improvement limb evaluates and optimizes the RRS. The administrative limb implements the RRS and supports its ongoing operation. The effectiveness of most RRSs depends upon the ward team making the decision to escalate care by activating the MET. Barriers to activating the MET may include reduced situational awareness,[3, 4] hierarchical barriers to calling for help,[3, 4, 5, 6, 7, 8] fear of criticism,[3, 8, 9] and other hospital safety cultural barriers.[3, 4, 8]

Proactive critical‐care outreach[10, 11, 12, 13] or rover[14] teams seek to reduce barriers to activation and improve outcomes by systematically identifying and evaluating at‐risk patients without relying on requests for assistance from the ward team. Structured similarly to early warning scores, surveillance tools intended for rover teams might improve their ability to rapidly identify at‐risk patients throughout a hospital. They could combine vital signs with other variables, such as diagnostic and therapeutic interventions that reflect the ward team's early, evolving concern. In particular, the incorporation of medications associated with deterioration may enhance the performance of surveillance tools.

Medications may be associated with deterioration in one of several ways. They could play a causal role in deterioration (eg, opioids causing respiratory insufficiency), represent clinical worsening and anticipation of possible deterioration (eg, broad-spectrum antibiotics for a positive blood culture), or represent rescue therapies for early deterioration (eg, antihistamines for allergic reactions). In each case, the associated therapeutic classes could be considered sentinel markers of clinical deterioration.

Combined with vital signs and other risk factors, therapeutic classes could serve as useful components of surveillance tools to detect signs of early, evolving deterioration and flag at‐risk patients for evaluation. As a first step, we sought to identify therapeutic classes associated with clinical deterioration. This effort to improve existing afferent tools falls within the process‐improvement limb of RRSs.

PATIENTS AND METHODS

Study Design

We performed a case‐crossover study of children who experienced clinical deterioration. An alternative to the matched case‐control design, the case‐crossover design involves longitudinal within‐subject comparisons exclusively of case subjects such that an individual serves as his or her own control. It is most effective when studying intermittent exposures that result in transient changes in the risk of an acute event,[15, 16, 17] making it appropriate for our study.

Using the case-crossover design, we compared a discrete time period in close proximity to the deterioration event, called the hazard interval, with earlier time periods in the hospitalization, called the control intervals.[15, 16, 17] In our primary analysis (Figure 1B), we defined the durations of these intervals as follows: We first censored the 2 hours immediately preceding the clinical deterioration event (hours 0 to 2). We made this decision a priori to exclude medications used after deterioration was recognized and resuscitation had already begun. The 12-hour period immediately preceding the censored interval was the hazard interval (hours 2 to 14). Each 12-hour period immediately preceding the hazard interval was a control interval (hours 14 to 26, 26 to 38, 38 to 50, and 50 to 62). Depending on the child's length of stay prior to the deterioration event, each hazard interval had 1 to 4 control intervals for comparison. In sensitivity analysis, we altered the durations of these intervals (see below).
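
As a rough illustration of this interval bookkeeping (not the study's actual analysis code), the helper below lays out the censored, hazard, and control windows counted backward from the event; the default parameters mirror the primary analysis, and the alternative arguments mirror the variations used later in the sensitivity analysis.

```python
# Sketch of the case-crossover windows used in the primary analysis:
# censor the 2 hours before the event, take the next 12 hours back as the
# hazard interval, and carve the remaining observed time into up to four
# 12-hour control intervals. Hours are counted backward from the event (time 0).
# This illustrates the design only; it is not the study's code.

def build_intervals(hours_observed_before_event, censor_h=2, window_h=12, max_controls=4):
    """Return (hazard, controls) as (start, end) hour offsets before the event."""
    hazard = (censor_h, censor_h + window_h)                  # e.g., hours 2-14
    controls = []
    start = hazard[1]
    while len(controls) < max_controls and start + window_h <= hours_observed_before_event:
        controls.append((start, start + window_h))            # e.g., 14-26, 26-38, ...
        start += window_h
    return hazard, controls

# A child hospitalized 40 hours before deteriorating contributes 2 control intervals.
print(build_intervals(40))                        # ((2, 14), [(14, 26), (26, 38)])
# The sensitivity analysis re-runs the same bookkeeping with censor_h in {0, 2, 4}
# and window_h in {8, 12}.
print(build_intervals(40, censor_h=0, window_h=8))
```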

Figure 1
Schematic of the iterations of the sensitivity analysis. (A–F) The length of the hazard and control intervals was either 8 or 12 hours, whereas the length of the censored interval was either 0, 2, or 4 hours. (B) The primary analysis used 12‐hour hazard and control intervals with a 2‐hour censored interval. (G) The design is a variant of the primary analysis in which the control interval closest to the hazard interval is censored.

Study Setting and Participants

We performed this study among children age <18 years who experienced clinical deterioration between January 1, 2005, and December 31, 2008, after being hospitalized on a general medical or surgical unit at The Children's Hospital of Philadelphia for ≥24 hours. Clinical deterioration was a composite outcome defined as cardiopulmonary arrest (CPA), acute respiratory compromise (ARC), or urgent ICU transfer. Cardiopulmonary arrest events required either pulselessness or a pulse with inadequate perfusion treated with chest compressions and/or defibrillation. Acute respiratory compromise events required respiratory insufficiency treated with bag-valve-mask or invasive airway interventions. Urgent ICU transfers included ≥1 of the following outcomes in the 12 hours after transfer: death, CPA, intubation, initiation of noninvasive ventilation, or administration of a vasoactive medication infusion used for the treatment of shock. Time zero was the time of the CPA/ARC, or the time at which the child arrived in the ICU for urgent transfers. These subjects also served as the cases for a previously published case-control study evaluating different risk factors for deterioration.[18] The institutional review board of The Children's Hospital of Philadelphia approved the study.

At the time of the study, the hospital did not have a formal RRS. An immediate‐response code‐blue team was available throughout the study period for emergencies occurring outside the ICU. Physicians could also page the pediatric ICU fellow to discuss patients who did not require immediate assistance from the code‐blue team but were clinically deteriorating. There were no established triggering criteria.

Medication Exposures

Intravenous (IV) medications administered in the 72 hours prior to clinical deterioration were considered the exposures of interest. Each medication was included in ≥1 therapeutic classes assigned in the hospital's formulary (Lexicomp, Hudson, OH).[19] In order to determine which therapeutic classes to evaluate, we performed a power calculation using the sampsi_mcc package for Stata 12 (StataCorp, College Station, TX). We estimated that we would have 3 matched control intervals per hazard interval. We found that, in order to detect a minimum odds ratio of 3.0 with 80% power, a therapeutic class had to be administered in ≥5% of control periods. All therapeutic classes meeting that requirement were included in the analysis and are listed in Table 1. (See lists of the individual medications comprising each class in the Supporting Information, Tables 1-24, in the online version of this article.)

Therapeutic Classes With Drugs Administered in ≥5% of Control Intervals, Meeting Criteria for Evaluation in the Primary Analysis Based on the Power Calculation
Therapeutic Class: No. of Control Intervals (%)
  • NOTE: Abbreviations: PPIs, proton pump inhibitors. Individual medications comprising each class are in the Supporting Information, Tables 1-24, in the online version of this article.

Sedatives: 107 (25%)
Antiemetics: 92 (22%)
Third- and fourth-generation cephalosporins: 83 (20%)
Antihistamines: 74 (17%)
Antidotes to hypersensitivity reactions (diphenhydramine): 65 (15%)
Gastric acid secretion inhibitors: 62 (15%)
Loop diuretics: 62 (15%)
Anti-inflammatory agents: 61 (14%)
Penicillin antibiotics: 61 (14%)
Benzodiazepines: 59 (14%)
Hypnotics: 58 (14%)
Narcotic analgesics (full opioid agonists): 54 (13%)
Antianxiety agents: 53 (13%)
Systemic corticosteroids: 53 (13%)
Glycopeptide antibiotics (vancomycin): 46 (11%)
Anaerobic antibiotics: 45 (11%)
Histamine H2 antagonists: 41 (10%)
Antifungal agents: 37 (9%)
Phenothiazine derivatives: 37 (9%)
Adrenal corticosteroids: 35 (8%)
Antiviral agents: 30 (7%)
Aminoglycoside antibiotics: 26 (6%)
Narcotic analgesics (partial opioid agonists): 26 (6%)
PPIs: 26 (6%)

Data Collection

Data were abstracted from the electronic medication administration record (Sunrise Clinical Manager; Allscripts, Chicago, IL) into a database. For each subject, we recorded the name and time of administration of each IV medication given in the 72 hours preceding deterioration, as well as demographic, event, and hospitalization characteristics.

Statistical Analysis

We used univariable conditional logistic regression to evaluate the association between each therapeutic class and the composite outcome of clinical deterioration in the primary analysis. Because cases serve as their own controls in the case‐crossover design, this method inherently adjusts for all subject‐specific time‐invariant confounding variables, such as patient demographics, disease, and hospital‐ward characteristics.[15]
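
For readers who want a concrete picture of the model, a univariable conditional logistic fit might be sketched as follows, assuming a hypothetical long-format table with one row per interval and statsmodels' ConditionalLogit. The column names and input file are invented, and this is not the study's analysis code (the power calculation above indicates the study used Stata).

```python
# Hedged sketch of a univariable conditional logistic model, one therapeutic
# class at a time. Assumes a long-format table with one row per interval:
#   subject_id, is_hazard (1 = hazard interval, 0 = control interval),
#   and a 0/1 exposure column per class (e.g., "loop_diuretics").
# The CSV and column names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

intervals = pd.read_csv("intervals_long.csv")   # hypothetical file

def univariable_or(df, exposure):
    """Odds ratio, 95% CI, and P value for one exposure, stratified by child."""
    model = ConditionalLogit(
        endog=df["is_hazard"],
        exog=df[[exposure]],
        groups=df["subject_id"],    # each child serves as his or her own stratum
    )
    fit = model.fit()
    or_est = float(np.exp(fit.params[exposure]))
    lo, hi = np.exp(fit.conf_int().loc[exposure])
    return or_est, float(lo), float(hi), float(fit.pvalues[exposure])

print(univariable_or(intervals, "loop_diuretics"))
```

Because each child contributes both the hazard interval and his or her own control intervals to a single stratum, time-invariant characteristics such as diagnosis and ward drop out of the likelihood, which is the appeal of the case-crossover design described above.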

Sensitivity Analysis

Our primary analysis used a 2-hour censored interval and 12-hour hazard and control intervals. Excluding the censored interval from analysis was a conservative approach that we chose because our goal was to identify therapeutic classes associated with deterioration during a phase in which adverse outcomes may be prevented with early intervention. In order to test whether our findings were stable across different lengths of censored, hazard, and control intervals, we performed a sensitivity analysis, also using conditional logistic regression, on all therapeutic classes that were significant (P<0.05) in primary analysis. In 6 iterations of the sensitivity analysis, we varied the length of the hazard and control intervals between 8 and 12 hours, and the length of the censored interval between 0 and 4 hours (Figure 1A-F). In a seventh iteration, we used a variant of the primary analysis in which we censored the first control interval (Figure 1G).

RESULTS

We identified 12 CPAs, 41 ARCs, and 699 ICU transfers during the study period. Of these 752 events, 141 (19%) were eligible as cases according to our inclusion criteria.[18] (A flowchart demonstrating the identification of eligible cases is provided in Supporting Table 25 in the online version of this article.) Of the 81% excluded, 37% were ICU transfers who did not meet urgent criteria. Another 31% were excluded because they were hospitalized for <24 hours at the time of the event, making their analysis in a case‐crossover design using 12‐hour periods impossible. Event characteristics, demographics, and hospitalization characteristics are shown in Table 2.

Subject Characteristics (N=141)
Characteristic: n (%)
  • NOTE: Abbreviations: ARC, acute respiratory compromise; CPA, cardiopulmonary arrest; F, female; ICU, intensive care unit; M, male.

Type of event
CPA: 4 (3%)
ARC: 29 (20%)
Urgent ICU transfer: 108 (77%)
Demographics
Age
0 to <6 months: 17 (12%)
6 to <12 months: 22 (16%)
1 to <4 years: 34 (24%)
4 to <10 years: 26 (18%)
10 to <18 years: 42 (30%)
Sex
F: 60 (43%)
M: 81 (57%)
Race
White: 69 (49%)
Black/African American: 49 (35%)
Asian/Pacific Islander: 0 (0%)
Other: 23 (16%)
Ethnicity
Non-Hispanic: 127 (90%)
Hispanic: 14 (10%)
Hospitalization
Surgical service: 4 (3%)
Survived to hospital discharge: 107 (76%)

Primary Analysis

A total of 141 hazard intervals and 487 control intervals were included in the primary analysis, the results of which are shown in Table 3. Among the antimicrobial therapeutic classes, glycopeptide antibiotics (vancomycin), anaerobic antibiotics, third‐generation and fourth‐generation cephalosporins, and aminoglycoside antibiotics were significant. All of the anti‐inflammatory therapeutic classes, including systemic corticosteroids, anti‐inflammatory agents, and adrenal corticosteroids, were significant. All of the sedatives, hypnotics, and antianxiety therapeutic classes, including sedatives, benzodiazepines, hypnotics, and antianxiety agents, were significant. Among the narcotic analgesic therapeutic classes, only 1 class, narcotic analgesics (full opioid agonists), was significant. None of the gastrointestinal therapeutic classes were significant. Among the classes classified as other, loop diuretics and antidotes to hypersensitivity reactions (diphenhydramine) were significant.

Results of Primary Analysis Using 12‐Hour Blocks and 2‐Hour Censored Period
Therapeutic Class: OR (95% CI), P Value
  • NOTE: Abbreviations: CI, confidence interval; GI, gastrointestinal; OR, odds ratio; PPIs, proton-pump inhibitors. Substantial overlap exists among some therapeutic classes; see Supporting Information, Tables 1-24, in the online version of this article for a listing of the medications that comprised each class. *There was substantial overlap in the drugs that comprised the corticosteroids and other anti-inflammatory therapeutic classes, and the ORs and CIs were identical for the 3 groups. When the individual drugs were examined, it was apparent that hydrocortisone and methylprednisolone were entirely responsible for the OR. Therefore, we used the category that the study team deemed (1) most parsimonious and (2) most clinically relevant in the sensitivity analysis, systemic corticosteroids. There was substantial overlap between the sedatives, hypnotics, and antianxiety therapeutic classes. When the individual drugs were examined, it was apparent that benzodiazepines and diphenhydramine were primarily responsible for the significant OR. Diphenhydramine had already been evaluated in the antidotes to hypersensitivity reactions class. Therefore, we used the category that the study team deemed (1) most parsimonious and (2) most clinically relevant in the sensitivity analysis, benzodiazepines.

Antimicrobial therapeutic classes
Glycopeptide antibiotics (vancomycin): OR 5.84 (95% CI, 2.01-16.98), P=0.001
Anaerobic antibiotics: OR 5.33 (95% CI, 1.36-20.94), P=0.02
Third- and fourth-generation cephalosporins: OR 2.78 (95% CI, 1.15-6.69), P=0.02
Aminoglycoside antibiotics: OR 2.90 (95% CI, 1.11-7.56), P=0.03
Penicillin antibiotics: OR 2.40 (95% CI, 0.9-6.4), P=0.08
Antiviral agents: OR 1.52 (95% CI, 0.20-11.46), P=0.68
Antifungal agents: OR 1.06 (95% CI, 0.44-2.58), P=0.89
Corticosteroids and other anti-inflammatory therapeutic classes*
Systemic corticosteroids: OR 3.69 (95% CI, 1.09-12.55), P=0.04
Anti-inflammatory agents: OR 3.69 (95% CI, 1.09-12.55), P=0.04
Adrenal corticosteroids: OR 3.69 (95% CI, 1.09-12.55), P=0.04
Sedatives, hypnotics, and antianxiety therapeutic classes
Sedatives: OR 3.48 (95% CI, 1.78-6.78), P<0.001
Benzodiazepines: OR 2.71 (95% CI, 1.36-5.40), P=0.01
Hypnotics: OR 2.54 (95% CI, 1.27-5.09), P=0.01
Antianxiety agents: OR 2.28 (95% CI, 1.06-4.91), P=0.04
Narcotic analgesic therapeutic classes
Narcotic analgesics (full opioid agonists): OR 2.48 (95% CI, 1.07-5.73), P=0.03
Narcotic analgesics (partial opioid agonists): OR 1.97 (95% CI, 0.57-6.85), P=0.29
GI therapeutic classes
Antiemetics: OR 0.57 (95% CI, 0.22-1.48), P=0.25
PPIs: OR 2.05 (95% CI, 0.58-7.25), P=0.26
Phenothiazine derivatives: OR 0.47 (95% CI, 0.12-1.83), P=0.27
Gastric acid secretion inhibitors: OR 1.71 (95% CI, 0.61-4.81), P=0.31
Histamine H2 antagonists: OR 0.95 (95% CI, 0.17-5.19), P=0.95
Other therapeutic classes
Loop diuretics: OR 2.87 (95% CI, 1.28-6.47), P=0.01
Antidotes to hypersensitivity reactions (diphenhydramine): OR 2.45 (95% CI, 1.15-5.23), P=0.02
Antihistamines: OR 2.00 (95% CI, 0.97-4.12), P=0.06

Sensitivity Analysis

Of the 14 classes that were significant in primary analysis, we carried 9 forward to sensitivity analysis. The 5 that were not carried forward overlapped substantially with other classes that were carried forward. The decision of which overlapping class to carry forward was based upon (1) parsimony and (2) clinical relevance. This is described briefly in the footnotes to Table 3 (see Supporting information in the online version of this article for a full description of this process). Figure 2 presents the odds ratios and their 95% confidence intervals for the sensitivity analysis of each therapeutic class that was significant in primary analysis. Loop diuretics remained significantly associated with deterioration in all 7 iterations. Glycopeptide antibiotics (vancomycin), third‐generation and fourth‐generation cephalosporins, systemic corticosteroids, and benzodiazepines were significant in 6. Anaerobic antibiotics and narcotic analgesics (full opioid agonists) were significant in 5, and aminoglycoside antibiotics and antidotes to hypersensitivity reactions (diphenhydramine) in 4.

Figure 2
The ORs and 95% CIs for the sensitivity analyses. The primary analysis is "12 hr blocks, 2 hr censored". Point estimates with CIs crossing the line at OR = 1.00 did not reach statistical significance. Upper confidence limit extends to 16.98,a 20.94,b 27.12,c 18.23,d 17.71,e 16.20,f 206.13,g 33.60,h and 28.28.i The OR estimate is 26.05.g Abbreviations: CI, confidence interval; hr, hour; OR, odds ratio.

DISCUSSION

We identified 9 therapeutic classes which were associated with a 2.5‐fold to 5.8‐fold increased risk of clinical deterioration. The results were robust to sensitivity analysis. Given their temporal association to the deterioration events, these therapeutic classes may serve as sentinels of early deterioration and are candidate variables to combine with vital signs and other risk factors in a surveillance tool for rover teams or an early warning score.

Although most early warning scores intended for use at the bedside are based upon vital signs and clinical observations, a few also include medications. Monaghan's Pediatric Early Warning Score, the basis for many modified scores used in children's hospitals throughout the world, assigns points for children requiring frequent doses of nebulized medication.[20, 21, 22] Nebulized epinephrine is a component of the Bristol Paediatric Early Warning Tool.[23] The number of medications administered in the preceding 24 hours was included in an early version of the Bedside Paediatric Early Warning System Score.[24] Adding IV antibiotics to the Maximum Modified Early Warning Score improved prediction of the need for higher care utilization among hospitalized adults.[25]

In order to determine the role of the IV medications we found to be associated with clinical deterioration, the necessary next step is to develop a multivariable predictive model to determine if they improve the performance of existing early warning scores in identifying deteriorating patients. Although simplicity is an important characteristic of hand‐calculated early warning scores, integration of a more complex scoring system with more variables, such as these medications, into the electronic health record would allow for automated scoring, eliminating the need to sacrifice score performance to keep the tool simple. Integration into the electronic health record would have the additional benefit of making the score available to clinicians who are not at the bedside. Such tools would be especially useful for remote surveillance for deterioration by critical‐care outreach or rover teams.
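
To make the idea concrete, here is a purely hypothetical sketch of how sentinel therapeutic classes could be layered onto an existing vital-sign-based score inside the electronic health record. The class list, weights, and review threshold are invented for illustration and have not been validated.

```python
# Hypothetical illustration of an automated, EHR-computed surveillance score
# that adds sentinel-medication flags to conventional vital-sign criteria.
# Weights, thresholds, and the class list are invented for illustration only;
# this is not a validated early warning score.

SENTINEL_CLASSES = {
    "glycopeptide_antibiotic": 2,
    "anaerobic_antibiotic": 2,
    "systemic_corticosteroid": 1,
    "benzodiazepine": 1,
    "loop_diuretic": 1,
    "full_opioid_agonist": 1,
}

def surveillance_score(vitals_points, iv_meds_last_12h):
    """vitals_points: points already assigned by an existing early warning score.
    iv_meds_last_12h: set of therapeutic-class labels administered IV in the
    preceding 12 hours, as pulled from the medication administration record."""
    med_points = sum(SENTINEL_CLASSES.get(cls, 0) for cls in iv_meds_last_12h)
    return vitals_points + med_points

# A rover team might review any child whose combined score crosses a threshold.
score = surveillance_score(vitals_points=3, iv_meds_last_12h={"loop_diuretic", "benzodiazepine"})
print(score, "-> flag for rover-team review" if score >= 5 else "-> continue routine monitoring")
```

Because both the medication administration record and vital signs are already captured electronically, such a score could be recalculated automatically without adding bedside documentation burden.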

Our study has several limitations. First, the sample size was small, and although we sought to minimize the likelihood of chance associations by performing sensitivity analysis, these findings should be confirmed in a larger study. Second, we only evaluated IV medications. Medications administered by other routes could also be associated with clinical deterioration and should be analyzed in future studies. Third, we excluded children hospitalized for <24 hours, as well as transfers that did not meet urgent criteria. These may be limitations because (1) the first 24 hours of hospitalization may be a high‐risk period, and (2) patients who were on trajectories toward severe deterioration and received interventions that prevented further deterioration but did not meet urgent transfer criteria were excluded. It may be that the children we included as cases were at increased risk of deterioration that is either more difficult to recognize early, or more difficult to treat effectively without ICU interventions. Finally, we acknowledge that in some cases the therapeutic classes were associated with deterioration in a causal fashion, and in others the medications administered did not cause deterioration but were signs of therapeutic interventions that were initiated in response to clinical worsening. Identifying the specific indications for administration of drugs used in response to clinical worsening may have resulted in stronger associations with deterioration. However, these indications are often complex, multifactorial, and poorly documented in real time. This limits the ability to automate their detection using the electronic health record, the ultimate goal of this line of research.

CONCLUSION

We used a case-crossover approach to identify therapeutic classes that are associated with increased risk of clinical deterioration in hospitalized children on pediatric wards. These sentinel therapeutic classes may serve as useful components of electronic health record-based surveillance tools to detect signs of early, evolving deterioration and flag at-risk patients for critical-care outreach or rover team review. Future research should focus on evaluating whether including these therapeutic classes in early warning scores improves their accuracy in detecting signs of deterioration and determining if providing this information as clinical decision support improves patient outcomes.

Acknowledgments

Disclosures: This study was funded by The Children's Hospital of Philadelphia Center for Pediatric Clinical Effectiveness Pilot Grant and the University of Pennsylvania Provost's Undergraduate Research Mentoring Program. Drs. Bonafide and Keren also receive funding from the Pennsylvania Health Research Formula Fund Award from the Pennsylvania Department of Health for research in pediatric hospital quality, safety, and costs. The authors have no other conflicts of interest to report.

References
  1. Devita MA, Bellomo R, Hillman K, et al. Findings of the first consensus conference on medical emergency teams. Crit Care Med. 2006;34(9):2463-2478.
  2. DeVita MA, Smith GB, Adam SK, et al. “Identifying the hospitalised patient in crisis”—a consensus conference on the afferent limb of rapid response systems. Resuscitation. 2010;81(4):375-382.
  3. Azzopardi P, Kinney S, Moulden A, Tibballs J. Attitudes and barriers to a medical emergency team system at a tertiary paediatric hospital. Resuscitation. 2011;82(2):167-174.
  4. Marshall SD, Kitto S, Shearer W, et al. Why don't hospital staff activate the rapid response system (RRS)? How frequently is it needed and can the process be improved? Implement Sci. 2011;6:39.
  5. Sandroni C, Cavallaro F. Failure of the afferent limb: a persistent problem in rapid response systems. Resuscitation. 2011;82(7):797-798.
  6. Mackintosh N, Rainey H, Sandall J. Understanding how rapid response systems may improve safety for the acutely ill patient: learning from the frontline. BMJ Qual Saf. 2012;21(2):135-144.
  7. Leach LS, Mayo A, O'Rourke M. How RNs rescue patients: a qualitative study of RNs' perceived involvement in rapid response teams. Qual Saf Health Care. 2010;19(5):14.
  8. Bagshaw SM, Mondor EE, Scouten C, et al. A survey of nurses' beliefs about the medical emergency team system in a Canadian tertiary hospital. Am J Crit Care. 2010;19(1):74-83.
  9. Jones D, Baldwin I, McIntyre T, et al. Nurses' attitudes to a medical emergency team service in a teaching hospital. Qual Saf Health Care. 2006;15(6):427-432.
  10. Priestley G, Watson W, Rashidian A, et al. Introducing critical care outreach: a ward-randomised trial of phased introduction in a general hospital. Intensive Care Med. 2004;30(7):1398-1404.
  11. Pittard AJ. Out of our reach? Assessing the impact of introducing a critical care outreach service. Anaesthesia. 2003;58(9):882-885.
  12. Ball C, Kirkby M, Williams S. Effect of the critical care outreach team on patient survival to discharge from hospital and readmission to critical care: non-randomised population based study. BMJ. 2003;327(7422):1014.
  13. Gerdik C, Vallish RO, Miles K, et al. Successful implementation of a family and patient activated rapid response team in an adult level 1 trauma center. Resuscitation. 2010;81(12):1676-1681.
  14. Hueckel RM, Turi JL, Cheifetz IM, et al. Beyond rapid response teams: instituting a “Rover Team” improves the management of at-risk patients, facilitates proactive interventions, and improves outcomes. In: Henriksen K, Battles JB, Keyes MA, Grady ML, eds. Advances in Patient Safety: New Directions and Alternative Approaches. Rockville, MD: Agency for Healthcare Research and Quality; 2008.
  15. Delaney JA, Suissa S. The case-crossover study design in pharmacoepidemiology. Stat Methods Med Res. 2009;18(1):53-65.
  16. Viboud C, Boëlle PY, Kelly J, et al. Comparison of the statistical efficiency of case-crossover and case-control designs: application to severe cutaneous adverse reactions. J Clin Epidemiol. 2001;54(12):1218-1227.
  17. Maclure M. The case-crossover design: a method for studying transient effects on the risk of acute events. Am J Epidemiol. 1991;133(2):144-153.
  18. Bonafide CP, Holmes JH, Nadkarni VM, Lin R, Landis JR, Keren R. Development of a score to predict clinical deterioration in hospitalized children. J Hosp Med. 2012;7(4):345-349.
  19. Lexicomp. Available at: http://www.lexi.com. Accessed July 26, 2012.
  20. Akre M, Finkelstein M, Erickson M, Liu M, Vanderbilt L, Billman G. Sensitivity of the Pediatric Early Warning Score to identify patient deterioration. Pediatrics. 2010;125(4):e763-e769.
  21. Monaghan A. Detecting and managing deterioration in children. Paediatr Nurs. 2005;17(1):32-35.
  22. Tucker KM, Brewer TL, Baker RB, Demeritt B, Vossmeyer MT. Prospective evaluation of a pediatric inpatient early warning scoring system. J Spec Pediatr Nurs. 2009;14(2):79-85.
  23. Haines C, Perrott M, Weir P. Promoting care for acutely ill children—development and evaluation of a Paediatric Early Warning Tool. Intensive Crit Care Nurs. 2006;22(2):73-81.
  24. Duncan H, Hutchison J, Parshuram CS. The Pediatric Early Warning System Score: a severity of illness score to predict urgent medical need in hospitalized children. J Crit Care. 2006;21(3):271-278.
  25. Heitz CR, Gaillard JP, Blumstein H, Case D, Messick C, Miller CD. Performance of the maximum modified early warning score to predict the need for higher care utilization among admitted emergency department patients. J Hosp Med. 2010;5(1):E46-E52.

Issue
Journal of Hospital Medicine - 8(5)
Page Number
254-260
Display Headline
Medications associated with clinical deterioration in hospitalized children
Copyright © 2013 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: John H. Holmes, PhD, University of Pennsylvania Center for Clinical Epidemiology and Biostatistics, 726 Blockley Hall, 423 Guardian Drive, Philadelphia, PA 19104; Telephone: 215-898-4833; Fax: 215-573-5325; E-mail: jhholmes@mail.med.upenn.edu

Patients at Risk for 30‐Day Readmission

Article Type
Changed
Sun, 05/21/2017 - 18:06
Display Headline
Contribution of psychiatric illness and substance abuse to 30‐day readmission risk

Readmissions to the hospital are common and costly.[1] However, prospectively identifying the patients who are likely to be readmitted, and who may benefit from interventions to reduce readmission risk, has proven challenging: published risk scores have only moderate ability to discriminate between patients likely and unlikely to be readmitted.[2] One reason may be that published studies have not typically focused on patients who are cognitively impaired or psychiatrically ill, who have low health or English literacy, or who have poor social supports, all of whom may represent a substantial fraction of readmitted patients.[2, 3, 4, 5]

Psychiatric disease, in particular, may contribute to increased readmission risk for nonpsychiatric (medical) illness, and is associated with increased utilization of healthcare resources.[6, 7, 8, 9, 10, 11] For example, patients with mental illness who were discharged from New York hospitals were more likely to be rehospitalized and had more costly readmissions than patients without these comorbidities, including a length of stay nearly 1 day longer on average.[7] Unmet need for substance abuse treatment was projected to cost Tennessee $772 million in excess healthcare costs in 2000, mostly incurred through repeat hospitalizations and emergency department (ED) visits.[10]

Despite this, few investigators have considered the role of psychiatric disease and/or substance abuse in medical readmission risk. The purpose of the current study was to evaluate the role of psychiatric illness and substance abuse in unselected medical patients to determine their relative contributions to 30‐day all‐cause readmissions (ACR) and potentially avoidable readmissions (PAR).

METHODS

Patients and Setting

We conducted a retrospective cohort study of consecutive adult patients discharged from medicine services at Brigham and Women's Hospital (BWH), a 747‐bed tertiary referral center and teaching hospital, between July 1, 2009 and June 30, 2010. Most patients are cared for by resident housestaff teams at BWH (approximately 25% are cared for by physician assistants working directly with attending physicians), and approximately half receive primary care in the Partners system, which has a shared electronic medical record (EMR). Outpatient mental health services are provided by Partners‐associated mental health professionals including those at McLean Hospital and MassHealth (Medicaid)‐associated sites through the Massachusetts Behavioral Health Partnership. Exclusion criteria were death in the hospital or discharge to another acute care facility. We also excluded patients who left against medical advice (AMA). The study protocol was approved by the Partners Institutional Review Board.

Outcome

The primary outcomes were ACR and PAR within 30 days of discharge. First, we identified all 30‐day readmissions to BWH or to 2 other hospitals in the Partners Healthcare Network (previous studies have shown that 80% of all readmitted patients are readmitted to 1 of these 3 hospitals).[12] For patients with multiple readmissions, only the first readmission was included in the dataset.

To find potentially avoidable readmissions, administrative and billing data for these patients were processed using the SQLape (SQLape s.a.r.l., Corseaux, Switzerland) algorithm, which identifies PAR by excluding patients who undergo planned follow‐up treatment (such as a cycle of planned chemotherapy) or are readmitted for conditions unrelated in any way to the index hospitalization.[13, 14] Common complications of treatment are categorized as potentially avoidable, such as development of a deep venous thrombosis, a decubitus ulcer after prolonged bed rest, or bleeding complications after starting anticoagulation. Although the algorithm identifies theoretically preventable readmissions, the algorithm does not quantify how preventable they are, and these are thus referred to as potentially avoidable. This is similar to other admission metrics, such as the Agency for Healthcare Research and Quality's prevention quality indicators, which are created from a list of ambulatory care‐sensitive conditions.[15] SQLape has the advantage of being a specific tool for readmissions. Patients with 30‐day readmissions identified by SQLape as planned or unlikely to be avoidable were excluded in the PAR analysis, although still included in ACR analysis. In each case, the comparison group is patients without any readmission.
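
To make the outcome definitions concrete, the short sketch below shows one way the two flags described above could be derived from a discharge-level table: a 30-day all-cause readmission indicator based on each patient's next admission date, and a PAR analysis cohort that drops readmitted patients whose readmission SQLape classified as planned or unavoidable. This is an illustrative reconstruction rather than the study's actual pipeline; the column names are assumptions, and the restriction to each patient's first readmission is omitted for brevity.

```python
# Illustrative sketch only (not the study's code); column names are assumed.
import pandas as pd

def add_acr_flag(admissions: pd.DataFrame) -> pd.DataFrame:
    """Flag each discharge that is followed by another admission within 30 days."""
    df = admissions.sort_values(["patient_id", "admit_date"]).copy()
    # Admission date of the same patient's next hospitalization, if any
    df["next_admit"] = df.groupby("patient_id")["admit_date"].shift(-1)
    days_to_next = (df["next_admit"] - df["discharge_date"]).dt.days
    df["acr_30d"] = days_to_next.between(0, 30)  # NaN (no later admission) evaluates to False
    return df.drop(columns="next_admit")

def par_analysis_cohort(df: pd.DataFrame) -> pd.DataFrame:
    """Keep non-readmitted patients plus those whose readmission was potentially avoidable.

    `sqlape_unavoidable` is an assumed boolean column marking readmissions that the
    SQLape algorithm classified as planned or unlikely to be avoidable.
    """
    return df[~(df["acr_30d"] & df["sqlape_unavoidable"])]
```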

Predictors

Our predictors of interest included the overall prevalence of a psychiatric diagnosis or diagnosis of substance abuse, the presence of specific psychiatric diagnoses, and prescription of psychiatric medications to help assess the independent contribution of these comorbidities to readmission risk.

We used a combination of easily obtainable inpatient and outpatient clinical and administrative data to identify relevant patients. Patients were considered likely to be psychiatrically ill if they: (1) had a psychiatric diagnosis on their Partners outpatient EMR problem list and were prescribed a medication to treat that condition as an outpatient, or (2) had an International Classification of Diseases, 9th Revision diagnosis of a psychiatric illness at hospital discharge. Patients were considered to have moderate probability of disease if they: (1) had a psychiatric diagnosis on their outpatient problem list, or (2) were prescribed a medication intended to treat a psychiatric condition as an outpatient. Patients were considered unlikely to have psychiatric disease if none of these criteria were met. Patients were considered likely to have a substance abuse disorder if they had this diagnosis on their outpatient EMR, or were prescribed a medication to treat this condition (eg, buprenorphine/naloxone), or received inpatient consultation from a substance abuse treatment team during their inpatient hospitalization, and were considered unlikely if none of these were true. We also evaluated individual categories of psychiatric illness (schizophrenia, depression, anxiety, bipolar disorder) and of psychotropic medications (antidepressants, antipsychotics, anxiolytics).
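
Because the likelihood categories are rule based, they can be restated compactly as code. The sketch below only restates the criteria in the preceding paragraph for clarity; the boolean inputs are assumed to have been derived beforehand from the outpatient problem list, the outpatient medication list, discharge ICD-9 codes, and inpatient consult records.

```python
def psychiatric_likelihood(problem_list_dx: bool,
                           outpatient_psych_rx: bool,
                           discharge_psych_dx: bool) -> str:
    """Three-level likelihood of psychiatric illness, restating the study's criteria."""
    if (problem_list_dx and outpatient_psych_rx) or discharge_psych_dx:
        return "likely"
    if problem_list_dx or outpatient_psych_rx:
        return "moderate"
    return "unlikely"

def substance_abuse_likely(outpatient_sud_dx: bool,
                           sud_medication: bool,        # eg, buprenorphine/naloxone
                           inpatient_sud_consult: bool) -> bool:
    """Dichotomous substance abuse classification, restating the study's criteria."""
    return outpatient_sud_dx or sud_medication or inpatient_sud_consult
```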

Potential Confounders

Data on potential confounders, based on prior literature,[16, 17] collected at the index admission were derived from electronic administrative, clinical, and billing sources, including the Brigham Integrated Computer System and the Partners Clinical Data Repository. They included patient age, gender, ethnicity, primary language, marital status, insurance status, living situation prior to admission, discharge location, length of stay, Elixhauser comorbidity index,[18] total number of medications prescribed, and number of prior admissions and ED visits in the prior year.

Statistical Analysis

Bivariate comparisons of each of the predictors of ACR and PAR risk (ie, patients with a 30‐day ACR or PAR vs those not readmitted within 30 days) were conducted using χ2 trend tests for ordinal predictors (eg, likelihood of psychiatric disease), and χ2 or Fisher exact tests for dichotomous predictors (eg, receipt of inpatient substance abuse counseling).

We then used multivariate logistic regression analysis to adjust for all of the potential confounders noted above, entering each variable related to psychiatric illness into the model separately (eg, likely psychiatric illness, number of psychiatric medications). In a secondary analysis, we removed potentially collinear variables from the final model; as this did not alter the results, the full model is presented. We also conducted a secondary analysis that included patients who left against medical advice (AMA), which likewise did not alter the results. Two‐sided P values <0.05 were considered significant, and all analyses were performed using SAS version 9.2 (SAS Institute, Inc., Cary, NC).
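
For readers who want to reproduce the general workflow, a minimal sketch follows. The study itself was analyzed in SAS 9.2; this Python approximation (scipy and statsmodels) simply mirrors the two steps described above: an exact test for a dichotomous predictor, and a logistic model entering one exposure at a time alongside the fixed covariate set. All variable names are assumptions, and both the outcome and the exposure are assumed to be coded numerically (0/1 or a count).

```python
# Approximate re-implementation sketch; the original analyses were run in SAS 9.2.
# Column names are assumptions, and the outcome is assumed to be coded 0/1.
import numpy as np
import pandas as pd
from scipy.stats import fisher_exact
import statsmodels.formula.api as smf

COVARIATES = ["age_cat", "gender", "ethnicity", "language", "marital_status",
              "insurance", "discharge_location", "los_cat", "elixhauser_cat",
              "n_outpatient_meds", "prior_ed_visits", "prior_admissions"]

def bivariate_p(df: pd.DataFrame, predictor: str, outcome: str = "acr_30d") -> float:
    """Fisher exact test for a dichotomous predictor versus readmission (2x2 table)."""
    table = pd.crosstab(df[predictor], df[outcome])
    _, p_value = fisher_exact(table)
    return p_value

def adjusted_odds_ratio(df: pd.DataFrame, exposure: str, outcome: str = "acr_30d"):
    """Logistic regression entering one exposure at a time with the full covariate set."""
    formula = f"{outcome} ~ {exposure} + " + " + ".join(COVARIATES)
    fit = smf.logit(formula, data=df).fit(disp=False)
    odds_ratio = np.exp(fit.params[exposure])
    ci_low, ci_high = np.exp(fit.conf_int().loc[exposure])
    return odds_ratio, (ci_low, ci_high)
```

Under these assumptions, a call such as adjusted_odds_ratio(df, "n_psych_meds") would produce the kind of per-medication odds ratio reported for ACR in the results that follow.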

RESULTS

There were 7984 unique patients discharged during the study period. Patients were generally white and English speaking; just over half of admissions came from the ED (Table 1). Of note, nearly all patients were insured, as are almost all patients in Massachusetts. They had high degrees of comorbid illness and large numbers of prescribed medications. Nearly 30% had at least 1 hospital admission within the prior year.

Baseline Characteristics of the Study Population
Characteristic | All Patients, N (%) | Not Readmitted, N (%) | ACR, N (%) | PAR, N (%)a
  • NOTE: Abbreviations: ACR, all‐cause readmission; ED, emergency department; PAR, potentially avoidable readmission. aPAR cohort excludes patients with unavoidable readmissions.

  • Percentages may not add up to 100% due to rounding or when subcategories were very small (<0.5%). Previously married includes patients who were divorced or widowed.

Study cohort | 6987 (100) | 5727 (72) | 1260 (18) | 388 (5.6)
Age, y
<50 | 1663 (23.8) | 1343 (23.5) | 320 (25.4) | 85 (21.9)
51-65 | 2273 (32.5) | 1859 (32.5) | 414 (32.9) | 136 (35.1)
66-79 | 1444 (20.7) | 1176 (20.5) | 268 (18.6) | 80 (20.6)
>80 | 1607 (23.0) | 1349 (23.6) | 258 (16.1) | 87 (22.4)
Female | 3604 (51.6) | 2967 (51.8) | 637 (50.6) | 206 (53.1)
Race
White | 5126 (73.4) | 4153 (72.5) | 973 (77.2) | 300 (77.3)
Black | 1075 (15.4) | 899 (15.7) | 176 (14.0) | 53 (13.7)
Hispanic | 562 (8.0) | 477 (8.3) | 85 (6.8) | 28 (7.2)
Other | 224 (3.2) | 198 (3.5) | 26 (2.1) | 7 (1.8)
Primary language
English | 6345 (90.8) | 5180 (90.5) | 1165 (92.5) | 356 (91.8)
Marital status
Married | 3642 (52.1) | 2942 (51.4) | 702 (55.7) | 214 (55.2)
Single, never married | 1662 (23.8) | 1393 (24.3) | 269 (21.4) | 73 (18.8)
Previously married | 1683 (24.1) | 1386 (24.2) | 289 (22.9) | 101 (26.0)
Insurance
Medicare | 3550 (50.8) | 2949 (51.5) | 601 (47.7) | 188 (48.5)
Medicaid | 539 (7.7) | 430 (7.5) | 109 (8.7) | 33 (8.5)
Private | 2892 (41.4) | 2344 (40.9) | 548 (43.5) | 167 (43.0)
Uninsured | 6 (0.1) | 4 (0.1) | 2 (0.1) | 0 (0)
Source of index admission
Clinic or home | 2136 (30.6) | 1711 (29.9) | 425 (33.7) | 117 (30.2)
Emergency department | 3592 (51.4) | 2999 (52.4) | 593 (47.1) | 181 (46.7)
Nursing facility | 1204 (17.2) | 977 (17.1) | 227 (18.0) | 84 (21.7)
Other | 55 (0.1) | 40 (0.7) | 15 (1.1) | 6 (1.6)
Length of stay, d
0-2 | 1757 (25.2) | 1556 (27.2) | 201 (16.0) | 55 (14.2)
3-4 | 2200 (31.5) | 1842 (32.2) | 358 (28.4) | 105 (27.1)
5-7 | 1521 (21.8) | 1214 (21.2) | 307 (24.4) | 101 (26.0)
>7 | 1509 (21.6) | 1115 (19.5) | 394 (31.3) | 127 (32.7)
Elixhauser comorbidity index score
0-1 | 1987 (28.4) | 1729 (30.2) | 258 (20.5) | 66 (17.0)
2-7 | 1773 (25.4) | 1541 (26.9) | 232 (18.4) | 67 (17.3)
8-13 | 1535 (22.0) | 1212 (21.2) | 323 (25.6) | 86 (22.2)
>13 | 1692 (24.2) | 1245 (21.7) | 447 (35.5) | 169 (43.6)
Medications prescribed as outpatient
0-6 | 1684 (24.1) | 1410 (24.6) | 274 (21.8) | 72 (18.6)
7-9 | 1601 (22.9) | 1349 (23.6) | 252 (20.0) | 77 (19.9)
10-13 | 1836 (26.3) | 1508 (26.3) | 328 (26.0) | 107 (27.6)
>13 | 1866 (26.7) | 1460 (25.5) | 406 (32.2) | 132 (34.0)
Number of admissions in past year
0 | 4816 (68.9) | 4032 (70.4) | 784 (62.2) | 279 (71.9)
1-5 | 2075 (29.7) | 1640 (28.6) | 435 (34.5) | 107 (27.6)
>5 | 96 (1.4) | 55 (1.0) | 41 (3.3) | 2 (0.5)
Number of ED visits in past year
0 | 4661 (66.7) | 3862 (67.4) | 799 (63.4) | 261 (67.3)
1-5 | 2326 (33.3) | 1865 (32.6) | 461 (36.6) | 127 (32.7)

All‐Cause Readmissions

After exclusion of 997 patients who died, were discharged to skilled nursing or rehabilitation facilities, or left AMA, 6987 patients were included (Figure 1). Of these, 1260 had a readmission (18%). Approximately half were considered unlikely to be psychiatrically ill, 22% were considered moderately likely, and 29% likely (Table 2).

Bivariate Analysis of Predictors of Readmission Risk
 | All-Cause Readmission Analysis | Potentially Avoidable Readmission Analysis
 | No. in Cohort (%) | % of Patients With ACR | P Valuea | No. in Cohort (%) | % of Patients With PAR | P Valuea
  • NOTE: Abbreviations: ACR, all‐cause readmission; PAR, potentially avoidable readmission.

  • All analyses performed with χ2 trend test for ordinal variables in more than 2 categories or Fisher exact test for dichotomous variables. Comparison group is patients without a readmission in all analyses. PAR analysis excludes patients with nonpreventable readmissions as determined by the SQLape algorithm.

Entire cohort | 6987 | 18.0 |  | 6115 | 6.3 | 
Likelihood of psychiatric illness
Unlikely | 3424 (49) | 16.5 |  | 3026 (49) | 5.6 | 
Moderate | 1564 (22) | 23.5 |  | 1302 (21) | 7.1 | 
Likely | 1999 (29) | 16.4 |  | 1787 (29) | 6.4 | 
Likely versus unlikely |  |  | 0.87 |  |  | 0.20
Moderate+likely versus unlikely |  |  | 0.001 |  |  | 0.02
Likelihood of substance abuse |  |  | 0.01 |  |  | 0.20
Unlikely | 5804 (83) | 18.7 |  | 5104 (83) | 6.5 | 
Likely | 1183 (17) | 14.8 |  | 1011 (17) | 5.4 | 0.14
Number of prescribed outpatient psychotropic medications |  |  | <0.001 |  |  | 0.04
0 | 4420 (63) | 16.3 |  | 3931 (64) | 5.9 | 
1 | 1725 (25) | 20.4 |  | 1481 (24) | 7.2 | 
2 | 781 (11) | 22.3 |  | 653 (11) | 7.0 | 
>2 | 61 (1) | 23.0 |  | 50 (1) | 6.0 | 
Prescribed antidepressant | 1474 (21) | 20.6 | 0.005 | 1248 (20) | 6.2 | 0.77
Prescribed antipsychotic | 375 (5) | 22.4 | 0.02 | 315 (5) | 7.6 | 0.34
Prescribed mood stabilizer | 81 (1) | 18.5 | 0.91 | 69 (1) | 4.4 | 0.49
Prescribed anxiolytic | 1814 (26) | 21.8 | <0.001 | 1537 (25) | 7.7 | 0.01
Prescribed stimulant | 101 (2) | 26.7 | 0.02 | 83 (1) | 10.8 | 0.09
Prescribed pharmacologic treatment for substance abuse | 79 (1) | 25.3 | 0.09 | 60 (1) | 1.7 | 0.14
Number of psychiatric diagnoses on outpatient problem list |  |  | 0.31 |  |  | 0.74
0 | 6405 (92) | 18.2 |  | 5509 (90) | 6.3 | 
1 or more | 582 (8) | 16.5 |  | 474 (8) | 7.0 | 
Outpatient diagnosis of substance abuse | 159 (2) | 13.2 | 0.11 | 144 (2) | 4.2 | 0.28
Outpatient diagnosis of any psychiatric illness | 582 (8) | 16.5 | 0.31 | 517 (8) | 8.0 | 0.73
Discharge diagnosis of depression | 774 (11) | 17.7 | 0.80 | 690 (11) | 7.7 | 0.13
Discharge diagnosis of schizophrenia | 56 (1) | 23.2 | 0.31 | 50 (1) | 14 | 0.03
Discharge diagnosis of bipolar disorder | 101 (1) | 10.9 | 0.06 | 92 (2) | 2.2 | 0.10
Discharge diagnosis of anxiety | 1192 (17) | 15.0 | 0.003 | 1080 (18) | 6.2 | 0.83
Discharge diagnosis of substance abuse | 885 (13) | 14.8 | 0.008 | 803 (13) | 6.1 | 0.76
Discharge diagnosis of any psychiatric illness | 1839 (26) | 16.0 | 0.008 | 1654 (27) | 6.6 | 0.63
Substance abuse consultation as inpatient | 284 (4) | 14.4 | 0.11 | 252 (4) | 3.6 | 0.07

In bivariate analysis (Table 2), likelihood of psychiatric illness (P<0.01) and increasing numbers of prescribed outpatient psychiatric medications (P<0.01) were significantly associated with ACR. In multivariate analysis, each additional prescribed outpatient psychiatric medication (odds ratio [OR]: 1.10, 95% confidence interval [CI]: 1.01-1.20) and any outpatient prescription of an anxiolytic in particular (OR: 1.16, 95% CI: 1.00-1.35) were associated with increased risk of ACR, whereas discharge diagnoses of anxiety (OR: 0.82, 95% CI: 0.68-0.99) and substance abuse (OR: 0.80, 95% CI: 0.65-0.99) were associated with lower risk of ACR (Table 3).

Multivariate Analysis of Predictors of Readmission Risk
Predictor | ACR, OR (95% CI) | PAR, OR (95% CI)a
  • NOTE: Abbreviations: ACR, all‐cause readmissions; CI, confidence interval; OR, odds ratio; PAR, potentially avoidable readmissions.

  • All analyses performed by multivariate logistic regression adjusting for patient age, gender, ethnicity, language spoken, marital status, insurance source, discharge location, length of stay, comorbidities (Elixhauser), number of outpatient medications, number of prior emergency department visits, and admissions in the prior year. Analyses were performed by entering each exposure of interest into the model separately while adjusting for all covariates. Comparison group is patients without any readmission for all analyses.

Likely psychiatric disease | 0.97 (0.82-1.14) | 1.20 (0.92-1.56)
Likely and possible psychiatric disease | 1.07 (0.94-1.22) | 1.18 (0.94-1.47)
Likely substance abuse | 0.83 (0.69-0.99) | 0.85 (0.63-1.16)
Psychiatric diagnosis on outpatient problem list | 0.97 (0.76-1.23) | 1.04 (0.70-1.55)
Substance abuse diagnosis on outpatient problem list | 0.63 (0.39-1.02) | 0.65 (0.28-1.52)
Increasing number of prescribed psychiatric medications | 1.10 (1.01-1.20) | 1.00 (0.86-1.16)
Outpatient prescription for antidepressant | 1.10 (0.94-1.29) | 0.86 (0.66-1.13)
Outpatient prescription for antipsychotic | 1.03 (0.79-1.34) | 0.93 (0.59-1.45)
Outpatient prescription for anxiolytic | 1.16 (1.00-1.35) | 1.13 (0.88-1.44)
Outpatient prescription for methadone or buprenorphine | 1.15 (0.67-1.98) | 0.18 (0.03-1.36)
Discharge diagnosis of depression | 1.06 (0.86-1.30) | 1.49 (1.09-2.04)
Discharge diagnosis of schizophrenia | 1.43 (0.75-2.74) | 2.63 (1.13-6.13)
Discharge diagnosis of bipolar disorder | 0.53 (0.28-1.02) | 0.35 (0.09-1.45)
Discharge diagnosis of anxiety | 0.82 (0.68-0.99) | 1.11 (0.83-1.49)
Discharge diagnosis of substance abuse | 0.80 (0.65-0.99) | 1.05 (0.75-1.46)
Discharge diagnosis of any psychiatric illness | 0.88 (0.75-1.02) | 1.22 (0.96-1.56)
Addiction team consult while inpatient | 0.82 (0.58-1.17) | 0.58 (0.29-1.17)

Potentially Avoidable Readmissions

After further exclusion of 872 patients who had unavoidable readmissions according to the SQLape algorithm, 6115 patients remained. Of these, 388 had a PAR within 30 days (6.3%, Table 1).

In bivariate analysis (Table 2), the likelihood of psychiatric illness (P=0.02), number of outpatient psychiatric medications (P=0.04), and prescription of anxiolytics (P=0.01) were significantly associated with PAR, as they were with ACR. A discharge diagnosis of schizophrenia was also associated with PAR (P=0.03).

In multivariate analysis, only discharge diagnoses of depression (OR: 1.49, 95% CI: 1.09‐2.04) and schizophrenia (OR: 2.63, 95% CI: 1.13‐6.13) were associated with PAR.
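
As a reporting note, each adjusted OR and its 95% CI come directly from the fitted logistic coefficient and its standard error; back-calculating from the depression-PAR estimate above (for illustration only, not an additional analysis) shows the internal consistency of the reported interval:

```latex
% Back-calculation for illustration, using the reported depression-PAR estimate.
\[
  \mathrm{OR} = e^{\hat\beta}, \qquad
  95\%\ \mathrm{CI} = e^{\hat\beta \pm 1.96\,\mathrm{SE}(\hat\beta)}
\]
\[
  \hat\beta = \ln(1.49) \approx 0.399, \qquad
  \mathrm{SE}(\hat\beta) \approx \frac{\ln(2.04)-\ln(1.09)}{2 \times 1.96} \approx 0.160,
  \qquad e^{\,0.399 \pm 1.96 \times 0.160} \approx (1.09,\ 2.04).
\]
```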

DISCUSSION

Comorbid psychiatric illness was common among patients admitted to the medicine wards. Patients with documented discharge diagnoses of depression or schizophrenia were at highest risk for a potentially avoidable 30‐day readmission, whereas those prescribed more psychiatric medications were at increased risk for ACR. These findings were independent of a comprehensive set of risk factors among medicine inpatients in this retrospective cohort study.

This study extends prior work indicating patients with psychiatric disease have increased healthcare utilization,[6, 7, 8, 9, 10, 11] by identifying at least 2 subpopulations of the psychiatrically ill (those with depression and schizophrenia) at particularly high risk for 30‐day PAR. To our knowledge, this is the first study to identify schizophrenia as a predictor of hospital readmission for medical illnesses. One prior study prospectively identified depression as increasing the 90‐day risk of readmission 3‐fold, although medication usage was not assessed,[6] and our report strengthens this association.

There are several possible explanations why these two subpopulations in particular would be more predisposed to readmissions that are potentially avoidable. It is known that patients with schizophrenia, for example, live on average 20 years less than the general population, and most of this excess mortality is due to medical illnesses.[20, 21] Reasons for this may include poor healthcare access, adverse effects of medication, and socioeconomic factors among others.[21, 22] All of these reasons may contribute to the increased PAR risk in this population, mediated, for example, by decreased ability to adhere to postdischarge care plans. Successful community‐based interventions to decrease these inequities have been described and could serve as a model for addressing the increased readmission risk in this population.[23]

Our finding that patients with a greater number of prescribed psychiatric medications are at increased risk for ACR may be expected, given other studies that have highlighted the crucial importance of medications in postdischarge adverse events, including readmissions.[24] Indeed, medication‐related errors and toxicities are the most common postdischarge adverse events experienced by patients.[25] Whether psychiatric medications are particularly prone to causing postdischarge adverse events or whether these medications represent greater psychiatric comorbidity cannot be answered by this study.

It was surprising but reassuring that substance abuse was not a predictor of short‐term readmissions as identified using our measures; in fact, a discharge diagnosis of substance abuse was associated with a lower risk of ACR than in comparator patients. It seems unlikely that we had inadequate power to find such a result, as we found a statistically significant negative association in the ACR population, and 17% of our population overall was considered likely to have a substance abuse comorbidity. However, it is likely the burden of disease was underestimated, given that we did not try to determine the contribution of long‐term substance abuse to medical diseases that may increase readmission risk (eg, liver cirrhosis from alcohol use). Unlike other conditions in our study, patients with substance abuse diagnoses at BWH can be seen by a dedicated multidisciplinary team during their inpatient stay to start treatment and plan for postdischarge follow‐up; this may have played a role in our findings.

A discharge diagnosis of anxiety was also somewhat protective against readmission, whereas a prescription of an anxiolytic (predominantly benzodiazepines) increased risk; many patients prescribed a benzodiazepine do not have a Diagnostic and Statistical Manual of Mental Disorders, 4th Edition (DSM‐IV) diagnosis of an anxiety disorder, and thus these findings may reflect different patient populations. Discharging physicians may have used anxiety as a discharge diagnosis in patients in whom they suspected somatic complaints without an organic basis; these patients may be at lower risk of readmission.

Discharge diagnoses of psychiatric illnesses were associated with ACR and PAR in our study, but outpatient diagnoses were not. This likely reflects greater severity of illness (documentation as a treated diagnosis on discharge indicates the illness was relevant during the hospitalization), but may also reflect inaccuracies of diagnosis and lack of assessment of severity in outpatient coding, which would bias toward null findings. Although many of the patients in our study were seen by primary care doctors within the Partners system, some patients had outside primary care physicians and we did not have access to these records. This may also have decreased our ability to find associations.

The findings of our study should be interpreted in the context of the study design. Our study was retrospective, which limited our ability to conclusively diagnose psychiatric disease presence or severity (as is true of most institutions, validated psychiatric screening was not routinely used at our institutions on hospital admission or discharge). However, we used a conservative scale to classify the likelihood of patients having psychiatric or substance abuse disorders, and we used other metrics to establish the presence of illness, such as the number of prescribed medications, inpatient consultation with a substance abuse service, and hospital discharge diagnoses. This approach also allowed us to quickly identify a large cohort unaffected by selection bias. Our study was single center, potentially limiting generalizability. Although we capture at least 80% of readmissions, we were not able to capture all readmissions, and we cannot rule out that patients readmitted elsewhere are different than those readmitted within the Partners system. Last, the SQLape algorithm is not perfectly sensitive or specific in identifying avoidable readmissions,[13] but it does eliminate many readmissions that are clearly unavoidable, creating an enriched cohort of patients whose readmissions are more likely to be avoidable and therefore potentially actionable.

We suggest that our study findings first be considered when risk stratifying patients before hospital discharge in terms of readmission risk. Patients with depression and schizophrenia would seem to merit postdischarge interventions to decrease their potentially avoidable readmissions. Compulsory community treatment (a feature of treatment in Canada and Australia that is ordered by clinicians) has been shown to decrease mortality due to medical illness in patients who have been hospitalized and are psychiatrically ill, and addition of these services to postdischarge care may be useful.[23] Inpatient physicians could work to ensure follow‐up not just with medical providers but with robust outpatient mental health programs to decrease potentially avoidable readmission risk, and administrators could work to ensure close linkages with these community resources. Studies evaluating the impact of these types of interventions would need to be conducted. Patients with polypharmacy, including psychiatric medications, may benefit from interventions to improve medication safety, such as enhanced medication reconciliation and pharmacist counseling.[26]

Our study suggests that patients with depression, those with schizophrenia, and those who have increased numbers of prescribed psychiatric medications should be considered at high risk for readmission for medical illnesses. Targeting interventions to these patients may be fruitful in preventing avoidable readmissions.

Acknowledgements

The authors thank Dr. Yves Eggli for screening the database for potentially avoidable readmissions using the SQLape algorithm.

Disclosures

Dr. Donzé was supported by the Swiss National Science Foundation and the Swiss Foundation for Medical-Biological Scholarships. The authors otherwise have no conflicts of interest to disclose. The content is solely the responsibility of the authors and does not necessarily represent the official views of the US Department of Veterans Affairs.

Files
References
  1. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee‐for‐service program. N Engl J Med. 2009;360(14):1418-1428.
  2. Kansagara D, Englander H, Salanitro A, et al. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688-1698.
  3. Axon RN, Williams MV. Hospital readmission as an accountability measure. JAMA. 2011;305(5):504-505.
  4. Arbaje AI, Wolff JL, Yu Q, Powe NR, Anderson GF, Boult C. Postdischarge environmental and socioeconomic factors and the likelihood of early hospital readmission among community‐dwelling Medicare beneficiaries. Gerontologist. 2008;48(4):495-504.
  5. Berkman ND, Sheridan SL, Donahue KE, Halpern DJ, Crotty K. Low health literacy and health outcomes: an updated systematic review. Ann Intern Med. 2011;155(2):97-107.
  6. Kartha A, Anthony D, Manasseh CS, et al. Depression is a risk factor for rehospitalization in medical inpatients. Prim Care Companion J Clin Psychiatry. 2007;9(4):256-262.
  7. Li Y, Glance LG, Cai X, Mukamel DB. Mental illness and hospitalization for ambulatory care sensitive medical conditions. Med Care. 2008;46(12):1249-1256.
  8. Raven MC, Carrier ER, Lee J, Billings JC, Marr M, Gourevitch MN. Substance use treatment barriers for patients with frequent hospital admissions. J Subst Abuse Treat. 2010;38(1):22-30.
  9. Shepard DS, Daley M, Ritter GA, Hodgkin D, Beinecke RH. Managed care and the quality of substance abuse treatment. J Ment Health Policy Econ. 2002;5(4):163-174.
  10. Rockett IR, Putnam SL, Jia H, Chang CF, Smith GS. Unmet substance abuse treatment need, health services utilization, and cost: a population‐based emergency department study. Ann Emerg Med. 2005;45(2):118-127.
  11. Brennan PL, Kagay CR, Geppert JJ, Moos RH. Elderly Medicare inpatients with substance use disorders: characteristics and predictors of hospital readmissions over a four‐year interval. J Stud Alcohol. 2000;61(6):891-895.
  12. Schnipper JL, Roumie CL, Cawthon C, et al. Rationale and design of the Pharmacist Intervention for Low Literacy in Cardiovascular Disease (PILL‐CVD) study. Circ Cardiovasc Qual Outcomes. 2010;3(2):212-219.
  13. Halfon P, Eggli Y, Pretre‐Rohrbach I, Meylan D, Marazzi A, Burnand B. Validation of the potentially avoidable hospital readmission rate as a routine indicator of the quality of hospital care. Med Care. 2006;44(11):972-981.
  14. Halfon P, Eggli Y, Melle G, Chevalier J, Wasserfallen JB, Burnand B. Measuring potentially avoidable hospital readmissions. J Clin Epidemiol. 2002;55:573-587.
  15. Agency for Healthcare Research and Quality Quality Indicators. Prevention Quality Indicators (PQI) Composite Measure Workgroup Final Report. April 7, 2006. Available at: http://www.qualityindicators.ahrq.gov/modules/pqi_resources.aspx. Accessed June 1, 2012.
  16. Hasan O, Meltzer DO, Shaykevich SA, et al. Hospital readmission in general medicine patients: a prediction model. J Gen Intern Med. 2010;25(3):211-219.
  17. Walraven C, Dhalla IA, Bell C, et al. Derivation and validation of an index to predict early death or unplanned readmission after discharge from hospital to the community. CMAJ. 2010;182(6):551-557.
  18. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27.
  19. Parks J, Svendsen D, Singer P, Foti ME, eds. Morbidity and mortality in people with serious mental illness. October 2006. National Association of State Mental Health Program Directors, Medical Directors Council. Available at: http://www.nasmhpd.org/docs/publications/MDCdocs/Mortality%20and%20Morbidity%20Final%20Report%208.18.08.pdf. Accessed January 13, 2013.
  20. Kisely S, Smith M, Lawrence D, Maaten S. Mortality in individuals who have had psychiatric treatment: population‐based study in Nova Scotia. Br J Psychiatry. 2005;187:552-558.
  21. Kisely S, Smith M, Lawrence D, Cox M, Campbell LA, Maaten S. Inequitable access for mentally ill patients to some medically necessary procedures. CMAJ. 2007;176(6):779-784.
  22. Mitchell AJ, Malone D, Doebbeling CC. Quality of medical care for people with and without comorbid mental illness and substance misuse: systematic review of comparative studies. Br J Psychiatry. 2009;194:491-499.
  23. Kisely S, Preston N, Xiao J, Lawrence D, Louise S, Crowe E. Reducing all‐cause mortality among patients with psychiatric disorders: a population‐based study. CMAJ. 2013;185(1):E50-E56.
  24. Walraven C, Jennings A, Taljaard M, et al. Incidence of potentially avoidable urgent readmissions and their relation to all‐cause urgent readmissions. CMAJ. 2011;183(14):E1067-E1072.
  25. Forster AJ, Murff HJ, Peterson JF, Gandhi TK, Bates DW. The incidence and severity of adverse events affecting patients after discharge from the hospital. Ann Intern Med. 2003;138(3):161-167.
  26. Mueller SK, Sponsler KC, Kripalani S, Schnipper JL. Hospital‐based medication reconciliation practices: a systematic review. Arch Intern Med. 2012;172(14):1057-1069.
Issue
Journal of Hospital Medicine - 8(8)
Page Number
450-455

Readmissions to the hospital are common and costly.[1] However, identifying patients prospectively who are likely to be readmitted and who may benefit from interventions to reduce readmission risk has proven challenging, with published risk scores having only moderate ability to discriminate between patients likely and unlikely to be readmitted.[2] One reason for this may be that published studies have not typically focused on patients who are cognitively impaired, psychiatrically ill, have low health or English literacy, or have poor social supports, all of whom may represent a substantial fraction of readmitted patients.[2, 3, 4, 5]

Psychiatric disease, in particular, may contribute to increased readmission risk for nonpsychiatric (medical) illness, and is associated with increased utilization of healthcare resources.[6, 7, 8, 9, 10, 11] For example, patients with mental illness who were discharged from New York hospitals were more likely to be rehospitalized and had more costly readmissions than patients without these comorbidities, including a length of stay nearly 1 day longer on average.[7] An unmet need for treatment of substance abuse was projected to cost Tennessee $772 million of excess healthcare costs in 2000, mostly incurred through repeat hospitalizations and emergency department (ED) visits.[10]

Despite this, few investigators have considered the role of psychiatric disease and/or substance abuse in medical readmission risk. The purpose of the current study was to evaluate the role of psychiatric illness and substance abuse in unselected medical patients to determine their relative contributions to 30‐day all‐cause readmissions (ACR) and potentially avoidable readmissions (PAR).

METHODS

Patients and Setting

We conducted a retrospective cohort study of consecutive adult patients discharged from medicine services at Brigham and Women's Hospital (BWH), a 747‐bed tertiary referral center and teaching hospital, between July 1, 2009 and June 30, 2010. Most patients are cared for by resident housestaff teams at BWH (approximately 25% are cared for by physician assistants working directly with attending physicians), and approximately half receive primary care in the Partners system, which has a shared electronic medical record (EMR). Outpatient mental health services are provided by Partners‐associated mental health professionals including those at McLean Hospital and MassHealth (Medicaid)‐associated sites through the Massachusetts Behavioral Health Partnership. Exclusion criteria were death in the hospital or discharge to another acute care facility. We also excluded patients who left against medical advice (AMA). The study protocol was approved by the Partners Institutional Review Board.

Outcome

The primary outcomes were ACR and PAR within 30 days of discharge. First, we identified all 30‐day readmissions to BWH or to 2 other hospitals in the Partners Healthcare Network (previous studies have shown that 80% of all readmitted patients are readmitted to 1 of these 3 hospitals).[12] For patients with multiple readmissions, only the first readmission was included in the dataset.

To find potentially avoidable readmissions, administrative and billing data for these patients were processed using the SQLape (SQLape s.a.r.l., Corseaux, Switzerland) algorithm, which identifies PAR by excluding patients who undergo planned follow‐up treatment (such as a cycle of planned chemotherapy) or are readmitted for conditions unrelated in any way to the index hospitalization.[13, 14] Common complications of treatment are categorized as potentially avoidable, such as development of a deep venous thrombosis, a decubitus ulcer after prolonged bed rest, or bleeding complications after starting anticoagulation. Although the algorithm identifies theoretically preventable readmissions, the algorithm does not quantify how preventable they are, and these are thus referred to as potentially avoidable. This is similar to other admission metrics, such as the Agency for Healthcare Research and Quality's prevention quality indicators, which are created from a list of ambulatory care‐sensitive conditions.[15] SQLape has the advantage of being a specific tool for readmissions. Patients with 30‐day readmissions identified by SQLape as planned or unlikely to be avoidable were excluded in the PAR analysis, although still included in ACR analysis. In each case, the comparison group is patients without any readmission.

Predictors

Our predictors of interest included the overall prevalence of a psychiatric diagnosis or diagnosis of substance abuse, the presence of specific psychiatric diagnoses, and prescription of psychiatric medications to help assess the independent contribution of these comorbidities to readmission risk.

We used a combination of easily obtainable inpatient and outpatient clinical and administrative data to identify relevant patients. Patients were considered likely to be psychiatrically ill if they: (1) had a psychiatric diagnosis on their Partners outpatient EMR problem list and were prescribed a medication to treat that condition as an outpatient, or (2) had an International Classification of Diseases, 9th Revision diagnosis of a psychiatric illness at hospital discharge. Patients were considered to have moderate probability of disease if they: (1) had a psychiatric diagnosis on their outpatient problem list, or (2) were prescribed a medication intended to treat a psychiatric condition as an outpatient. Patients were considered unlikely to have psychiatric disease if none of these criteria were met. Patients were considered likely to have a substance abuse disorder if they had this diagnosis on their outpatient EMR, or were prescribed a medication to treat this condition (eg, buprenorphine/naloxone), or received inpatient consultation from a substance abuse treatment team during their inpatient hospitalization, and were considered unlikely if none of these were true. We also evaluated individual categories of psychiatric illness (schizophrenia, depression, anxiety, bipolar disorder) and of psychotropic medications (antidepressants, antipsychotics, anxiolytics).

Potential Confounders

Data on potential confounders, based on prior literature,[16, 17] collected at the index admission were derived from electronic administrative, clinical, and billing sources, including the Brigham Integrated Computer System and the Partners Clinical Data Repository. They included patient age, gender, ethnicity, primary language, marital status, insurance status, living situation prior to admission, discharge location, length of stay, Elixhauser comorbidity index,[18] total number of medications prescribed, and number of prior admissions and ED visits in the prior year.

Statistical Analysis

Bivariate comparisons of each of the predictors of ACR and PAR risk (ie, patients with a 30‐day ACR or PAR vs those not readmitted within 30 days) were conducted using 2 trend tests for ordinal predictors (eg, likelihood of psychiatric disease), and 2 or Fisher exact test for dichotomous predictors (eg, receipt of inpatient substance abuse counseling).

We then used multivariate logistic regression analysis to adjust for all of the potential confounders noted above, entering each variable related to psychiatric illness into the model separately (eg, likely psychiatric illness, number of psychiatric medications). In a secondary analysis, we removed potentially collinear variables from the final model; as this did not alter the results, the full model is presented. We also conducted a secondary analysis where we included patients who left against medical advice (AMA), which also did not alter the results. Two‐sided P values <0.05 were considered significant, and all analyses were performed using the SAS version 9.2 (SAS Institute, Inc., Cary, NC).

RESULTS

There were 7984 unique patients discharged during the study period. Patients were generally white and English speaking; just over half of admissions came from the ED (Table 1). Of note, nearly all patients were insured, as are almost all patients in Massachusetts. They had high degrees of comorbid illness and large numbers of prescribed medications. Nearly 30% had at least 1 hospital admission within the prior year.

Baseline Characteristics of the Study Population
CharacteristicAll Patients, N (%)Not Readmitted, N (%)ACR, N (%)PAR N (%)a
  • NOTE: Abbreviations: ACR, all‐cause readmission; ED, emergency department; PAR, potentially avoidable readmission. PAR cohort excludes patients with unavoidable readmissions.

  • Percentages may not add up to 100% due to rounding or when subcategories were very small (<0.5%). Previously married includes patients who were divorced or widowed.

Study cohort6987 (100)5727 (72)1260 (18)388 (5.6)
Age, y    
<501663 (23.8)1343 (23.5)320 (25.4)85 (21.9)
51652273 (32.5)1859 (32.5)414 (32.9)136 (35.1)
66791444 (20.7)1176 (20.5)268 (18.6)80 (20.6)
>801607 (23.0)1349 (23.6)258 (16.1)87 (22.4)
Female3604 (51.6)2967 (51.8)637 (50.6)206 (53.1)
Race    
White5126 (73.4)4153 (72.5)973 (77.2)300 (77.3)
Black1075 (15.4)899 (15.7)176 (14.0)53 (13.7)
Hispanic562 (8.0)477 (8.3)85 (6.8)28 (7.2)
Other224 (3.2)198 (3.5)26 (2.1)7 (1.8)
Primary language    
English6345 (90.8)5180 (90.5)1165 (92.5)356 (91.8)
Marital status    
Married3642 (52.1)2942 (51.4)702 (55.7)214 (55.2)
Single, never married1662 (23.8)1393 (24.3)269 (21.4)73 (18.8)
Previously married1683 (24.1)1386 (24.2)289 (22.9)101 (26.0)
Insurance    
Medicare3550 (50.8)2949 (51.5)601 (47.7)188 (48.5)
Medicaid539 (7.7)430 (7.5)109 (8.7)33 (8.5)
Private2892 (41.4)2344 (40.9)548 (43.5)167 (43.0)
Uninsured6 (0.1)4 (0.1)2 (0.1)0 (0)
Source of index admission    
Clinic or home2136 (30.6)1711 (29.9)425 (33.7)117 (30.2)
Emergency department3592 (51.4)2999 (52.4)593 (47.1)181 (46.7)
Nursing facility1204 (17.2)977 (17.1)227 (18.0)84 (21.7)
Other55 (0.1)40 (0.7)15 (1.1)6 (1.6)
Length of stay, d    
021757 (25.2)1556 (27.2)201 (16.0)55 (14.2)
342200 (31.5)1842 (32.2)358 (28.4)105 (27.1)
571521 (21.8)1214 (21.2)307 (24.4)101 (26.0)
>71509 (21.6)1115 (19.5)394 (31.3)127 (32.7)
Elixhauser comorbidity index score    
011987 (28.4)1729 (30.2)258 (20.5)66 (17.0)
271773 (25.4)1541 (26.9)232 (18.4)67 (17.3)
8131535 (22.0)1212 (21.2)323 (25.6)86 (22.2)
>131692 (24.2)1245 (21.7)447 (35.5)169 (43.6)
Medications prescribed as outpatient    
061684 (24.1)1410 (24.6)274 (21.8)72 (18.6)
791601 (22.9)1349 (23.6)252 (20.0)77 (19.9)
10131836 (26.3)1508 (26.3)328 (26.0)107 (27.6)
>131866 (26.7)1460 (25.5)406 (32.2)132 (34.0)
Number of admissions in past year    
04816 (68.9)4032 (70.4)784 (62.2)279 (71.9)
152075 (29.7)1640 (28.6)435 (34.5)107 (27.6)
>596 (1.4)55 (1.0)41 (3.3)2 (0.5)
Number of ED visits in past year    
04661 (66.7)3862 (67.4)799 (63.4)261 (67.3)
152326 (33.3)1865 (32.6)461 (36.6)127 (32.7)

All‐Cause Readmissions

After exclusion of 997 patients who died, were discharged to skilled nursing or rehabilitation facilities, or left AMA, 6987 patients were included (Figure 1). Of these, 1260 had a readmission (18%). Approximately half were considered unlikely to be psychiatrically ill, 22% were considered moderately likely, and 29% likely (Table 2).

Bivariate Analysis of Predictors of Readmission Risk
 All‐Cause Readmission AnalysisPotentially Avoidable Readmission Analysis
 No. in Cohort (%)% of Patients With ACRP ValueaNo. in Cohort (%)% of Patients With PARP Valuea
  • NOTE: Abbreviations: ACR, all‐cause readmission, PAR, potentially avoidable readmission.

  • All analyses performed with 2 trend test for ordinal variables in more than 2 categories or Fisher exact test for dichotomous variables. Comparison group is patients without a readmission in all analyses. PAR analysis excludes patients with nonpreventable readmissions as determined by the SQLape algorithm.

Entire cohort698718.0 61156.3 
Likelihood of psychiatric illness      
Unlikely3424 (49)16.5 3026 (49)5.6 
Moderate1564 (22)23.5 1302 (21)7.1 
Likely1999 (29)16.4 1787 (29)6.4 
Likely versus unlikely  0.87  0.20
Moderate+likely versus unlikely  0.001  0.02
Likelihood of substance abuse  0.01  0.20
Unlikely5804 (83)18.7 5104 (83)6.5 
Likely1183 (17)14.8 1011 (17)5.40.14
Number of prescribed outpatient psychotropic medications  <0.001  0.04
04420 (63)16.3 3931 (64)5.9 
11725 (25)20.4 1481 (24)7.2 
2781 (11)22.3 653 (11)7.0 
>261 (1)23.0 50 (1)6.0 
Prescribed antidepressant1474 (21)20.60.0051248 (20)6.20.77
Prescribed antipsychotic375 (5)22.40.02315 (5)7.60.34
Prescribed mood stabilizer81 (1)18.50.9169 (1)4.40.49
Prescribed anxiolytic1814 (26)21.8<0.0011537 (25)7.70.01
Prescribed stimulant101 (2)26.70.0283 (1)10.80.09
Prescribed pharmacologic treatment for substance abuse79 (1)25.30.0960 (1)1.70.14
Number of psychiatric diagnoses on outpatient problem list  0.31  0.74
06405 (92)18.2 5509 (90)6.3 
1 or more582 (8)16.5 474 (8)7.0 
Outpatient diagnosis of substance abuse159 (2)13.20.11144 (2)4.20.28
Outpatient diagnosis of any psychiatric illness582 (8)16.50.31517 (8)8.00.73
Discharge diagnosis of depression774 (11)17.70.80690 (11)7.70.13
Discharge diagnosis of schizophrenia56 (1)23.20.3150 (1)140.03
Discharge diagnosis of bipolar disorder101 (1)10.90.0692 (2)2.20.10
Discharge diagnosis of anxiety1192 (17)15.00.0031080 (18)6.20.83
Discharge diagnosis of substance abuse885 (13)14.80.008803 (13)6.10.76
Discharge diagnosis of any psychiatric illness1839 (26)16.00.0081654 (27)6.60.63
Substance abuse consultation as inpatient284 (4)14.40.11252 (4)3.60.07

In bivariate analysis (Table 2), likelihood of psychiatric illness (P<0.01) and increasing numbers of prescribed outpatient psychiatric medications (P<0.01) were significantly associated with ACR. In multivariate analysis, each additional prescribed outpatient psychiatric medication increased ACR risk (odds ratio [OR]: 1.10, 95% confidence interval [CI]: 1.01‐1.20) or any prescription of an anxiolytic in particular (OR: 1.16, 95% CI: 1.001.35) was associated with increased risk of ACR, whereas discharge diagnoses of anxiety (OR: 0.82, 95% CI: 0.68‐0.99) and substance abuse (OR: 0.80, 95% CI: 0.65‐0.99) were associated with lower risk of ACR (Table 3).

Multivariate Analysis of Predictors of Readmission Risk
 ACR, OR (95% CI)PAR, OR (95% CI)a
  • NOTE: Abbreviations: ACR, all‐cause readmissions; CI, confidence interval; OR, odds ratio; PAR, potentially avoidable readmissions.

  • All analyses performed by multivariate logistic regression adjusting for patient age, gender, ethnicity, language spoken, marital status, insurance source, discharge location, length of stay, comorbidities (Elixhauser), number of outpatient medications, number of prior emergency department visits, and admissions in the prior year. Analyses were performed by entering each exposure of interest into the model separately while adjusting for all covariates. Comparison group is patients without any readmission for all analyses.

Likely psychiatric disease0.97 (0.82‐1.14)1.20 (0.92‐1.56)
Likely and possible psychiatric disease1.07 (0.94‐1.22)1.18 (0.94‐1.47)
Likely substance abuse0.83 (0.69‐0.99)0.85 (0.63‐1.16)
Psychiatric diagnosis on outpatient problem list0.97 (0.76‐1.23)1.04 (0.70‐1.55)
Substance abuse diagnosis on outpatient problem list0.63 (0.39‐1.02)0.65 (0.28‐1.52)
Increasing number of prescribed psychiatric medications1.10 (1.01‐1.20)1.00 (0.86‐1.16)
Outpatient prescription for antidepressant1.10 (0.94‐1.29)0.86 (0.66‐1.13)
Outpatient prescription for antipsychotic1.03 (0.79‐1.34)0.93 (0.59‐1.45)
Outpatient prescription for anxiolytic1.16 (1.001.35)1.13 (0.88‐1.44)
Outpatient prescription for methadone or buprenorphine1.15 (0.67‐1.98)0.18 (0.03‐1.36)
Discharge diagnosis of depression1.06 (0.86‐1.30)1.49 (1.09‐2.04)
Discharge diagnosis of schizophrenia1.43 (0.75‐2.74)2.63 (1.13‐6.13)
Discharge diagnosis of bipolar disorder0.53 (0.28‐1.02)0.35 (0.09‐1.45)
Discharge diagnosis of anxiety0.82 (0.68‐0.99)1.11 (0.83‐1.49)
Discharge diagnosis of substance abuse0.80 (0.65‐0.99)1.05 (0.75‐1.46)
Discharge diagnosis of any psychiatric illness0.88 (0.75‐1.02)1.22 (0.96‐1.56)
Addiction team consult while inpatient0.82 (0.58‐1.17)0.58 (0.29‐1.17)

Potentially Avoidable Readmissions

After further exclusion of 872 patients who had unavoidable readmissions according to the SQLape algorithm, 6115 patients remained. Of these, 388 had a PAR within 30 days (6.3%, Table 1).

In bivariate analysis (Table 2), the likelihood of psychiatric illness (P=0.02), number of outpatient psychiatric medications (P=0.04), and prescription of anxiolytics (P=0.01) were significantly associated with PAR, as they were with ACR. A discharge diagnosis of schizophrenia was also associated with PAR (P=0.03).

In multivariate analysis, only discharge diagnoses of depression (OR: 1.49, 95% CI: 1.09‐2.04) and schizophrenia (OR: 2.63, 95% CI: 1.13‐6.13) were associated with PAR.

DISCUSSION

Comorbid psychiatric illness was common among patients admitted to the medicine wards. Patients with documented discharge diagnoses of depression or schizophrenia were at highest risk for a potentially avoidable 30‐day readmission, whereas those prescribed more psychiatric medications were at increased risk for ACR. These findings were independent of a comprehensive set of risk factors among medicine inpatients in this retrospective cohort study.

This study extends prior work indicating patients with psychiatric disease have increased healthcare utilization,[6, 7, 8, 9, 10, 11] by identifying at least 2 subpopulations of the psychiatrically ill (those with depression and schizophrenia) at particularly high risk for 30‐day PAR. To our knowledge, this is the first study to identify schizophrenia as a predictor of hospital readmission for medical illnesses. One prior study prospectively identified depression as increasing the 90‐day risk of readmission 3‐fold, although medication usage was not assessed,[6] and our report strengthens this association.

There are several possible explanations why these two subpopulations in particular would be more predisposed to readmissions that are potentially avoidable. It is known that patients with schizophrenia, for example, live on average 20 years less than the general population, and most of this excess mortality is due to medical illnesses.[20, 21] Reasons for this may include poor healthcare access, adverse effects of medication, and socioeconomic factors among others.[21, 22] All of these reasons may contribute to the increased PAR risk in this population, mediated, for example, by decreased ability to adhere to postdischarge care plans. Successful community‐based interventions to decrease these inequities have been described and could serve as a model for addressing the increased readmission risk in this population.[23]

Our finding that patients with a greater number of prescribed psychiatric medications are at increased risk for ACR may be expected, given other studies that have highlighted the crucial importance of medications in postdischarge adverse events, including readmissions.[24] Indeed, medication‐related errors and toxicities are the most common postdischarge adverse events experienced by patients.[25] Whether psychiatric medications are particularly prone to causing postdischarge adverse events or whether these medications represent greater psychiatric comorbidity cannot be answered by this study.

It was surprising but reassuring that substance abuse was not a predictor of short‐term readmissions as identified using our measures; in fact, a discharge diagnosis of substance abuse was associated with lower risk of ACR than comparator patients. It seems unlikely that we would have inadequate power to find such a result, as we found a statistically significant negative association in the ACR population, and 17% of our population overall was considered likely to have a substance abuse comorbidity. However, it is likely the burden of disease was underestimated given that we did not try to determine the contribution of long‐term substance abuse to medical diseases that may increase readmission risk (eg, liver cirrhosis from alcohol use). Unlike other conditions in our study, patients with substance abuse diagnoses at BWH can be seen by a dedicated multidisciplinary team while an inpatient to start treatment and plan for postdischarge follow‐up; this may have played a role in our findings.

A discharge diagnosis of anxiety was also somewhat protective against readmission, whereas a prescription of an anxiolytic (predominantly benzodiazepines) increased risk; many patients prescribed a benzodiazepine do not have a Diagnostic and Statistical Manual of Mental Disorders4th Edition (DSM‐IV) diagnosis of anxiety disorder, and thus these findings may reflect different patient populations. Discharging physicians may have used anxiety as a discharge diagnosis in patients in whom they suspected somatic complaints without organic basis; these patients may be at lower risk of readmission.

Discharge diagnoses of psychiatric illnesses were associated with ACR and PAR in our study, but outpatient diagnoses were not. This likely reflects greater severity of illness (documentation as a treated diagnosis on discharge indicates the illness was relevant during the hospitalization), but may also reflect inaccuracies of diagnosis and lack of assessment of severity in outpatient coding, which would bias toward null findings. Although many of the patients in our study were seen by primary care doctors within the Partners system, some patients had outside primary care physicians and we did not have access to these records. This may also have decreased our ability to find associations.

The findings of our study should be interpreted in the context of the study design. Our study was retrospective, which limited our ability to conclusively diagnose psychiatric disease presence or severity (as is true of most institutions, validated psychiatric screening was not routinely used at our institutions on hospital admission or discharge). However, we used a conservative scale to classify the likelihood of patients having psychiatric or substance abuse disorders, and we used other metrics to establish the presence of illness, such as the number of prescribed medications, inpatient consultation with a substance abuse service, and hospital discharge diagnoses. This approach also allowed us to quickly identify a large cohort unaffected by selection bias. Our study was single center, potentially limiting generalizability. Although we capture at least 80% of readmissions, we were not able to capture all readmissions, and we cannot rule out that patients readmitted elsewhere are different than those readmitted within the Partners system. Last, the SQLape algorithm is not perfectly sensitive or specific in identifying avoidable readmissions,[13] but it does eliminate many readmissions that are clearly unavoidable, creating an enriched cohort of patients whose readmissions are more likely to be avoidable and therefore potentially actionable.

We suggest that our study findings first be considered when risk stratifying patients before hospital discharge in terms of readmission risk. Patients with depression and schizophrenia would seem to merit postdischarge interventions to decrease their potentially avoidable readmissions. Compulsory community treatment (a feature of treatment in Canada and Australia that is ordered by clinicians) has been shown to decrease mortality due to medical illness in patients who have been hospitalized and are psychiatrically ill, and addition of these services to postdischarge care may be useful.[23] Inpatient physicians could work to ensure follow‐up not just with medical providers but with robust outpatient mental health programs to decrease potentially avoidable readmission risk, and administrators could work to ensure close linkages with these community resources. Studies evaluating the impact of these types of interventions would need to be conducted. Patients with polypharmacy, including psychiatric medications, may benefit from interventions to improve medication safety, such as enhanced medication reconciliation and pharmacist counseling.[26]

Our study suggests that patients with depression, those with schizophrenia, and those who have increased numbers of prescribed psychiatric medications should be considered at high risk for readmission for medical illnesses. Targeting interventions to these patients may be fruitful in preventing avoidable readmissions.

Acknowledgements

The authors thank Dr. Yves Eggli for screening the database for potentially avoidable readmissions using the SQLape algorithm.

Disclosures

Dr. Donz was supported by the Swiss National Science Foundation and the Swiss Foundation for MedicalBiological Scholarships. The authors otherwise have no conflicts of interest to disclose. The content is solely the responsibility of the authors and does not necessarily represent the official views of the US Department of Veterans Affairs.

Readmissions to the hospital are common and costly.[1] However, identifying patients prospectively who are likely to be readmitted and who may benefit from interventions to reduce readmission risk has proven challenging, with published risk scores having only moderate ability to discriminate between patients likely and unlikely to be readmitted.[2] One reason for this may be that published studies have not typically focused on patients who are cognitively impaired, psychiatrically ill, have low health or English literacy, or have poor social supports, all of whom may represent a substantial fraction of readmitted patients.[2, 3, 4, 5]

Psychiatric disease, in particular, may contribute to increased readmission risk for nonpsychiatric (medical) illness, and is associated with increased utilization of healthcare resources.[6, 7, 8, 9, 10, 11] For example, patients with mental illness who were discharged from New York hospitals were more likely to be rehospitalized and had more costly readmissions than patients without these comorbidities, including a length of stay nearly 1 day longer on average.[7] An unmet need for treatment of substance abuse was projected to cost Tennessee $772 million of excess healthcare costs in 2000, mostly incurred through repeat hospitalizations and emergency department (ED) visits.[10]

Despite this, few investigators have considered the role of psychiatric disease and/or substance abuse in medical readmission risk. The purpose of the current study was to evaluate the role of psychiatric illness and substance abuse in unselected medical patients to determine their relative contributions to 30‐day all‐cause readmissions (ACR) and potentially avoidable readmissions (PAR).

METHODS

Patients and Setting

We conducted a retrospective cohort study of consecutive adult patients discharged from medicine services at Brigham and Women's Hospital (BWH), a 747‐bed tertiary referral center and teaching hospital, between July 1, 2009 and June 30, 2010. Most patients are cared for by resident housestaff teams at BWH (approximately 25% are cared for by physician assistants working directly with attending physicians), and approximately half receive primary care in the Partners system, which has a shared electronic medical record (EMR). Outpatient mental health services are provided by Partners‐associated mental health professionals including those at McLean Hospital and MassHealth (Medicaid)‐associated sites through the Massachusetts Behavioral Health Partnership. Exclusion criteria were death in the hospital or discharge to another acute care facility. We also excluded patients who left against medical advice (AMA). The study protocol was approved by the Partners Institutional Review Board.

Outcome

The primary outcomes were ACR and PAR within 30 days of discharge. First, we identified all 30‐day readmissions to BWH or to 2 other hospitals in the Partners Healthcare Network (previous studies have shown that 80% of all readmitted patients are readmitted to 1 of these 3 hospitals).[12] For patients with multiple readmissions, only the first readmission was included in the dataset.
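
As an illustration of how a 30‐day window of this kind can be operationalized, the sketch below flags index discharges followed by a readmission within 30 days. It is a minimal sketch only, assuming a hypothetical pandas DataFrame of encounters with patient_id, admit_date, and discharge_date columns; it is not the extraction logic actually used for this study.

```python
import pandas as pd

def flag_30day_readmissions(encounters: pd.DataFrame) -> pd.DataFrame:
    """Flag discharges followed by a readmission within 30 days.

    Illustrative only. Assumes columns patient_id, admit_date, and
    discharge_date (datetime64). Because only the next admission is
    considered, each index discharge is paired with at most one
    readmission, mirroring the "first readmission only" rule above.
    """
    df = encounters.sort_values(["patient_id", "admit_date"]).copy()
    # Date of the same patient's next admission, if any
    df["next_admit"] = df.groupby("patient_id")["admit_date"].shift(-1)
    days_to_next = (df["next_admit"] - df["discharge_date"]).dt.days
    df["readmit_30d"] = days_to_next.between(0, 30)
    return df
```

A real implementation would also need to restrict the search to the 3 hospitals named above and handle same‐day transfers.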

To find potentially avoidable readmissions, administrative and billing data for these patients were processed using the SQLape (SQLape s.a.r.l., Corseaux, Switzerland) algorithm, which identifies PAR by excluding patients who undergo planned follow‐up treatment (such as a cycle of planned chemotherapy) or are readmitted for conditions unrelated in any way to the index hospitalization.[13, 14] Common complications of treatment are categorized as potentially avoidable, such as development of a deep venous thrombosis, a decubitus ulcer after prolonged bed rest, or bleeding complications after starting anticoagulation. Although the algorithm identifies theoretically preventable readmissions, the algorithm does not quantify how preventable they are, and these are thus referred to as potentially avoidable. This is similar to other admission metrics, such as the Agency for Healthcare Research and Quality's prevention quality indicators, which are created from a list of ambulatory care‐sensitive conditions.[15] SQLape has the advantage of being a specific tool for readmissions. Patients with 30‐day readmissions identified by SQLape as planned or unlikely to be avoidable were excluded in the PAR analysis, although still included in ACR analysis. In each case, the comparison group is patients without any readmission.
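
The categorization described in this paragraph can be sketched in a few lines. The toy function below is emphatically not SQLape, which works from administrative and billing codes; it only mirrors the exclusions named in the text, using hypothetical boolean flags.

```python
from dataclasses import dataclass

@dataclass
class Readmission:
    planned: bool                 # eg, a scheduled chemotherapy cycle
    related_to_index: bool        # clinically related to the index hospitalization
    treatment_complication: bool  # eg, DVT, decubitus ulcer, bleeding on anticoagulation

def is_potentially_avoidable(r: Readmission) -> bool:
    """Toy illustration of the exclusion logic described in the text."""
    if r.planned:
        return False  # planned follow-up treatment is excluded
    if not r.related_to_index and not r.treatment_complication:
        return False  # readmissions unrelated to the index stay are excluded
    return True       # everything else is treated as potentially avoidable
```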

Predictors

Our predictors of interest included the presence of any psychiatric diagnosis or substance abuse diagnosis, the presence of specific psychiatric diagnoses, and the prescription of psychiatric medications, which together allowed us to assess the independent contribution of these comorbidities to readmission risk.

We used a combination of easily obtainable inpatient and outpatient clinical and administrative data to identify relevant patients. Patients were considered likely to be psychiatrically ill if they: (1) had a psychiatric diagnosis on their Partners outpatient EMR problem list and were prescribed a medication to treat that condition as an outpatient, or (2) had an International Classification of Diseases, 9th Revision diagnosis of a psychiatric illness at hospital discharge. Patients were considered to have moderate probability of disease if they: (1) had a psychiatric diagnosis on their outpatient problem list, or (2) were prescribed a medication intended to treat a psychiatric condition as an outpatient. Patients were considered unlikely to have psychiatric disease if none of these criteria were met. Patients were considered likely to have a substance abuse disorder if they had this diagnosis on their outpatient EMR, or were prescribed a medication to treat this condition (eg, buprenorphine/naloxone), or received inpatient consultation from a substance abuse treatment team during their inpatient hospitalization, and were considered unlikely if none of these were true. We also evaluated individual categories of psychiatric illness (schizophrenia, depression, anxiety, bipolar disorder) and of psychotropic medications (antidepressants, antipsychotics, anxiolytics).
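
The tiered definition above can be summarized as a simple rule. The sketch below is illustrative only; the three boolean inputs are hypothetical placeholders for the EMR problem‐list, outpatient prescription, and discharge diagnosis data described in the text.

```python
def psychiatric_illness_likelihood(problem_list_dx: bool,
                                   outpatient_psych_rx: bool,
                                   discharge_psych_dx: bool) -> str:
    """Return 'likely', 'moderate', or 'unlikely' per the tiered definition.

    likely:   problem-list diagnosis plus an outpatient medication for it,
              or an ICD-9 psychiatric diagnosis at hospital discharge
    moderate: problem-list diagnosis OR an outpatient psychiatric medication
    unlikely: none of the above
    """
    if (problem_list_dx and outpatient_psych_rx) or discharge_psych_dx:
        return "likely"
    if problem_list_dx or outpatient_psych_rx:
        return "moderate"
    return "unlikely"
```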

Potential Confounders

Data on potential confounders, based on prior literature,[16, 17] collected at the index admission were derived from electronic administrative, clinical, and billing sources, including the Brigham Integrated Computer System and the Partners Clinical Data Repository. They included patient age, gender, ethnicity, primary language, marital status, insurance status, living situation prior to admission, discharge location, length of stay, Elixhauser comorbidity index,[18] total number of medications prescribed, and number of prior admissions and ED visits in the prior year.

Statistical Analysis

Bivariate comparisons of each of the predictors of ACR and PAR risk (ie, patients with a 30‐day ACR or PAR vs those not readmitted within 30 days) were conducted using χ2 trend tests for ordinal predictors (eg, likelihood of psychiatric disease), and χ2 or Fisher exact tests for dichotomous predictors (eg, receipt of inpatient substance abuse counseling).

We then used multivariate logistic regression analysis to adjust for all of the potential confounders noted above, entering each variable related to psychiatric illness into the model separately (eg, likely psychiatric illness, number of psychiatric medications). In a secondary analysis, we removed potentially collinear variables from the final model; as this did not alter the results, the full model is presented. We also conducted a secondary analysis that included patients who left AMA, which likewise did not alter the results. Two‐sided P values <0.05 were considered significant, and all analyses were performed using SAS version 9.2 (SAS Institute Inc., Cary, NC).
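
For readers who want to reproduce this kind of adjusted analysis on their own data, the sketch below shows one way to obtain odds ratios and 95% CIs from a logistic model with statsmodels. The column names and the short covariate list are hypothetical; the study model adjusted for the full set of confounders listed above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def adjusted_odds_ratios(df: pd.DataFrame) -> pd.DataFrame:
    """Fit a logistic model for 30-day readmission and return ORs with 95% CIs.

    Assumes a DataFrame with a binary outcome 'readmit_30d', an exposure
    'n_psych_meds', and a few example covariates (illustrative only).
    """
    model = smf.logit(
        "readmit_30d ~ n_psych_meds + age + C(gender) + elixhauser + n_meds",
        data=df,
    ).fit(disp=False)
    ci = model.conf_int()  # columns 0 and 1 hold the lower and upper bounds
    return pd.DataFrame({
        "OR": np.exp(model.params),
        "CI_lower": np.exp(ci[0]),
        "CI_upper": np.exp(ci[1]),
        "p_value": model.pvalues,
    })
```

Exponentiating the coefficients and their confidence limits is what turns the log‐odds estimates into the odds ratios reported in Table 3.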

RESULTS

There were 7984 unique patients discharged during the study period. Patients were generally white and English speaking; just over half of admissions came from the ED (Table 1). Of note, nearly all patients were insured, as are almost all patients in Massachusetts. They had high degrees of comorbid illness and large numbers of prescribed medications. Nearly 30% had at least 1 hospital admission within the prior year.

Baseline Characteristics of the Study Population
Characteristic | All Patients, N (%) | Not Readmitted, N (%) | ACR, N (%) | PAR, N (%)a
  • NOTE: Abbreviations: ACR, all‐cause readmission; ED, emergency department; PAR, potentially avoidable readmission. PAR cohort excludes patients with unavoidable readmissions.

  • Percentages may not add up to 100% due to rounding or when subcategories were very small (<0.5%). Previously married includes patients who were divorced or widowed.

Study cohort | 6987 (100) | 5727 (72) | 1260 (18) | 388 (5.6)
Age, y
<50 | 1663 (23.8) | 1343 (23.5) | 320 (25.4) | 85 (21.9)
51-65 | 2273 (32.5) | 1859 (32.5) | 414 (32.9) | 136 (35.1)
66-79 | 1444 (20.7) | 1176 (20.5) | 268 (18.6) | 80 (20.6)
>80 | 1607 (23.0) | 1349 (23.6) | 258 (16.1) | 87 (22.4)
Female | 3604 (51.6) | 2967 (51.8) | 637 (50.6) | 206 (53.1)
Race
White | 5126 (73.4) | 4153 (72.5) | 973 (77.2) | 300 (77.3)
Black | 1075 (15.4) | 899 (15.7) | 176 (14.0) | 53 (13.7)
Hispanic | 562 (8.0) | 477 (8.3) | 85 (6.8) | 28 (7.2)
Other | 224 (3.2) | 198 (3.5) | 26 (2.1) | 7 (1.8)
Primary language
English | 6345 (90.8) | 5180 (90.5) | 1165 (92.5) | 356 (91.8)
Marital status
Married | 3642 (52.1) | 2942 (51.4) | 702 (55.7) | 214 (55.2)
Single, never married | 1662 (23.8) | 1393 (24.3) | 269 (21.4) | 73 (18.8)
Previously married | 1683 (24.1) | 1386 (24.2) | 289 (22.9) | 101 (26.0)
Insurance
Medicare | 3550 (50.8) | 2949 (51.5) | 601 (47.7) | 188 (48.5)
Medicaid | 539 (7.7) | 430 (7.5) | 109 (8.7) | 33 (8.5)
Private | 2892 (41.4) | 2344 (40.9) | 548 (43.5) | 167 (43.0)
Uninsured | 6 (0.1) | 4 (0.1) | 2 (0.1) | 0 (0)
Source of index admission
Clinic or home | 2136 (30.6) | 1711 (29.9) | 425 (33.7) | 117 (30.2)
Emergency department | 3592 (51.4) | 2999 (52.4) | 593 (47.1) | 181 (46.7)
Nursing facility | 1204 (17.2) | 977 (17.1) | 227 (18.0) | 84 (21.7)
Other | 55 (0.1) | 40 (0.7) | 15 (1.1) | 6 (1.6)
Length of stay, d
0-2 | 1757 (25.2) | 1556 (27.2) | 201 (16.0) | 55 (14.2)
3-4 | 2200 (31.5) | 1842 (32.2) | 358 (28.4) | 105 (27.1)
5-7 | 1521 (21.8) | 1214 (21.2) | 307 (24.4) | 101 (26.0)
>7 | 1509 (21.6) | 1115 (19.5) | 394 (31.3) | 127 (32.7)
Elixhauser comorbidity index score
0-1 | 1987 (28.4) | 1729 (30.2) | 258 (20.5) | 66 (17.0)
2-7 | 1773 (25.4) | 1541 (26.9) | 232 (18.4) | 67 (17.3)
8-13 | 1535 (22.0) | 1212 (21.2) | 323 (25.6) | 86 (22.2)
>13 | 1692 (24.2) | 1245 (21.7) | 447 (35.5) | 169 (43.6)
Medications prescribed as outpatient
0-6 | 1684 (24.1) | 1410 (24.6) | 274 (21.8) | 72 (18.6)
7-9 | 1601 (22.9) | 1349 (23.6) | 252 (20.0) | 77 (19.9)
10-13 | 1836 (26.3) | 1508 (26.3) | 328 (26.0) | 107 (27.6)
>13 | 1866 (26.7) | 1460 (25.5) | 406 (32.2) | 132 (34.0)
Number of admissions in past year
0 | 4816 (68.9) | 4032 (70.4) | 784 (62.2) | 279 (71.9)
1-5 | 2075 (29.7) | 1640 (28.6) | 435 (34.5) | 107 (27.6)
>5 | 96 (1.4) | 55 (1.0) | 41 (3.3) | 2 (0.5)
Number of ED visits in past year
0 | 4661 (66.7) | 3862 (67.4) | 799 (63.4) | 261 (67.3)
1-5 | 2326 (33.3) | 1865 (32.6) | 461 (36.6) | 127 (32.7)

All‐Cause Readmissions

After exclusion of 997 patients who died, were discharged to skilled nursing or rehabilitation facilities, or left AMA, 6987 patients were included (Figure 1). Of these, 1260 had a readmission (18%). Approximately half were considered unlikely to be psychiatrically ill, 22% were considered moderately likely, and 29% likely (Table 2).

Bivariate Analysis of Predictors of Readmission Risk
Predictor | No. in Cohort (%) | % of Patients With ACR | P Valuea | No. in Cohort (%) | % of Patients With PAR | P Valuea
(The first three data columns refer to the all‐cause readmission analysis; the last three to the potentially avoidable readmission analysis.)
  • NOTE: Abbreviations: ACR, all‐cause readmission; PAR, potentially avoidable readmission.

  • All analyses performed with χ2 trend test for ordinal variables in more than 2 categories or Fisher exact test for dichotomous variables. Comparison group is patients without a readmission in all analyses. PAR analysis excludes patients with nonpreventable readmissions as determined by the SQLape algorithm.

Entire cohort | 6987 | 18.0 | | 6115 | 6.3 |
Likelihood of psychiatric illness
Unlikely | 3424 (49) | 16.5 | | 3026 (49) | 5.6 |
Moderate | 1564 (22) | 23.5 | | 1302 (21) | 7.1 |
Likely | 1999 (29) | 16.4 | | 1787 (29) | 6.4 |
Likely versus unlikely | | | 0.87 | | | 0.20
Moderate+likely versus unlikely | | | 0.001 | | | 0.02
Likelihood of substance abuse | | | 0.01 | | | 0.20
Unlikely | 5804 (83) | 18.7 | | 5104 (83) | 6.5 |
Likely | 1183 (17) | 14.8 | | 1011 (17) | 5.4 | 0.14
Number of prescribed outpatient psychotropic medications | | | <0.001 | | | 0.04
0 | 4420 (63) | 16.3 | | 3931 (64) | 5.9 |
1 | 1725 (25) | 20.4 | | 1481 (24) | 7.2 |
2 | 781 (11) | 22.3 | | 653 (11) | 7.0 |
>2 | 61 (1) | 23.0 | | 50 (1) | 6.0 |
Prescribed antidepressant | 1474 (21) | 20.6 | 0.005 | 1248 (20) | 6.2 | 0.77
Prescribed antipsychotic | 375 (5) | 22.4 | 0.02 | 315 (5) | 7.6 | 0.34
Prescribed mood stabilizer | 81 (1) | 18.5 | 0.91 | 69 (1) | 4.4 | 0.49
Prescribed anxiolytic | 1814 (26) | 21.8 | <0.001 | 1537 (25) | 7.7 | 0.01
Prescribed stimulant | 101 (2) | 26.7 | 0.02 | 83 (1) | 10.8 | 0.09
Prescribed pharmacologic treatment for substance abuse | 79 (1) | 25.3 | 0.09 | 60 (1) | 1.7 | 0.14
Number of psychiatric diagnoses on outpatient problem list | | | 0.31 | | | 0.74
0 | 6405 (92) | 18.2 | | 5509 (90) | 6.3 |
1 or more | 582 (8) | 16.5 | | 474 (8) | 7.0 |
Outpatient diagnosis of substance abuse | 159 (2) | 13.2 | 0.11 | 144 (2) | 4.2 | 0.28
Outpatient diagnosis of any psychiatric illness | 582 (8) | 16.5 | 0.31 | 517 (8) | 8.0 | 0.73
Discharge diagnosis of depression | 774 (11) | 17.7 | 0.80 | 690 (11) | 7.7 | 0.13
Discharge diagnosis of schizophrenia | 56 (1) | 23.2 | 0.31 | 50 (1) | 14 | 0.03
Discharge diagnosis of bipolar disorder | 101 (1) | 10.9 | 0.06 | 92 (2) | 2.2 | 0.10
Discharge diagnosis of anxiety | 1192 (17) | 15.0 | 0.003 | 1080 (18) | 6.2 | 0.83
Discharge diagnosis of substance abuse | 885 (13) | 14.8 | 0.008 | 803 (13) | 6.1 | 0.76
Discharge diagnosis of any psychiatric illness | 1839 (26) | 16.0 | 0.008 | 1654 (27) | 6.6 | 0.63
Substance abuse consultation as inpatient | 284 (4) | 14.4 | 0.11 | 252 (4) | 3.6 | 0.07

In bivariate analysis (Table 2), likelihood of psychiatric illness (P<0.01) and increasing numbers of prescribed outpatient psychiatric medications (P<0.01) were significantly associated with ACR. In multivariate analysis, each additional prescribed outpatient psychiatric medication (odds ratio [OR]: 1.10, 95% confidence interval [CI]: 1.01-1.20) and any prescription of an anxiolytic in particular (OR: 1.16, 95% CI: 1.00-1.35) were associated with increased risk of ACR, whereas discharge diagnoses of anxiety (OR: 0.82, 95% CI: 0.68-0.99) and substance abuse (OR: 0.80, 95% CI: 0.65-0.99) were associated with lower risk of ACR (Table 3).

Multivariate Analysis of Predictors of Readmission Risk
Predictor | ACR, OR (95% CI) | PAR, OR (95% CI)a
  • NOTE: Abbreviations: ACR, all‐cause readmissions; CI, confidence interval; OR, odds ratio; PAR, potentially avoidable readmissions.

  • All analyses performed by multivariate logistic regression adjusting for patient age, gender, ethnicity, language spoken, marital status, insurance source, discharge location, length of stay, comorbidities (Elixhauser), number of outpatient medications, number of prior emergency department visits, and admissions in the prior year. Analyses were performed by entering each exposure of interest into the model separately while adjusting for all covariates. Comparison group is patients without any readmission for all analyses.

Likely psychiatric disease | 0.97 (0.82-1.14) | 1.20 (0.92-1.56)
Likely and possible psychiatric disease | 1.07 (0.94-1.22) | 1.18 (0.94-1.47)
Likely substance abuse | 0.83 (0.69-0.99) | 0.85 (0.63-1.16)
Psychiatric diagnosis on outpatient problem list | 0.97 (0.76-1.23) | 1.04 (0.70-1.55)
Substance abuse diagnosis on outpatient problem list | 0.63 (0.39-1.02) | 0.65 (0.28-1.52)
Increasing number of prescribed psychiatric medications | 1.10 (1.01-1.20) | 1.00 (0.86-1.16)
Outpatient prescription for antidepressant | 1.10 (0.94-1.29) | 0.86 (0.66-1.13)
Outpatient prescription for antipsychotic | 1.03 (0.79-1.34) | 0.93 (0.59-1.45)
Outpatient prescription for anxiolytic | 1.16 (1.00-1.35) | 1.13 (0.88-1.44)
Outpatient prescription for methadone or buprenorphine | 1.15 (0.67-1.98) | 0.18 (0.03-1.36)
Discharge diagnosis of depression | 1.06 (0.86-1.30) | 1.49 (1.09-2.04)
Discharge diagnosis of schizophrenia | 1.43 (0.75-2.74) | 2.63 (1.13-6.13)
Discharge diagnosis of bipolar disorder | 0.53 (0.28-1.02) | 0.35 (0.09-1.45)
Discharge diagnosis of anxiety | 0.82 (0.68-0.99) | 1.11 (0.83-1.49)
Discharge diagnosis of substance abuse | 0.80 (0.65-0.99) | 1.05 (0.75-1.46)
Discharge diagnosis of any psychiatric illness | 0.88 (0.75-1.02) | 1.22 (0.96-1.56)
Addiction team consult while inpatient | 0.82 (0.58-1.17) | 0.58 (0.29-1.17)

Potentially Avoidable Readmissions

After further exclusion of 872 patients who had unavoidable readmissions according to the SQLape algorithm, 6115 patients remained. Of these, 388 had a PAR within 30 days (6.3%, Table 1).

In bivariate analysis (Table 2), the likelihood of psychiatric illness (P=0.02), number of outpatient psychiatric medications (P=0.04), and prescription of anxiolytics (P=0.01) were significantly associated with PAR, as they were with ACR. A discharge diagnosis of schizophrenia was also associated with PAR (P=0.03).

In multivariate analysis, only discharge diagnoses of depression (OR: 1.49, 95% CI: 1.09‐2.04) and schizophrenia (OR: 2.63, 95% CI: 1.13‐6.13) were associated with PAR.

DISCUSSION

Comorbid psychiatric illness was common among patients admitted to the medicine wards. Patients with documented discharge diagnoses of depression or schizophrenia were at highest risk for a potentially avoidable 30‐day readmission, whereas those prescribed more psychiatric medications were at increased risk for ACR. These findings were independent of a comprehensive set of risk factors among medicine inpatients in this retrospective cohort study.

This study extends prior work indicating patients with psychiatric disease have increased healthcare utilization,[6, 7, 8, 9, 10, 11] by identifying at least 2 subpopulations of the psychiatrically ill (those with depression and schizophrenia) at particularly high risk for 30‐day PAR. To our knowledge, this is the first study to identify schizophrenia as a predictor of hospital readmission for medical illnesses. One prior study prospectively identified depression as increasing the 90‐day risk of readmission 3‐fold, although medication usage was not assessed,[6] and our report strengthens this association.

There are several possible explanations why these two subpopulations in particular would be more predisposed to readmissions that are potentially avoidable. It is known that patients with schizophrenia, for example, live on average 20 years less than the general population, and most of this excess mortality is due to medical illnesses.[20, 21] Reasons for this may include poor healthcare access, adverse effects of medication, and socioeconomic factors among others.[21, 22] All of these reasons may contribute to the increased PAR risk in this population, mediated, for example, by decreased ability to adhere to postdischarge care plans. Successful community‐based interventions to decrease these inequities have been described and could serve as a model for addressing the increased readmission risk in this population.[23]

Our finding that patients with a greater number of prescribed psychiatric medications are at increased risk for ACR may be expected, given other studies that have highlighted the crucial importance of medications in postdischarge adverse events, including readmissions.[24] Indeed, medication‐related errors and toxicities are the most common postdischarge adverse events experienced by patients.[25] Whether psychiatric medications are particularly prone to causing postdischarge adverse events or whether these medications represent greater psychiatric comorbidity cannot be answered by this study.

It was surprising but reassuring that substance abuse was not a predictor of short‐term readmissions as identified using our measures; in fact, a discharge diagnosis of substance abuse was associated with a lower risk of ACR than that of comparator patients. Inadequate power seems an unlikely explanation for this finding, as we found a statistically significant negative association in the ACR population, and 17% of our overall population was considered likely to have a substance abuse comorbidity. However, the burden of disease was likely underestimated given that we did not try to determine the contribution of long‐term substance abuse to medical diseases that may increase readmission risk (eg, liver cirrhosis from alcohol use). Unlike other conditions in our study, patients with substance abuse diagnoses at BWH can be seen by a dedicated multidisciplinary team while inpatients to begin treatment and plan for postdischarge follow‐up; this may have played a role in our findings.

A discharge diagnosis of anxiety was also somewhat protective against readmission, whereas a prescription of an anxiolytic (predominantly benzodiazepines) increased risk; many patients prescribed a benzodiazepine do not have a Diagnostic and Statistical Manual of Mental Disorders, 4th Edition (DSM‐IV) diagnosis of anxiety disorder, and thus these findings may reflect different patient populations. Discharging physicians may have used anxiety as a discharge diagnosis in patients in whom they suspected somatic complaints without organic basis; these patients may be at lower risk of readmission.

Discharge diagnoses of psychiatric illnesses were associated with ACR and PAR in our study, but outpatient diagnoses were not. This likely reflects greater severity of illness (documentation as a treated diagnosis on discharge indicates the illness was relevant during the hospitalization), but may also reflect inaccuracies of diagnosis and lack of assessment of severity in outpatient coding, which would bias toward null findings. Although many of the patients in our study were seen by primary care doctors within the Partners system, some patients had outside primary care physicians and we did not have access to these records. This may also have decreased our ability to find associations.

The findings of our study should be interpreted in the context of the study design. Our study was retrospective, which limited our ability to conclusively diagnose psychiatric disease presence or severity (as is true at most institutions, validated psychiatric screening was not routinely used at our institution on hospital admission or discharge). However, we used a conservative scale to classify the likelihood of patients having psychiatric or substance abuse disorders, and we used other metrics to establish the presence of illness, such as the number of prescribed medications, inpatient consultation with a substance abuse service, and hospital discharge diagnoses. This approach also allowed us to quickly identify a large cohort unaffected by selection bias. Our study was single center, potentially limiting generalizability. Although we captured at least 80% of readmissions, we were not able to capture all readmissions, and we cannot rule out that patients readmitted elsewhere differ from those readmitted within the Partners system. Last, the SQLape algorithm is not perfectly sensitive or specific in identifying avoidable readmissions,[13] but it does eliminate many readmissions that are clearly unavoidable, creating an enriched cohort of patients whose readmissions are more likely to be avoidable and therefore potentially actionable.

We suggest that our findings first be considered when stratifying patients by readmission risk before hospital discharge. Patients with depression and schizophrenia would seem to merit postdischarge interventions to decrease their potentially avoidable readmissions. Compulsory community treatment (a clinician‐ordered feature of care in Canada and Australia) has been shown to decrease mortality due to medical illness in psychiatrically ill patients who have been hospitalized, and addition of these services to postdischarge care may be useful.[23] Inpatient physicians could work to ensure follow‐up not just with medical providers but with robust outpatient mental health programs to decrease potentially avoidable readmission risk, and administrators could work to ensure close linkages with these community resources. Studies evaluating the impact of these types of interventions would need to be conducted. Patients with polypharmacy, including psychiatric medications, may benefit from interventions to improve medication safety, such as enhanced medication reconciliation and pharmacist counseling.[26]

Our study suggests that patients with depression, those with schizophrenia, and those who have increased numbers of prescribed psychiatric medications should be considered at high risk for readmission for medical illnesses. Targeting interventions to these patients may be fruitful in preventing avoidable readmissions.

Acknowledgements

The authors thank Dr. Yves Eggli for screening the database for potentially avoidable readmissions using the SQLape algorithm.

Disclosures

Dr. Donzé was supported by the Swiss National Science Foundation and the Swiss Foundation for Medical-Biological Scholarships. The authors otherwise have no conflicts of interest to disclose. The content is solely the responsibility of the authors and does not necessarily represent the official views of the US Department of Veterans Affairs.

References
  1. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee‐for‐service program. N Engl J Med. 2009;360(14):1418-1428.
  2. Kansagara D, Englander H, Salanitro A, et al. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688-1698.
  3. Axon RN, Williams MV. Hospital readmission as an accountability measure. JAMA. 2011;305(5):504-505.
  4. Arbaje AI, Wolff JL, Yu Q, Powe NR, Anderson GF, Boult C. Postdischarge environmental and socioeconomic factors and the likelihood of early hospital readmission among community‐dwelling Medicare beneficiaries. Gerontologist. 2008;48(4):495-504.
  5. Berkman ND, Sheridan SL, Donahue KE, Halpern DJ, Crotty K. Low health literacy and health outcomes: an updated systematic review. Ann Intern Med. 2011;155(2):97-107.
  6. Kartha A, Anthony D, Manasseh CS, et al. Depression is a risk factor for rehospitalization in medical inpatients. Prim Care Companion J Clin Psychiatry. 2007;9(4):256-262.
  7. Li Y, Glance LG, Cai X, Mukamel DB. Mental illness and hospitalization for ambulatory care sensitive medical conditions. Med Care. 2008;46(12):1249-1256.
  8. Raven MC, Carrier ER, Lee J, Billings JC, Marr M, Gourevitch MN. Substance use treatment barriers for patients with frequent hospital admissions. J Subst Abuse Treat. 2010;38(1):22-30.
  9. Shepard DS, Daley M, Ritter GA, Hodgkin D, Beinecke RH. Managed care and the quality of substance abuse treatment. J Ment Health Policy Econ. 2002;5(4):163-174.
  10. Rockett IR, Putnam SL, Jia H, Chang CF, Smith GS. Unmet substance abuse treatment need, health services utilization, and cost: a population‐based emergency department study. Ann Emerg Med. 2005;45(2):118-127.
  11. Brennan PL, Kagay CR, Geppert JJ, Moos RH. Elderly Medicare inpatients with substance use disorders: characteristics and predictors of hospital readmissions over a four‐year interval. J Stud Alcohol. 2000;61(6):891-895.
  12. Schnipper JL, Roumie CL, Cawthon C, et al. Rationale and design of the Pharmacist Intervention for Low Literacy in Cardiovascular Disease (PILL‐CVD) study. Circ Cardiovasc Qual Outcomes. 2010;3(2):212-219.
  13. Halfon P, Eggli Y, Pretre‐Rohrbach I, Meylan D, Marazzi A, Burnand B. Validation of the potentially avoidable hospital readmission rate as a routine indicator of the quality of hospital care. Med Care. 2006;44(11):972-981.
  14. Halfon P, Eggli Y, Melle G, Chevalier J, Wasserfallen JB, Burnand B. Measuring potentially avoidable hospital readmissions. J Clin Epidemiol. 2002;55:573-587.
  15. Agency for Healthcare Research and Quality Quality Indicators. (April 7, 2006). Prevention Quality Indicators (PQI) Composite Measure Workgroup Final Report. Available at: http://www.qualityindicators.ahrq.gov/modules/pqi_resources.aspx. Accessed June 1, 2012.
  16. Hasan O, Meltzer DO, Shaykevich SA, et al. Hospital readmission in general medicine patients: a prediction model. J Gen Intern Med. 2010;25(3):211-219.
  17. Walraven C, Dhalla IA, Bell C, et al. Derivation and validation of an index to predict early death or unplanned readmission after discharge from hospital to the community. CMAJ. 2010;182(6):551-557.
  18. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27.
  19. Parks J, Svendsen D, Singer P, Foti ME, eds. Morbidity and mortality in people with serious mental illness. October 2006. National Association of State Mental Health Directors, Medical Directors Council. Available at: http://www.nasmhpd.org/docs/publications/MDCdocs/Mortality%20and%20Morbidity%20Final%20Report%208.18.08.pdf. Accessed January 13, 2013.
  20. Kisely S, Smith M, Lawrence D, Maaten S. Mortality in individuals who have had psychiatric treatment: population‐based study in Nova Scotia. Br J Psychiat. 2005;187:552-558.
  21. Kisely S, Smith M, Lawrence D, Cox M, Campbell LA, Maaten S. Inequitable access for mentally ill patients to some medically necessary procedures. CMAJ. 2007;176(6):779-784.
  22. Mitchell AJ, Malone D, Doebbeling CC. Quality of medical care for people with and without comorbid mental illness and substance misuse: systematic review of comparative studies. Br J Psychiat. 2009;194:491-499.
  23. Kisely S, Preston N, Xiao J, Lawrence D, Louise S, Crowe E. Reducing all‐cause mortality among patients with psychiatric disorders: a population‐based study. CMAJ. 2013;185(1):E50-E56.
  24. Walraven C, Jennings A, Taljaard M, et al. Incidence of potentially avoidable urgent readmissions and their relation to all‐cause urgent readmissions. CMAJ. 2011;183(14):E1067-E1072.
  25. Forster AJ, Murff HJ, Peterson JF, Gandhi TK, Bates DW. The incidence and severity of adverse events affecting patients after discharge from the hospital. Ann Intern Med. 2003;138(3):161-167.
  26. Mueller SK, Sponsler KC, Kripalani S, Schnipper JL. Hospital‐based medication reconciliation practices: a systematic review. Arch Intern Med. 2012;172(14):1057-1069.
Issue
Journal of Hospital Medicine - 8(8)
Page Number
450-455
Article Type
Display Headline
Contribution of psychiatric illness and substance abuse to 30‐day readmission risk
Sections
Article Source

Copyright © 2013 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Robert E. Burke, MD, Denver VA Medical Center, Medical Service (111), 1055 Clermont Street, Denver, CO 80220‐3808; Telephone: 303‐399‐8020; Fax: 303‐393‐5199; E‐mail: Robert.Burke5@va.gov

Should I retire early?

Article Type
Changed
Thu, 03/28/2019 - 16:05
Display Headline
Should I retire early?

Much has been written of the widespread concern among America’s physicians over upcoming changes in our health care system. Dire predictions of impending doom have prompted many to consider early retirement.

I do not share such concerns, for what that is worth; but if you do, and you are serious about retiring sooner than planned, now would be a great time to take a close look at your financial situation.

Many doctors have a false sense of security about their money; most of us save too little. We either miscalculate or underestimate how much we’ll need to last through retirement.

We tend to live longer than we think we will, and as such we run the risk of outliving our savings. And we don’t face facts about long-term care. Not nearly enough of us have long-term care insurance, or the means to self-fund an extended long-term care situation.

Many people lack a clear idea of where their retirement income will come from, and even when they do, they don’t know how to manage their savings correctly. Doctors in particular are notorious for not understanding investments. Many attempt to manage their practice’s retirement plans with inadequate knowledge of how the investments within their plans work.

So how will you know if you can safely retire before Obamacare gets up to speed? Of course, as with everything else, it depends. But to arrive at any sort of reliable ballpark figure, you’ll need to know three things: (1) how much you realistically expect to spend annually after retirement; (2) how much principal you will need to generate that annual income; and (3) how far your present savings are from that target figure.

An oft-quoted rule of thumb is that in retirement you should plan to spend about 70% of what you are spending now. In my opinion, that’s nonsense. While a few significant expenses, such as disability and malpractice insurance premiums, will be eliminated, other expenses, such as travel, recreation, and medical care (including long-term care insurance, which no one should be without), will increase. My wife and I are assuming we will spend about the same in retirement as we spend now, and I suggest you do too.

Once you know how much money you will spend per year, you can calculate how much money – in interest- and dividend-producing assets – will be needed to generate that amount.

Ideally, you will want to spend only the interest and dividends; by leaving the principal untouched you will never run short, even if you retire at an unusually young age, or longevity runs in your family (or both). Most financial advisers use the 5% rule: You can safely assume a minimum average of 5% annual return on your nest egg. So if you want to spend $100,000 per year, you will need $2 million in assets; for $200,000, you’ll need $4 million.
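
The 5% rule is simple enough to check in a couple of lines; this sketch just restates the arithmetic in the paragraph above and is not financial advice.

```python
def nest_egg_needed(annual_spending: float, withdrawal_rate: float = 0.05) -> float:
    """Principal needed so interest and dividends alone cover annual spending."""
    return annual_spending / withdrawal_rate

# The two examples from the column
for spending in (100_000, 200_000):
    print(f"${spending:,} per year requires about ${nest_egg_needed(spending):,.0f}")
```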

This is where you may discover – if your present savings are a long way from your target figure – that early retirement is not a realistic option. Better, though, to make that unpleasant discovery now, rather than face the frightening prospect of running out of money at an advanced age. Don’t be tempted to close a wide gap in a hurry with high-return/high-risk investments, which often backfire, leaving you further than ever from retirement.

Of course, it goes without saying that debt can destroy the best-laid retirement plans. If you carry significant debt, pay it off as soon as possible, and certainly before you retire.

Even if you have no plans to retire in the immediate future, it is never too soon to think about retirement. Young physicians often defer contributing to their retirement plans because they want to save for a new house, or college for their children. But there are tangible tax benefits that you get now, because your contributions usually reduce your taxable income, and your investment grows tax-free until you take it out.

For long-term planning, the most foolproof strategy – seldom employed, because it’s boring – is to sock away a fixed amount per month (after your retirement plan has been funded) in a mutual fund. For example, $1,000 per month for 25 years with the market earning 10% comes to well over $1 million – and more than $2 million if you can keep it up for 30 years – with the power of compound interest working for you.
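
A minimal sketch of the future-value arithmetic behind that example, assuming the 10% return compounds monthly; the final figure is quite sensitive to the return, the compounding convention, and the number of years, so treat any single number as a rough estimate.

```python
def future_value_of_monthly_savings(monthly: float, annual_rate: float, years: int) -> float:
    """Future value of a fixed monthly contribution with monthly compounding."""
    r = annual_rate / 12      # monthly rate
    n = years * 12            # number of contributions
    return monthly * ((1 + r) ** n - 1) / r

# $1,000/month at a 10% nominal annual return
for years in (25, 30):
    total = future_value_of_monthly_savings(1_000, 0.10, years)
    print(f"{years} years: about ${total:,.0f}")
```

Under these assumptions, 25 years comes to roughly $1.3 million and 30 years to roughly $2.3 million.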

Dr. Eastern practices dermatology and dermatologic surgery in Belleville, N.J.

Author and Disclosure Information

Publications
Topics
Legacy Keywords
health care, dermatology, Joseph Eastern
Sections

Managing symptoms of depression

Article Type
Changed
Mon, 04/16/2018 - 13:21
Display Headline
Managing symptoms of depression

Diana looked at her pill bottles and wondered why she was on all these medications when she did not feel any better. She looked at the five bottles: bupropion, paroxetine, diazepam, alprazolam, and zolpidem. She thought about the side effects she was experiencing.

She had been taking this cocktail, in various dosages, for the best part of a year now. Her depression remained unchanged. She made a decision that she would tell her psychiatrist that she wanted off the medications at her next visit. She would then ask for other treatments. She had found many therapies offered on the Internet for treatment of depression, and she hoped her psychiatrist would be able to help her decide which therapies might be best suited for her. Perhaps she would agree to stay on one medication as a compromise as she knew her psychiatrist thought treatment of depression with medication to be important.

Up to 30% of patients with depression do not respond to multiple treatment trials and are considered to have treatment-resistant depression. Most treatment trials for these patients focus on symptom reduction as a goal. This emphasis on symptom reduction often leads to tunnel vision, where other evidence-based treatments become marginalized by psychiatrists. Thus, patients like Diana end up on multiple medications, without an integrated approach to assessment or discussion of combined treatments (medications and psychotherapy).

Dr. Gabor Keitner, who practices in Providence, R.I., and is a member of the Association of Family Psychiatrists, offers a new program aimed at helping patients manage their depression. His philosophical stance is that depression is a chronic illness and that expecting symptoms to be cured with medications is, for most patients, a false hope perpetuated by a consumer society, where the pharmaceutical industry has dominated the education of patients, their families, and the psychiatric profession. He conceptualizes depression, like other chronic medical illnesses, such as diabetes or hypertension, with a similar range of severity. Therefore, the assessment and treatment of depression requires a more nuanced approach.

He is scheduled to present his Management of Depression (MOD) program at this year’s American Psychiatric Association meeting in San Francisco. His MOD program focuses on how a patient such as Diana can build a satisfying life with meaningful goals and relationships – even if her depressive symptoms persist.

In his pilot study, 30 patients with treatment-resistant depression were randomized to treatment as usual (TAU, n = 13) or the MOD program (n = 17) for 12 weeks. The patients in the MOD group had significant improvement in perception of social support (P < .034) and purpose in life (P < .038) scores, in contrast to the TAU group. The MOD group participated in nine adjunctive sessions of disease management-focused therapy. The Scales of Psychological Well-Being measured purpose in life, life goals, and meaning. Social support was measured with the Multidimensional Scale of Perceived Social Support. Depression severity was measured by the Montgomery-Åsberg Depression Rating Scale. Patients were assessed at baseline and week 12. Both groups of patients had significant improvements in their depressive symptoms (TAU, 35.46 to 25.9, P < .010; MOD, 31.88 to 22.41, P < .001) but continued to experience moderate levels of depression. Adjunctive treatment focusing on functioning, life meaning, and relationships, as opposed to symptom reduction, will help Diana to have a more satisfying life, despite her symptoms of depression.

Measuring relational functioning briefly

In another session, Dr. Keitner is slated to present "The Brief Multidimensional Assessment Scale (BMAS): A Mental Health Check Up," coauthored with Abigail K. Mansfield Maraccio, Ph.D., and Joan Kelley. This scale evaluates global mental health outcomes, including quality of life, symptoms, functioning, and relationships. This measure can be used to assess the clinical status of patients at every health encounter and over the course of an illness. Most available scales are either too long for routine clinical use, focus on a narrow range of symptoms, or focus on specific diagnostic groups. Best of all, this new scale takes less than a minute to complete.

The BMAS was tested against The Outcome Questionnaire–45 (OQ45) with 248 psychiatric outpatients as part of their standard ongoing care. Internal consistency was evaluated with Cronbach’s alpha, which was .75 for the four items. Test-retest reliability was assessed using Pearson’s r and ranged from .45 (symptom severity, which can fluctuate daily) to .79 (quality of life) for each of the BMAS items. Concurrent and convergent validity was analyzed with Pearson product moment correlations between BMAS and OQ45 scales. All correlations were significant for the relevant dimensions.
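
To make the reliability figures concrete, here is a small, self-contained sketch of how Cronbach's alpha and a test-retest Pearson correlation are computed. The data are randomly generated placeholders, not the BMAS validation data.

```python
import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Synthetic stand-in for 248 respondents answering 4 items at two time points
rng = np.random.default_rng(0)
latent = rng.normal(size=(248, 1))                     # shared "well-being" factor
time1 = latent + rng.normal(scale=0.8, size=(248, 4))  # 4 correlated items
time2 = time1 + rng.normal(scale=0.5, size=(248, 4))   # retest with added noise

print(f"Cronbach's alpha at time 1: {cronbach_alpha(time1):.2f}")
r, _ = pearsonr(time1[:, 0], time2[:, 0])              # test-retest for one item
print(f"Test-retest r for item 1: {r:.2f}")
```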

 

 

The BMAS demonstrated acceptable reliability, especially for such a brief measure. It also demonstrated concurrent and convergent validity with a much longer, commonly used clinical outcome scale. The BMAS is a useful assessment tool for any clinical condition in which it is desirable to track how patients are experiencing their life situation at a given point in time and to monitor change over time. Notably, the BMAS includes healthy relationships as a measure of good clinical outcome.

A daughter’s documentary about her father

One media workshop slated for the APA meeting will be offered by three members of the Association of Family Psychiatrists: Dr. Michael S. Ascher, Dr. Ira Glick, and Dr. Igor Galynker. They will present a film, "Unlisted: A Story of Schizophrenia." This is a soul-searching examination of responsibility – of parents and children, physicians and patients, and of society and citizens – toward those afflicted with severe mental illness. The film was made by Dr. Delaney Ruston, a Seattle general physician who documents the rebuilding of her relationship with her father. "Unlisted" examines the challenging family dynamics that are present when schizophrenia occurs. Dr. Ruston works hard to overcome the obstacles in accessing appropriate treatment for her father, and her documentary exposes the many failings of the American mental health system as experienced by the families. Dr. Ruston traces the progression of her father’s illness. She studies his medical files and narrates from his autobiographical surrealist novel. In beautifully portrayed scenes, "Unlisted" enters the inner life of Richard Ruston with a clarity and affection missing from many films about people with mental illness.

In summary, family-oriented patient care can be delivered in many ways, from focusing on relational improvement in individual work, to being aware of how to assess and measure relational functioning briefly at each visit, to being able to listen to the accounts of family members and invite them into the treatment room.

Dr. Heru is with the department of psychiatry at the University of Colorado at Denver, Aurora. She is editor of the recently published book, "Working With Families in Medical Settings: A Multidisciplinary Guide for Psychiatrists and Other Health Professions" (New York: Routledge, March 2013), and has been a member of the Association of Family Psychiatrists since 2002.

Meeting/Event
Author and Disclosure Information

Publications
Topics
Legacy Keywords
bupropion, paroxetine, diazepam, alprazolam, zolpidem, depression, psychiatry, medication
Sections
Author and Disclosure Information

Author and Disclosure Information

Meeting/Event
Meeting/Event

Diana looked at her pill bottles and wondered why she was on all these medications when she did not feel any better. She looked at the five bottles: bupropion, paroxetine, diazepam, alprazolam, and zolpidem. She thought about the side effects she was experiencing.

She had been taking this cocktail, in various dosages, for the best part of a year now. Her depression remained unchanged. She made a decision that she would tell her psychiatrist that she wanted off the medications at her next visit. She would then ask for other treatments. She had found many therapies offered on the Internet for treatment of depression, and she hoped her psychiatrist would be able to help her decide which therapies might be best suited for her. Perhaps she would agree to stay on one medication as a compromise as she knew her psychiatrist thought treatment of depression with medication to be important.

Up to 30% of patients with depression do not respond to multiple treatment trials and are considered to have treatment-resistant depression. Most treatment trials for these patients focus on symptom reduction as a goal. This emphasis on symptom reduction often leads to tunnel vision, where other evidence-based treatments become marginalized by psychiatrists. Thus, patients like Diana end up on multiple medications, without an integrated approach to assessment or discussion of combined treatments (medications and psychotherapy).

Dr. Gabor Keitner, who practices in Providence, R.I., and is a member of the Association of Family Psychiatrists, offers a new program aimed at helping patients manage their depression. His philosophical stance is that depression is a chronic illness and that expecting symptoms to be cured with medications is, for most patients, a false hope perpetuated by a consumer society, where the pharmaceutical industry has dominated the education of patients, their families, and the psychiatric profession. He conceptualizes depression, like other chronic medical illnesses, such as diabetes or hypertension, with a similar range of severity. Therefore, the assessment and treatment of depression requires a more nuanced approach.

He is scheduled to present his Management of Depression (MOD) program at this year’s American Psychiatric Association meeting in San Francisco. His MOD program focuses on how a patient such as Diana can build a satisfying life with meaningful goals and relationships – even if her depressive symptoms persist.

In his pilot study, 30 patients with treatment-resistant depression were randomized to treatment as usual (TAU, n = 13) or the MOD program (n = 17) for 12 weeks. The patients in the MOD group had significant improvement in perception of social support (P < .034) and purpose in life (P < .038) scores, in contrast to the TAU group. The MOD group participated in nine adjunctive sessions of disease management focused therapy. The Scales of Psychological Well-Being measured purpose in life, life goals, and meaning. Social support was measured with the Multidimensional Scale of Perceived Social Support. Depression severity was measured by the Montgomery-Åsberg Depression Rating Scale. Patients were assessed at baseline and week 12. Both groups of patients had significant improvements in their depressive symptoms (TAU 35.46 to 25.9 P < .010; MOD 31.88 to 22.41 P < .001) but continued to experience moderate levels of depression. Adjunctive treatment focusing on functioning, life meaning, and relationships, as opposed to symptom reduction, will help Diana to have a more satisfying life, despite her symptoms of depression.

Measuring relational functioning briefly

In another session, Dr. Keitner is slated to present "The Brief Multidimensional Assessment Scale (BMAS): A Mental Health Check Up," coauthored with Abigail K. Mansfield Maraccio, Ph.D., and Joan Kelley. The scale evaluates global mental health outcomes, including quality of life, symptoms, functioning, and relationships, and can be used to assess the clinical status of patients at every health encounter and over the course of an illness. Most available scales are too long for routine clinical use, focus on a narrow range of symptoms, or are limited to specific diagnostic groups. Best of all, this new scale takes less than a minute to complete.

The BMAS was tested against the Outcome Questionnaire–45 (OQ45) with 248 psychiatric outpatients as part of their standard ongoing care. Internal consistency, evaluated with Cronbach’s alpha, was .75 for the four items. Test-retest reliability, assessed with Pearson’s r, ranged from .45 (symptom severity, which can fluctuate daily) to .79 (quality of life) across the BMAS items. Concurrent and convergent validity were analyzed with Pearson product-moment correlations between the BMAS and OQ45 scales; all correlations were significant for the relevant dimensions.

The BMAS demonstrated acceptable reliability, especially for such a brief measure, as well as concurrent and convergent validity with a much longer, commonly used clinical outcome scale. It is a useful assessment tool for any clinical condition in which clinicians want to track how a patient is experiencing his or her life situation at a given point in time and to monitor change over time. Notably, the BMAS includes healthy relationships as a measure of good clinical outcome.
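
For readers who want to run the same reliability checks on a brief measure of their own, the arithmetic is simple enough to script. The following is a minimal sketch in Python that uses simulated ratings in place of the BMAS data; the item count, rating range, and variable names are assumptions for illustration, not details taken from the study.

```python
import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Internal consistency for a (respondents x items) score matrix."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Simulated ratings standing in for a four-item scale given to 248 outpatients twice.
rng = np.random.default_rng(0)
time1 = rng.integers(1, 8, size=(248, 4))                              # hypothetical 1-7 ratings
time2 = np.clip(time1 + rng.integers(-1, 2, size=time1.shape), 1, 7)   # retest with small drift

print(f"Cronbach's alpha: {cronbach_alpha(time1):.2f}")

# Test-retest reliability: correlate each item's scores across the two administrations.
for item in range(time1.shape[1]):
    r, _ = pearsonr(time1[:, item], time2[:, item])
    print(f"Item {item + 1} test-retest r = {r:.2f}")
```

With real scale data in place of the simulated arrays, the same two calculations yield the internal consistency and test-retest figures of the kind reported for the BMAS.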

A daughter’s documentary about her father

One media workshop slated for the APA meeting will be offered by three members of the Association of Family Psychiatrists: Dr. Michael S. Ascher, Dr. Ira Glick, and Dr. Igor Galynker. They will present the film "Unlisted: A Story of Schizophrenia," a soul-searching examination of responsibility – of parents and children, physicians and patients, and society and citizens – toward those afflicted with severe mental illness. The film was made by Dr. Delaney Ruston, a Seattle general physician who documents the rebuilding of her relationship with her father. "Unlisted" examines the challenging family dynamics that arise when schizophrenia occurs. Dr. Ruston works hard to overcome the obstacles to accessing appropriate treatment for her father, and her documentary exposes the many failings of the American mental health system as experienced by families. She traces the progression of her father’s illness, studies his medical files, and narrates from his autobiographical surrealist novel. In beautifully portrayed scenes, "Unlisted" enters the inner life of Richard Ruston with a clarity and affection missing from many films about people with mental illness.

In summary, family-oriented patient care can be delivered in many ways, from focusing on relational improvement in individual work, to being aware of how to assess and measure relational functioning briefly at each visit, to being able to listen to the accounts of family members and invite them into the treatment room.

Dr. Heru is with the department of psychiatry at the University of Colorado at Denver, Aurora. She is editor of the recently published book, "Working With Families in Medical Settings: A Multidisciplinary Guide for Psychiatrists and Other Health Professions" (New York: Routledge, March 2013), and has been a member of the Association of Family Psychiatrists since 2002.

Only doctors can save America

Dr. Ezekiel J. Emanuel, one of the brains behind Obamacare, has a blunt message for his fellow physicians:

Only you can save America.

He's not just talking about medicine. As might befit someone who holds a faculty title at the business-oriented Wharton School at the University of Pennsylvania, Dr. Emanuel spent much of his keynote address here at the American College of Physicians' annual meeting in San Francisco talking about the U.S. economy. The enormous impact of runaway spending on U.S. health care threatens "everything we care about," including access to health care, state funds available for education, corporate wages for the middle class, and the fiscal health of the nation, he said.

"More than any other group in America, doctors have the power to solve our long-term economic challenges to ensure a prosperous future," Dr. Emanuel said.

If the U.S. health care system were a country, its nearly $3 trillion economy in 2012 would be the fifth largest in the world, behind only the U.S. as a whole, China, Japan, and Germany. "We spend more on health care in this country than the 66 million French spend on everything in their society," he said. "It is an astounding number how much we spend on health care."

Take just the federal portions of Medicare and Medicaid, excluding state spending, and you've still got the 16th largest economy in the world, bigger than the economies of Switzerland, Turkey, or the Netherlands, for example. The impact of any other fiscal variable on the U.S. economy, including Social Security, is swamped by the impact of health care costs, said Dr. Emanuel, who is also chair of medical ethics and health policy at the University of Pennsylvania, Philadelphia.

Per person, the United States far outspends other countries when it comes to health care, and the proportion of the gross domestic product consumed by health care keeps getting larger and larger.

Dr. Emanuel served as a special adviser for health policy to the director of the federal Office of Management and Budget in 2009-2011 – during the design, passage, and first steps to implementation of the Patient Protection and Affordable Care Act (commonly known as Obamacare) – and he seemed to address some critics in absentia who have claimed that health care reform will lead to unwanted rationing of care. There's no need to ration, Dr. Emanuel said. Switzerland doesn't ration care, and it spends far less per capita for what is considered quality health care. "We can do a better job in this country of controlling costs without the need to ration care," he said.

The only way to really control costs is to transform the way U.S. health care is delivered, he said. Ten percent of U.S. patients account for 63% of dollars spent on health care. "You know who they are – people with congestive heart failure, COPD, diabetes, adult asthma, coronary artery disease, cancer. People with multiple chronic illnesses. That's where the money's going. That's where the uneven quality is," and that's where health care delivery needs to improve, he said.

Dr. Emanuel proposed six essential components to transforming the health care system. Among them: The focus needs to be on cost relative to value and on getting rid of services with no value. The system must focus on patients' needs, not on physicians' schedules or other concerns. And the system must evolve toward clinicians working as teams that include allied health professionals, not as individuals. "We are not going to be, going forward, one-sies and two-sies in practice" anymore, he said.

Greater emphasis on delivering health care via organizations and systems, standardization of processes, and transparency around price and quality will be essential, he added.

Transparency in pricing and quality isn't just something consumers will want. Physicians will want it in order to refer patients to quality care and set prices appropriately, Dr. Emanuel argued. "I think this is inevitable, and I think it's going to happen faster than you think," he said.

Most U.S. physicians are stuck in fee-for-service payment systems, which don't provide the incentives needed for change, he said. Doctors "as a group" should push for changes to the payment system, which will increase physician autonomy but also will assign more financial risk to physicians. "I see no way of getting out of that," Dr. Emanuel said.

In his view, if doctors don't push for changes in how health care is delivered, Americans can basically kiss the U.S. economy and future prosperity good-bye. "Doctors are the only people who can re-engineer the delivery system," he said. "If you don't do it, it ain't gonna happen. It's that simple." All previous reform efforts that lacked physician leadership have failed, he said.

"You have to lead this," he explained.

No one should expect that reforming the fifth-largest economy in the world could be accomplished in just a few years, however. "It's going to take this decade," Dr. Emanuel predicted.

Dr. Emanuel reported having no financial disclosures.

sboschert@frontlinemedcom.com

Twitter: @sherryboschert

Patient Prediction Model Trims Avoidable Hospital Readmissions

A new prediction model whose name spells out a familiar word can help identify potentially avoidable hospital readmissions, according to a report in JAMA Internal Medicine.

The retrospective cohort study, "Potentially Avoidable 30-Day Hospital Readmissions in Medical Patients," used a model dubbed HOSPITAL to create a score that targets the patients most likely to benefit from pre-discharge interventions. The model is based on seven factors: hemoglobin at discharge, discharge from an oncology service, sodium level at discharge, any procedure during the index admission, index admission type, number of admissions in the prior 12 months, and length of stay. The HOSPITAL score had fair discriminatory power (C statistic, 0.71) and good calibration, the authors noted.
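
For illustration, a scoring model of this kind can be written as a simple tally over the seven factors. The thresholds and point values in the sketch below are assumptions added for the example – the article does not list the published weights – so anyone implementing the score should take the validated values from the JAMA Internal Medicine paper itself.

```python
def hospital_score(hemoglobin_g_dl: float,
                   oncology_discharge: bool,
                   sodium_mmol_l: float,
                   procedure_during_stay: bool,
                   nonelective_admission: bool,
                   admissions_prior_year: int,
                   length_of_stay_days: int) -> int:
    """Tally the seven HOSPITAL factors into a readmission-risk score.

    Thresholds and weights here are illustrative placeholders, not the
    validated values from the original study.
    """
    score = 0
    if hemoglobin_g_dl < 12:          # low hemoglobin at discharge
        score += 1
    if oncology_discharge:            # discharge from an oncology service
        score += 2
    if sodium_mmol_l < 135:           # low sodium at discharge
        score += 1
    if procedure_during_stay:         # procedure during the index admission
        score += 1
    if nonelective_admission:         # index admission type (nonelective)
        score += 1
    if admissions_prior_year > 5:     # admissions in the prior 12 months
        score += 5
    elif admissions_prior_year >= 2:
        score += 2
    if length_of_stay_days >= 5:      # length of stay
        score += 2
    return score

# Example: a nonelective oncology admission with anemia, low sodium, and a long stay.
print(hospital_score(10.5, True, 133, True, True, 3, 7))  # higher score = higher readmission risk
```

Flagging patients above a chosen cutoff would then concentrate costly pre-discharge interventions on the group most likely to benefit, which is the use case Dr. Schnipper describes below.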

"By definition, these [interventions] are expensive and you really want to reserve them for the patients that are most likely to benefit," says study co-author Jeffrey Schnipper, MD, MPH, FHM, director of clinical research and an associate physician in the general medicine division at Brigham and Women's Hospital in Boston.

The study identified 879 potentially avoidable readmissions among 10,731 eligible discharges, or 8.5%. The estimated potentially avoidable readmission risk was 18%. Dr. Schnipper says the model could cut readmissions by 2% to 3% in absolute terms.

"This is an evolution of sophistication in how we think about this work," Dr. Schnipper adds. "Not all patients have a preventable readmission. Maybe some of those patients are more likely to benefit. The next step is to prove it. That's the gold standard and that’s our next study." TH

Visit our website for more information on 30-day readmissions.

Hospitals Seek Ways to Defuse Angry Doctors

Everyone is prone to an angry outburst from time to time, and doctors are no exception. Given the well-documented negative effects on morale, nurse retention, and patient safety, it's safe to say anger issues occasionally crop up among the nearly 40,000 practicing hospitalists throughout the U.S.

A recent article in Kaiser Health News describes efforts by hospitals to deal with physicians' tirades, such as a three-day counseling program developed at Vanderbilt University in Nashville, Tenn.

"All physicians need to be aware that there should be a 'zero tolerance' attitude for disruptive behavior, hospitalists included, and that disruptive behavior undermines a culture of safety, and therefore can put patients in danger," says Danielle Scheurer, MD, MSCR, SFHM, hospitalist and chief quality officer at Medical University of South Carolina in Charleston and physician editor of The Hospitalist.

In 2009, The Joint Commission issued a Sentinel Event Alert about intimidating and disruptive behaviors by physicians and the ways in which hospitals can address the issue.

The problem is not unique to any physician specialty, including hospital medicine, says Alan Rosenstein, MD, an internist and disruptive behavior researcher based in San Francisco. A physician's training or personality might contribute to angry outbursts, but excessive workloads can cause pressure, stress, and burnout, which in turn can lead to poor behavior.

"Hospitals can no longer afford to look the other way," Dr. Rosenstein says. "I look at physicians as a precious resource. The organizations they're affiliated with need to be more proactive and empathetic, intervening before the problem reaches the stage of requiring discipline through techniques such as coaching and stress management." TH

Visit our website for more information about the impact of workloads on hospitalists.
