Documentation of Clinical Reasoning in Admission Notes of Hospitalists: Validation of the CRANAPL Assessment Rubric

Scott Wright, MD

Department of Medicine, Johns Hopkins University School of Medicine and Johns Hopkins Bayview Medical Center, Baltimore, Maryland

Approximately 60,000 hospitalists were working in the United States in 2018.1 Hospitalist groups work collaboratively because of the shiftwork required for 24/7 patient coverage, and first-rate clinical documentation is essential for quality care.2 Thoughtful clinical documentation not only transmits one provider’s clinical reasoning to other providers but is also a professional responsibility.3 Hospitalists spend two-thirds of their time on indirect patient-care activities and approximately one-quarter of their time on documentation in electronic health records (EHRs).4 Although documentation occupies a substantial portion of the clinician’s time, published literature on best practices for documenting clinical reasoning in hospital medicine, or for assessing it, remains scant.5-7

Clinical reasoning involves establishing a diagnosis and developing a therapeutic plan that fits the unique circumstances and needs of the patient.8 Inpatient providers who admit patients to the hospital end the admission note with their assessment and plan (A&P) after reflecting on a patient’s presenting illness. The A&P generally represents the interpretations, deductions, and clinical reasoning of the inpatient providers; it is the section of the note that fellow physicians concentrate on most.9 The documentation of clinical reasoning in the A&P allows many readers to consider how the recorded interpretations relate to their own conclusions, resulting in distributed cognition.10

Disorganized documentation can contribute to cognitive overload and impede thoughtful consideration of the clinical presentation.3 The assessment of clinical documentation may translate into reduced medical errors and improved note quality.11,12 Studies that have formally evaluated the documentation of clinical reasoning have focused exclusively on medical students.13-15 The absence of a detailed rubric for evaluating clinical reasoning in the A&Ps of hospitalists represents a missed opportunity for evaluating what hospitalists “do”; if such evaluation evolves into a mechanism for offering formative feedback, this professional development would impact the highest level of Miller’s assessment pyramid.16 We therefore undertook this study to establish a metric for assessing hospitalists’ documentation of clinical reasoning in the A&P of an admission note.

METHODS

Study Design, Setting, and Subjects

This was a retrospective study that reviewed the admission notes of hospitalists for patients admitted between January 2014 and October 2017 at three hospitals in Maryland. One is a community hospital (Hospital A) and two are academic medical centers (Hospital B and Hospital C). Even though these three hospitals are part of one health system, they have distinct cultures and leadership, serve different populations, and are staffed by different provider teams.

The notes of physicians working for the hospitalist groups at each of the three hospitals were the focus of the analysis in this study.

Development of the Documentation Assessment Rubric

A team was assembled to develop the Clinical Reasoning in Admission Note Assessment & PLan (CRANAPL) tool. The CRANAPL was designed to assess the comprehensiveness and thoughtfulness of the clinical reasoning documented in the A&P sections of the notes of patients who were admitted to the hospital with an acute illness. Validity evidence for CRANAPL was summarized on the basis of Messick’s unified validity framework by using four of the five sources of validity: content, response process, internal structure, and relations to other variables.17

Content Validity

The development team consisted of members who have an average of 10 years of clinical experience in hospital medicine; have studied clinical excellence and clinical reasoning; and have expertise in feedback, assessment, and professional development.18-22 The development of the CRANAPL tool by the team was informed by a review of the clinical reasoning literature, with particular attention paid to the standards and competencies outlined by the Liaison Committee on Medical Education, the Association of American Medical Colleges, the Accreditation Council for Graduate Medical Education, the Internal Medicine Milestone Project, and the Society of Hospital Medicine.23-26 For each of these bodies, diagnostic reasoning and its impact on clinical decision-making are considered a core competency. Several works heavily influenced the CRANAPL tool’s development: Baker’s Interpretive Summary, Differential Diagnosis, Explanation of Reasoning, And Alternatives (IDEA) assessment tool;14 King’s Pediatric History and Physical Exam Evaluation (P-HAPEE) rubric;15 and three other studies related to diagnostic reasoning.16,27,28 These manuscripts and other works substantively informed the preliminary behavior-based anchors that formed the initial foundation of the tool. The CRANAPL tool was shown to colleagues at other institutions who are leaders in clinical reasoning and was presented at academic conferences in the Division of General Internal Medicine and the Division of Hospital Medicine of our institution. Feedback resulted in iterative revisions. These methods established content validity evidence for the CRANAPL tool.

Response Process Validity

Several of the authors pilot-tested earlier iterations of the CRANAPL tool on admission notes that were excluded from the study sample. Weaknesses and sources of confusion with specific items were addressed by scoring 10 A&Ps individually and then comparing the data captured on the tool. This cycle was repeated three times to iteratively enhance and finalize the CRANAPL tool. On several occasions when two authors were piloting the near-final tool, a third author interviewed each of them about their reactions to individual items and probed how their own clinical documentation practices influenced their scoring of the notes. The reasonable and thoughtful answers the two authors provided as they explained and justified their scores during pilot testing conferred response process validity evidence.

Finalizing the CRANAPL Tool

The nine-item CRANAPL tool includes elements for problem representation, leading diagnosis, uncertainty, differential diagnosis, plans for diagnosis and treatment, estimated length of stay (LOS), potential for upgrade to a higher level of care, and consideration of disposition. Although the final three items are not core clinical reasoning domains in the medical education literature, they represent clinical judgments that are especially relevant to delivering high-quality, cost-effective care to hospitalized patients. Given that the probabilities and estimations of these three elements evolve over the course of any hospitalization on the basis of test results and response to therapy, documenting initial expectations on these fronts can facilitate distributed cognition, with all individuals becoming wiser from shared insights.10 The tool uses two- and three-point rating scales, with each numeric score clearly defined by specific written criteria (total score range: 0-14; Appendix).
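As a concrete illustration, the nine items and the two- and three-point scales described above can be represented as a simple scoring structure. This is a hypothetical sketch, not the instrument itself: the text specifies the 0-14 total and per-item maximums only for a few items (eg, LOS and upgrade are scored 0-1), so the remaining maximums below are assumptions chosen to sum to 14.

```python
# Hypothetical sketch of CRANAPL scoring. Item names follow the text; the
# maximums marked "assumed" are illustrative only, chosen so that the nine
# two- and three-point items sum to the stated 0-14 range.
CRANAPL_ITEMS = {
    "problem_representation": 2,    # assumed 0-2
    "leading_diagnosis": 2,         # assumed 0-2
    "uncertainty_acknowledged": 1,  # assumed 0-1
    "differential_diagnosis": 2,    # assumed 0-2
    "plan_for_diagnosis": 2,        # assumed 0-2
    "plan_for_treatment": 2,        # assumed 0-2
    "estimated_los": 1,             # 0-1 per the text
    "upgrade_potential": 1,         # 0-1 per the text
    "disposition": 1,               # assumed 0-1
}

def total_score(ratings: dict) -> int:
    """Sum a note's item ratings, checking each against its allowed range."""
    total = 0
    for item, max_pts in CRANAPL_ITEMS.items():
        r = ratings.get(item, 0)
        if not 0 <= r <= max_pts:
            raise ValueError(f"{item} must be between 0 and {max_pts}, got {r}")
        total += r
    return total

# The maximum possible total matches the tool's stated range.
assert sum(CRANAPL_ITEMS.values()) == 14
```

Structuring the rubric this way makes explicit that the total is a simple unweighted sum, which is consistent with the tool’s reported 0-14 range.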

Data Collection

Hospitalists’ admission notes from the three hospitals were used to validate the CRANAPL tool. Admission notes from patients admitted to the general medical floors with an admission diagnosis of fever, syncope/dizziness, or abdominal pain were used. These diagnoses were purposefully examined because they (1) have wide differential diagnoses, (2) are common presenting symptoms, and (3) are prone to diagnostic errors.29-32

The centralized EHR system across the three hospitals was used to identify admission notes with one of these primary diagnoses for patients admitted between January 2014 and October 2017. We requested 650 admission notes randomly selected from the centralized institutional records system, stratified by hospital and diagnosis. The sample size of our study was comparable with that of prior psychometric validation studies.33,34 Upon reviewing the A&Ps associated with these admissions, 365 notes were excluded for one of three reasons: (1) the note was written by a nurse practitioner, physician assistant, resident, or medical student; (2) the admission diagnosis had been definitively confirmed in the emergency department (eg, abdominal pain due to diverticulitis seen on CT); or (3) the note was the fourth or later note by a single provider (to sample the notes of many providers, no more than three notes by any single provider were analyzed). A total of 285 admission notes were ultimately included in the sample.

Data were deidentified, and the A&P sections of the admission notes were each copied from the EHR into a unique Word document. Patient and hospital demographic data (including age, gender, race, number of comorbid conditions, LOS, hospital charges, and readmission to the same health system within 30 days) were collected separately from the EHR. Select physician characteristics were also collected from the hospitalist groups at each of the three hospitals, as was the length (word count) of each A&P.

The study was approved by our institutional review board.

Data Analysis

Two authors scored all deidentified A&Ps using the finalized version of the CRANAPL tool. Before applying the CRANAPL tool to each note, these raters read each A&P and scored it on two single-item rating scales: a global clinical reasoning measure and a global readability/clarity measure. Both global scales used three-point Likert ratings (below average, average, and above average) and captured the reviewers’ gestalt about the quality and clarity of the A&P. The use of gestalt ratings as comparators is supported by other research.35

Descriptive statistics were computed for all variables. Each rater rescored a sample of 48 records one month after the initial scoring, and intraclass correlations (ICCs) were computed for intrarater reliability. ICCs were also calculated for each item and for the CRANAPL total score to determine interrater reliability.

The averaged ratings from the two raters were used for all other analyses. For internal structure validity evidence, Cronbach’s alpha was calculated as a measure of the CRANAPL tool’s internal consistency. For relations to other variables validity evidence, CRANAPL total scores were compared with the two global assessment variables using linear regressions.
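To make the internal consistency computation concrete, the following is a minimal sketch of Cronbach’s alpha applied to a notes-by-items score matrix. This is not the study’s Stata code; it assumes scores are arranged with one row per note and one column per CRANAPL item, and the demonstration data are fabricated.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a 2-D array: rows = notes, columns = rubric items.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of per-note totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative use on fabricated data (285 notes x 9 items, mirroring the study design).
rng = np.random.default_rng(0)
demo_scores = rng.integers(0, 3, size=(285, 9)).astype(float)
print(round(cronbach_alpha(demo_scores), 2))
```

Averaging the two raters’ scores before computing alpha, as done in the study, reduces rater-specific noise in the item variances.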

Bivariate analyses were performed by applying parametric and nonparametric tests as appropriate. A series of multivariate linear regressions, controlling for diagnosis and clustered variance by hospital site, were performed using CRANAPL total as the dependent variable and patient variables as predictors.

All data were analyzed using Stata version 13 (StataCorp LP, College Station, Texas).

RESULTS

The admission notes of 120 hospitalists were evaluated (Table 1). A total of 39 (33%) physicians were moonlighters with primary appointments outside of the hospitalist division, and 81 (68%) were full-time hospitalists. Among the 120 hospitalists, 48 (40%) were female, 60 (50%) were international medical graduates, and 90 (75%) were of nonwhite race. Most hospitalist physicians (n = 47, 58%) had worked in our health system for less than five years, and 64 hospitalists (53%) devoted greater than 50% of their time to patient care.

Approximately equal numbers of patient admission notes were pulled from each of the three hospitals. The average age of patients was 67.2 (SD 13.6) years, 145 (51%) were female, and 120 (42%) were of nonwhite race. The mean LOS for all patients was 4.0 (SD 3.4) days. A total of 44 (15%) patients were readmitted to the same health system within 30 days of discharge. None of the patients died during the incident hospitalization. The average charge for each of the hospitalizations was $10,646 (SD $9,964).

CRANAPL Data

Figure 1 shows the distribution of the scores given by each rater for each of the nine items. The mean total CRANAPL score across both raters was 6.4 (SD 2.2). Scores for some items were high (eg, summary statement: 1.5/2), whereas scores for others were low (eg, estimating LOS: 0.1/1 and describing the potential need for upgrade in care: 0.0/1).

Validity of the CRANAPL Tool’s Internal Structure

Cronbach’s alpha, measuring internal consistency within the CRANAPL tool, was 0.43. The ICC for interrater reliability between the two raters on the total CRANAPL score was 0.83 (95% CI: 0.76-0.87). The ICCs for intrarater reliability for raters 1 and 2 were 0.73 (95% CI: 0.60-0.83) and 0.73 (95% CI: 0.45-0.86), respectively.

Relations to Other Variables Validity

Associations between CRANAPL total scores and the global scores for clinical reasoning and for note readability/clarity were statistically significant (P < .001; Figure 2).

When data were analyzed by hospital site, eight of the nine CRANAPL variables differed statistically significantly across the three hospitals (P < .01). Hospital C had the highest mean total CRANAPL score at 7.4 (SD 2.0), followed by Hospital B at 6.6 (SD 2.1) and Hospital A at 5.2 (SD 1.9); this difference was statistically significant (P < .001). Five variables (uncertainty acknowledged, differential diagnosis, plan for diagnosis, plan for treatment, and upgrade plan) differed statistically significantly across admission diagnoses. Notes for syncope/dizziness generally yielded higher scores than those for abdominal pain or fever.

Factors Associated with High CRANAPL Scores

Table 2 shows the associations between CRANAPL scores and several covariates. Before adjustment, higher CRANAPL scores were associated with longer A&Ps (higher word counts; P < .001) and higher hospital charges (P < .05). These associations were no longer significant after adjusting for hospital site and admitting diagnosis.

DISCUSSION

We reviewed the documentation of clinical reasoning in 285 admission notes written by hospitalist physicians during routine clinical care at three different hospitals. To our knowledge, this is the first study to assess hospitalists’ documentation of clinical reasoning in real patient notes. Wide variability exists in the documentation of clinical reasoning within the A&Ps of hospitalists’ admission notes. We have provided validity evidence to support the use of the user-friendly CRANAPL tool.

Prior studies have described rubrics for evaluating the clinical reasoning skills of medical students.14,15 The ICCs for the IDEA rubric used to assess medical students’ documentation of clinical reasoning were fair to moderate (0.29-0.67), whereas the ICC for the CRANAPL tool was high at 0.83. This measure of reliability is similar to that of the P-HAPEE rubric used to assess medical students’ documentation of pediatric history and physical notes.15 These data differ markedly from those of previous studies that have found low interrater reliability for psychometric evaluations related to judgment and decision-making.36-39 CRANAPL was also found to have high intrarater reliability, which demonstrates the reproducibility of an individual’s assessments over time. The strong association between the total CRANAPL score and the global clinical reasoning assessment found in the present study is similar to that found in previous studies that have also embedded global rating scales as comparators when assessing clinical reasoning.13,15,40,41 Global rating scales offer an overarching structure for comparison given the absence of an accepted method or gold standard for assessing clinical reasoning documentation. High-quality provider notes are defined by clarity, thoroughness, and accuracy,35 and effective documentation promotes communication and the coordination of care among the members of the care team.3

The total CRANAPL scores varied by hospital site, with the academic hospitals (B and C) scoring higher than the community hospital (A). Similarly, longer A&Ps were associated with higher CRANAPL scores (P < .001) before adjustment for hospital site. Healthcare providers consider thoroughness of documentation a marker of quality and attention to detail.35,42 Comprehensive documentation takes time; the longer notes written by academic hospitalists may reflect the smaller patient loads generally carried by hospitalists at academic centers compared with those at community hospitals.43

The documentation of estimated LOS, the possibility of upgrade, and thoughts about disposition was consistently poor across all hospital sites and diagnoses. In contrast to CRANAPL, other clinical reasoning rubrics have excluded these items as well as discussion of uncertainty.14,15,44 These elements represent the forward thinking that may be essential for high-quality progressive care by hospitalists. Physicians’ difficulty in acknowledging uncertainty has been associated with resource overuse, including the excessive ordering of tests, iatrogenic injury, and heavy financial burden on the healthcare system.45,46 The lack of thoughtful clinical and management reasoning at the time of admission is believed to be associated with medical errors.47 If used as a guide, the CRANAPL tool may promote reflection on the part of the admitting physician. The estimations of LOS, potential for upgrade to a higher level of care, and disposition are markers of optimal inpatient care, especially for hospitalists who work in shifts with embedded handoffs. When these judgments are shared with colleagues through documentation, distributed cognition10 can extend throughout the social network of the hospitalist group. That so few providers currently include these items in their A&Ps shows that providers are either not performing this reasoning or not documenting it. Either way, this is an opportunity highlighted by the CRANAPL tool.

Several limitations of this study should be considered. First, the CRANAPL tool may not have captured all elements of optimal clinical reasoning documentation; the reliance on multiple methods and an iterative refinement process should have minimized this risk. Second, this study was conducted within a single healthcare system that uses one EHR, and this EHR or the institutional culture may influence documentation practices and behaviors. Given that using the CRANAPL tool to score an A&P is quick and easy, the benefit of giving providers feedback on their notes, both here and at other hospitals, remains to be seen. Third, our sample size could limit the generalizability of the results and the significance of the associations. However, the sample assessed in our study was substantially larger than those assessed in other studies validating clinical reasoning rubrics.14,15 Fourth, clinical reasoning is a broad and multidimensional construct. The CRANAPL tool focuses exclusively on hospitalists’ documentation of clinical reasoning and therefore does not assess aspects of clinical reasoning occurring in the physicians’ minds. Finally, given our goal to optimally validate the CRANAPL tool, we chose to test the tool on specific presentations that are known to be associated with diagnostic practice variation and errors. We may have observed different results had we chosen a different set of diagnoses from each hospital. Further validity evidence will be established by applying the CRANAPL tool to different diagnoses and to notes from other clinical settings.

In conclusion, this study describes the development and validation of the CRANAPL tool, which assesses how hospitalists document their clinical reasoning in the A&P section of admission notes. Our results show that wide variability exists in the documentation of clinical reasoning by hospitalists within and across hospitals. Given the CRANAPL tool’s ease of use and versatility, hospitalist divisions in academic and nonacademic settings may use it to assess and provide feedback on the documentation of hospitalists’ clinical reasoning. Beyond studying whether physicians can be taught to improve their notes with feedback based on the CRANAPL tool, future studies may explore whether enhancing clinical reasoning documentation is associated with improvements in patient care and clinical outcomes.


Acknowledgments

Dr. Wright is the Anne Gaines and G. Thomas Miller Professor of Medicine, a professorship supported through the Johns Hopkins Center for Innovative Medicine.

The authors thank Christine Caufield-Noll, MLIS, AHIP (Johns Hopkins Bayview Medical Center, Baltimore, Maryland) for her assistance with this project.

Disclosures

The authors have nothing to disclose.

References

1. State of Hospital Medicine. Society of Hospital Medicine. https://www.hospitalmedicine.org/practice-management/shms-state-of-hospital-medicine/. Accessed August 19, 2018.
2. Mehta R, Radhakrishnan NS, Warring CD, et al. The use of evidence-based, problem-oriented templates as a clinical decision support in an inpatient electronic health record system. Appl Clin Inform. 2016;7(3):790-802. https://doi.org/10.4338/ACI-2015-11-RA-0164
3. National Academies of Sciences, Engineering, and Medicine. Improving Diagnosis in Health Care. Washington, DC: The National Academies Press; 2015. http://www.nationalacademies.org/hmd/Reports/2015/Improving-Diagnosis-in-Healthcare.aspx. Accessed August 7, 2018.
4. Tipping MD, Forth VE, O’Leary KJ, et al. Where did the day go? A time-motion study of hospitalists. J Hosp Med. 2010;5(6):323-328. https://doi.org/10.1002/jhm.790
5. Varpio L, Rashotte J, Day K, King J, Kuziemsky C, Parush A. The EHR and building the patient’s story: a qualitative investigation of how EHR use obstructs a vital clinical activity. Int J Med Inform. 2015;84(12):1019-1028. https://doi.org/10.1016/j.ijmedinf.2015.09.004
6. Clynch N, Kellett J. Medical documentation: part of the solution, or part of the problem? A narrative review of the literature on the time spent on and value of medical documentation. Int J Med Inform. 2015;84(4):221-228. https://doi.org/10.1016/j.ijmedinf.2014.12.001
7. Varpio L, Day K, Elliot-Miller P, et al. The impact of adopting EHRs: how losing connectivity affects clinical reasoning. Med Educ. 2015;49(5):476-486. https://doi.org/10.1111/medu.12665
8. McBee E, Ratcliffe T, Schuwirth L, et al. Context and clinical reasoning: understanding the medical student perspective. Perspect Med Educ. 2018;7(4):256-263. https://doi.org/10.1007/s40037-018-0417-x
9. Brown PJ, Marquard JL, Amster B, et al. What do physicians read (and ignore) in electronic progress notes? Appl Clin Inform. 2014;5(2):430-444. https://doi.org/10.4338/ACI-2014-01-RA-0003
10. Katherine D, Shalin VL. Creating a common trajectory: shared decision making and distributed cognition in medical consultations. https://pxjournal.org/cgi/viewcontent.cgi?article=1116&context=journal. Accessed April 4, 2019.
11. Harchelroad FP, Martin ML, Kremen RM, Murray KW. Emergency department daily record review: a quality assurance system in a teaching hospital. QRB Qual Rev Bull. 1988;14(2):45-49. https://doi.org/10.1016/S0097-5990(16)30187-7.
12. Opila DA. The impact of feedback to medical housestaff on chart documentation and quality of care in the outpatient setting. J Gen Intern Med. 1997;12(6):352-356. https://doi.org/10.1007/s11606-006-5083-8.
13. Smith S, Kogan JR, Berman NB, Dell MS, Brock DM, Robins LS. The development and preliminary validation of a rubric to assess medical students’ written summary statements in virtual patient cases. Acad Med. 2016;91(1):94-100. https://doi.org/10.1097/ACM.0000000000000800
14. Baker EA, Ledford CH, Fogg L, Way DP, Park YS. The IDEA assessment tool: assessing the reporting, diagnostic reasoning, and decision-making skills demonstrated in medical students’ hospital admission notes. Teach Learn Med. 2015;27(2):163-173. https://doi.org/10.1080/10401334.2015.1011654
15. King MA, Phillipi CA, Buchanan PM, Lewin LO. Developing validity evidence for the written pediatric history and physical exam evaluation rubric. Acad Pediatr. 2017;17(1):68-73. https://doi.org/10.1016/j.acap.2016.08.001
16. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9):S63-S67.
17. Messick S. Standards of validity and the validity of standards in performance assessment. Educ Meas Issues Pract. 1995;14(4):5-8. https://doi.org/10.1111/j.1745-3992.1995.tb00881.x
18. Menachery EP, Knight AM, Kolodner K, Wright SM. Physician characteristics associated with proficiency in feedback skills. J Gen Intern Med. 2006;21(5):440-446. https://doi.org/10.1111/j.1525-1497.2006.00424.x
19. Tackett S, Eisele D, McGuire M, Rotello L, Wright S. Fostering clinical excellence across an academic health system. South Med J. 2016;109(8):471-476. https://doi.org/10.14423/SMJ.0000000000000498
20. Christmas C, Kravet SJ, Durso SC, Wright SM. Clinical excellence in academia: perspectives from masterful academic clinicians. Mayo Clin Proc. 2008;83(9):989-994. https://doi.org/10.4065/83.9.989
21. Wright SM, Kravet S, Christmas C, Burkhart K, Durso SC. Creating an academy of clinical excellence at Johns Hopkins Bayview Medical Center: a 3-year experience. Acad Med. 2010;85(12):1833-1839. https://doi.org/10.1097/ACM.0b013e3181fa416c
22. Kotwal S, Peña I, Howell E, Wright S. Defining clinical excellence in hospital medicine: a qualitative study. J Contin Educ Health Prof. 2017;37(1):3-8. https://doi.org/10.1097/CEH.0000000000000145
23. Common Program Requirements. https://www.acgme.org/What-We-Do/Accreditation/Common-Program-Requirements. Accessed August 21, 2018.
24. Warren J, Lupi C, Schwartz ML, et al. Core Entrustable Professional Activities for Entering Residency: EPA 9 Toolkit. Association of American Medical Colleges; 2017. https://www.aamc.org/download/482204/data/epa9toolkit.pdf. Accessed August 21, 2018.
25. The Internal Medicine Milestone Project. https://www.abim.org/~/media/ABIM Public/Files/pdf/milestones/internal-medicine-milestones-project.pdf. Accessed August 21, 2018.
26. Core Competencies. Society of Hospital Medicine. https://www.hospitalmedicine.org/professional-development/core-competencies/. Accessed August 21, 2018.
27. Bowen JL. Educational strategies to promote clinical diagnostic reasoning. N Engl J Med. 2006;355(21):2217-2225. https://doi.org/10.1056/NEJMra054782
28. Pangaro L. A new vocabulary and other innovations for improving descriptive in-training evaluations. Acad Med. 1999;74(11):1203-1207. https://doi.org/10.1097/00001888-199911000-00012.
29. Rao G, Epner P, Bauer V, Solomonides A, Newman-Toker DE. Identifying and analyzing diagnostic paths: a new approach for studying diagnostic practices. Diagnosis (Berl). 2017;4(2):67-72. https://doi.org/10.1515/dx-2016-0049
30. Ely JW, Kaldjian LC, D’Alessandro DM. Diagnostic errors in primary care: lessons learned. J Am Board Fam Med. 2012;25(1):87-97. https://doi.org/10.3122/jabfm.2012.01.110174
31. Kerber KA, Newman-Toker DE. Misdiagnosing dizzy patients: common pitfalls in clinical practice. Neurol Clin. 2015;33(3):565-75, viii. https://doi.org/10.1016/j.ncl.2015.04.009
32. Singh H, Giardina TD, Meyer AND, Forjuoh SN, Reis MD, Thomas EJ. Types and origins of diagnostic errors in primary care settings. JAMA Intern Med. 2013;173(6):418. https://doi.org/10.1001/jamainternmed.2013.2777.
33. Kahn D, Stewart E, Duncan M, et al. A prescription for note bloat: an effective progress note template. J Hosp Med. 2018;13(6):378-382. https://doi.org/10.12788/jhm.2898
34. Anthoine E, Moret L, Regnault A, Sébille V, Hardouin J-B. Sample size used to validate a scale: a review of publications on newly-developed patient reported outcomes measures. Health Qual Life Outcomes. 2014;12(1):176. https://doi.org/10.1186/s12955-014-0176-2
35. Stetson PD, Bakken S, Wrenn JO, Siegler EL. Assessing electronic note quality using the physician documentation quality instrument (PDQI-9). Appl Clin Inform. 2012;3(2):164-174. https://doi.org/10.4338/ACI-2011-11-RA-0070
36. Govaerts MJB, Schuwirth LWT, Van der Vleuten CPM, Muijtjens AMM. Workplace-based assessment: effects of rater expertise. Adv Health Sci Educ Theory Pract. 2011;16(2):151-165. https://doi.org/10.1007/s10459-010-9250-7
37. Kreiter CD, Ferguson KJ. Examining the generalizability of ratings across clerkships using a clinical evaluation form. Eval Health Prof. 2001;24(1):36-46. https://doi.org/10.1177/01632780122034768
38. Middleman AB, Sunder PK, Yen AG. Reliability of the history and physical assessment (HAPA) form. Clin Teach. 2011;8(3):192-195. https://doi.org/10.1111/j.1743-498X.2011.00459.x
39. Kogan JR, Shea JA. Psychometric characteristics of a write-up assessment form in a medicine core clerkship. Teach Learn Med. 2005;17(2):101-106. https://doi.org/10.1207/s15328015tlm1702_2
40. Lewin LO, Beraho L, Dolan S, Millstein L, Bowman D. Interrater reliability of an oral case presentation rating tool in a pediatric clerkship. Teach Learn Med. 2013;25(1):31-38. https://doi.org/10.1080/10401334.2012.741537
41. Gray JD. Global rating scales in residency education. Acad Med. 1996;71(1):S55-S63.
42. Rosenbloom ST, Crow AN, Blackford JU, Johnson KB. Cognitive factors influencing perceptions of clinical documentation tools. J Biomed Inform. 2007;40(2):106-113. https://doi.org/10.1016/j.jbi.2006.06.006
43. Michtalik HJ, Pronovost PJ, Marsteller JA, Spetz J, Brotman DJ. Identifying potential predictors of a safe attending physician workload: a survey of hospitalists. J Hosp Med. 2013;8(11):644-646. https://doi.org/10.1002/jhm.2088
44. Seo J-H, Kong H-H, Im S-J, et al. A pilot study on the evaluation of medical student documentation: assessment of SOAP notes. Korean J Med Educ. 2016;28(2):237-241. https://doi.org/10.3946/kjme.2016.26
45. Kassirer JP. Our stubborn quest for diagnostic certainty. A cause of excessive testing. N Engl J Med. 1989;320(22):1489-1491. https://doi.org/10.1056/NEJM198906013202211
46. Hatch S. Uncertainty in medicine. BMJ. 2017;357:j2180. https://doi.org/10.1136/bmj.j2180
47. Cook DA, Sherbino J, Durning SJ. Management reasoning. JAMA. 2018;319(22):2267. https://doi.org/10.1001/jama.2018.4385

Journal of Hospital Medicine 14(12):746-753. Published online first June 11, 2019.

Approximately 60,000 hospitalists were working in the United States in 2018.1 Hospitalist groups work collaboratively because of the shiftwork required for 24/7 patient coverage, and first-rate clinical documentation is essential for quality care.2 Thoughtful clinical documentation not only transmits one provider’s clinical reasoning to other providers but is a professional responsibility.3 Hospitalists spend two-thirds of their time in indirect patient-care activities and approximately one quarter of their time on documentation in electronic health records (EHRs).4 Despite documentation occupying a substantial portion of the clinician’s time, published literature on the best practices for the documentation of clinical reasoning in hospital medicine or its assessment remains scant.5-7

Clinical reasoning involves establishing a diagnosis and developing a therapeutic plan that fits the unique circumstances and needs of the patient.8 Inpatient providers who admit patients to the hospital end the admission note with their assessment and plan (A&P) after reflecting about a patient’s presenting illness. The A&P generally represents the interpretations, deductions, and clinical reasoning of the inpatient providers; this is the section of the note that fellow physicians concentrate on over others.9 The documentation of clinical reasoning in the A&P allows for many to consider how the recorded interpretations relate to their own elucidations resulting in distributed cognition.10

Disorganized documentation can contribute to cognitive overload and impede thoughtful consideration about the clinical presentation.3 The assessment of clinical documentation may translate into reduced medical errors and improved note quality.11,12 Studies that have formally evaluated the documentation of clinical reasoning have focused exclusively on medical students.13-15 The nonexistence of a detailed rubric for evaluating clinical reasoning in the A&Ps of hospitalists represents a missed opportunity for evaluating what hospitalists “do”; if this evolves into a mechanism for offering formative feedback, such professional development would impact the highest level of Miller’s assessment pyramid.16 We therefore undertook this study to establish a metric to assess the hospitalist providers’ documentation of clinical reasoning in the A&P of an admission note.

METHODS

Study Design, Setting, and Subjects

This retrospective study reviewed hospitalists’ admission notes for patients admitted between January 2014 and October 2017 at three hospitals in Maryland: one community hospital (Hospital A) and two academic medical centers (Hospitals B and C). Although these three hospitals are part of one health system, they have distinct cultures and leadership, serve different populations, and are staffed by different provider teams.

The notes of physicians working for the hospitalist groups at each of the three hospitals were the focus of the analysis in this study.

Development of the Documentation Assessment Rubric

A team was assembled to develop the Clinical Reasoning in Admission Note Assessment & PLan (CRANAPL) tool. The CRANAPL was designed to assess the comprehensiveness and thoughtfulness of the clinical reasoning documented in the A&P sections of the notes of patients who were admitted to the hospital with an acute illness. Validity evidence for CRANAPL was summarized on the basis of Messick’s unified validity framework by using four of the five sources of validity: content, response process, internal structure, and relations to other variables.17

Content Validity

The development team consisted of members who have an average of 10 years of clinical experience in hospital medicine; have studied clinical excellence and clinical reasoning; and have expertise in feedback, assessment, and professional development.18-22 The development of the CRANAPL tool by the team was informed by a review of the clinical reasoning literature, with particular attention paid to the standards and competencies outlined by the Liaison Committee on Medical Education, the Association of American Medical Colleges, the Accreditation Council on Graduate Medical Education, the Internal Medicine Milestone Project, and the Society of Hospital Medicine.23-26 For each of these parties, diagnostic reasoning and its impact on clinical decision-making are considered to be a core competency. Several works that heavily influenced the CRANAPL tool’s development were Baker’s Interpretive Summary, Differential Diagnosis, Explanation of Reasoning, And Alternatives (IDEA) assessment tool;14 King’s Pediatric History and Physical Exam Evaluation (P-HAPEE) rubric;15 and three other studies related to diagnostic reasoning.16,27,28 These manuscripts and other works substantively informed the preliminary behavioral-based anchors that formed the initial foundation for the tool under development. The CRANAPL tool was shown to colleagues at other institutions who are leaders on clinical reasoning and was presented at academic conferences in the Division of General Internal Medicine and the Division of Hospital Medicine of our institution. Feedback resulted in iterative revisions. The aforementioned methods established content validity evidence for the CRANAPL tool.

Response Process Validity

Several of the authors pilot-tested earlier iterations on admission notes that were excluded from the sample when refining the CRANAPL tool. The weaknesses and sources of confusion with specific items were addressed by scoring 10 A&Ps individually and then comparing data captured on the tool. This cycle was repeated three times for the iterative enhancement and finalization of the CRANAPL tool. On several occasions when two authors were piloting the near-final CRANAPL tool, a third author interviewed each of the two authors about reactivity while assessing individual items and exploring with probes how their own clinical documentation practices were being considered when scoring the notes. The reasonable and thoughtful answers provided by the two authors as they explained and justified the scores they were selecting during the pilot testing served to confer response process validity evidence.

Finalizing the CRANAPL Tool

The nine-item CRANAPL tool includes elements for problem representation, leading diagnosis, uncertainty, differential diagnosis, plans for diagnosis and treatment, estimated length of stay (LOS), potential for upgrade in status to a higher level of care, and consideration of disposition. Although the final three items are not core clinical reasoning domains in the medical education literature, they represent clinical judgments that are especially relevant for the delivery of the high-quality and cost-effective care of hospitalized patients. Given that the probabilities and estimations of these three elements evolve over the course of any hospitalization on the basis of test results and response to therapy, the documentation of initial expectations on these fronts can facilitate distributed cognition with all individuals becoming wiser from shared insights.10 The tool uses two- and three-point rating scales, with each number score being clearly defined by specific written criteria (total score range: 0-14; Appendix).
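As a rough illustration of the tool’s structure, the nine items and their two- and three-point scales can be sketched as a simple scoring map. Only the maxima that appear elsewhere in the paper (summary statement scored out of 2; estimated LOS and upgrade potential scored out of 1) are drawn from the text; the remaining maxima are assumptions chosen so that the total falls in the reported 0-14 range, and the item names are paraphrases, not the rubric’s exact wording.

```python
# Hypothetical sketch of the nine CRANAPL items and their maximum points.
# Maxima marked "assumed" are illustrative choices, not taken from the rubric.
CRANAPL_ITEMS = {
    "problem_representation": 2,    # stated: summary statement scored out of 2
    "leading_diagnosis": 2,         # assumed
    "uncertainty_acknowledged": 1,  # assumed
    "differential_diagnosis": 2,    # assumed
    "plan_for_diagnosis": 2,        # assumed
    "plan_for_treatment": 2,        # assumed
    "estimated_los": 1,             # stated: scored out of 1
    "upgrade_potential": 1,         # stated: scored out of 1
    "disposition": 1,               # assumed
}


def total_score(ratings: dict) -> int:
    """Sum item ratings after checking each against its maximum."""
    for item, value in ratings.items():
        if not 0 <= value <= CRANAPL_ITEMS[item]:
            raise ValueError(f"{item} rating {value} out of range")
    return sum(ratings.values())
```

Under these assumed maxima, the nine items sum to the reported total score range of 0-14.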

Data Collection

Hospitalists’ admission notes from the three hospitals were used to validate the CRANAPL tool. Admission notes of patients admitted to the general medical floors with a diagnosis of fever, syncope/dizziness, or abdominal pain were used. These diagnoses were purposefully chosen because they (1) have a wide differential diagnosis, (2) are common presenting symptoms, and (3) are prone to diagnostic errors.29-32

The centralized EHR system across the three hospitals identified admission notes with one of these primary diagnoses for patients admitted between January 2014 and October 2017. We requested 650 admission notes randomly selected from the centralized institutional records system, stratified by hospital and diagnosis. The sample size of our study was comparable with that of prior psychometric validation studies.33,34 Upon reviewing the A&Ps associated with these admissions, 365 notes were excluded for one of three reasons: (1) the note was written by a nurse practitioner, physician assistant, resident, or medical student; (2) the admission diagnosis had been definitively confirmed in the emergency department (eg, abdominal pain due to diverticulitis seen on CT); or (3) the note was the fourth or subsequent note by a single provider (to sample the notes of many providers, no more than three notes by any single provider were analyzed). A total of 285 admission notes were ultimately included in the sample.
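The per-provider cap in exclusion rule (3) is straightforward to express in code. The following is an illustrative sketch, not the authors’ actual extraction script; the note representation (a dict with a "provider" key) is hypothetical.

```python
from collections import defaultdict


def apply_provider_cap(notes, cap=3):
    """Keep at most `cap` notes per provider, mirroring the study's rule that
    the fourth or subsequent note by any single provider was excluded.
    `notes` is assumed to be in the order in which the notes were sampled."""
    counts = defaultdict(int)
    kept = []
    for note in notes:
        counts[note["provider"]] += 1
        if counts[note["provider"]] <= cap:
            kept.append(note)
    return kept
```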

Data were deidentified, and the A&P sections of the admission notes were each copied from the EHR into a unique Word document. Patient and hospital demographic data (including age, gender, race, number of comorbid conditions, LOS, hospital charges, and readmission to the same health system within 30 days) were collected separately from the EHR. Select physician characteristics were also collected from the hospitalist groups at each of the three hospitals, as was the length (word count) of each A&P.

The study was approved by our institutional review board.

Data Analysis

Two authors scored all deidentified A&Ps using the finalized version of the CRANAPL tool. Before applying the CRANAPL tool to each note, these raters read each A&P and scored it on two single-item rating scales: a global clinical reasoning measure and a global readability/clarity measure. Both global scales used three-point ratings (below average, average, and above average) and captured the reviewers’ gestalt about the quality and clarity of the A&P. The use of gestalt ratings as comparators is supported by other research.35

Descriptive statistics were computed for all variables. Each rater rescored a sample of 48 records (one month after the initial scoring) and intraclass correlations (ICCs) were computed for intrarater reliability. ICCs were calculated for each item and for the CRANAPL total to determine interrater reliability.

The averaged ratings from the two raters were used for all other analyses. For CRANAPL’s internal structure validity evidence, Cronbach’s alpha was calculated as a measure of internal consistency. For relations to other variables validity evidence, CRANAPL total scores were compared with the two global assessment variables with linear regressions.
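The two reliability statistics named here follow standard formulas. As a hedged illustration (not the authors’ code, which used Stata), Cronbach’s alpha and a Shrout-Fleiss ICC(2,1) for absolute agreement can be computed from a score matrix as follows; whether the authors used this exact ICC form is not stated in the text.

```python
import numpy as np


def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, k_items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)


def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater
    (Shrout & Fleiss), for an (n_targets, k_raters) score matrix."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-target means
    col_means = scores.mean(axis=0)   # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-target MS
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-rater MS
    sse = ((scores - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

With perfectly agreeing raters, both statistics equal 1; disagreement pushes them toward (or below) 0.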

Bivariate analyses were performed by applying parametric and nonparametric tests as appropriate. A series of multivariate linear regressions, controlling for diagnosis and clustered variance by hospital site, were performed using CRANAPL total as the dependent variable and patient variables as predictors.
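A minimal sketch of a regression with cluster-robust ("sandwich") standard errors, in the spirit of the clustered-variance models described, might look like the following. The authors used Stata; this NumPy version uses the basic CR0 estimator (no small-sample correction) and is illustrative only.

```python
import numpy as np


def ols_cluster_se(X, y, clusters):
    """OLS coefficients with cluster-robust (CR0 sandwich) standard errors.
    X: (n,) or (n, p) predictors; y: (n,) outcome; clusters: (n,) group labels.
    An intercept column is added automatically."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(clusters):
        Xg, eg = X[clusters == g], resid[clusters == g]
        s = Xg.T @ eg                 # cluster score contribution
        meat += np.outer(s, s)
    cov = bread @ meat @ bread        # sandwich covariance
    return beta, np.sqrt(np.diag(cov))
```

In the study’s setting, `y` would be the CRANAPL total, `X` the patient-level predictors, and `clusters` the hospital site.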

All data were analyzed using Stata (StataCorp. 2013. Stata Statistical Software: Release 13. College Station, Texas: StataCorp LP).

RESULTS

The admission notes of 120 hospitalists were evaluated (Table 1). A total of 39 (33%) physicians were moonlighters with primary appointments outside of the hospitalist division, and 81 (68%) were full-time hospitalists. Among the 120 hospitalists, 48 (40%) were female, 60 (50%) were international medical graduates, and 90 (75%) were of nonwhite race. Most hospitalist physicians (n = 47, 58%) had worked in our health system for less than five years, and 64 hospitalists (53%) devoted greater than 50% of their time to patient care.

Approximately equal numbers of patient admission notes were pulled from each of the three hospitals. The average age of patients was 67.2 (SD 13.6) years, 145 (51%) were female, and 120 (42%) were of nonwhite race. The mean LOS for all patients was 4.0 (SD 3.4) days. A total of 44 (15%) patients were readmitted to the same health system within 30 days of discharge. None of the patients died during the incident hospitalization. The average charge for each of the hospitalizations was $10,646 (SD $9,964).

CRANAPL Data

Figure 1 shows the distribution of the scores given by each rater for each of the nine items. The mean total CRANAPL score across both raters was 6.4 (SD 2.2). Scores for some items were high (eg, summary statement: 1.5/2), whereas scores for others were low (eg, estimating LOS: 0.1/1; describing the potential need for upgrade in care: 0.0/1).

Validity of the CRANAPL Tool’s Internal Structure

Cronbach’s alpha, which was used to measure internal consistency within the CRANAPL tool, was 0.43. The ICC, which was applied to measure the interrater reliability for both raters for the total CRANAPL score, was 0.83 (95% CI: 0.76-0.87). The ICC values for intrarater reliability for raters 1 and 2 were 0.73 (95% CI: 0.60-0.83) and 0.73 (95% CI: 0.45-0.86), respectively.

Relations to Other Variables Validity

Associations between CRANAPL total scores, global clinical reasoning scores, and global note readability/clarity scores were statistically significant (P < .001; Figure 2).

When data were analyzed by hospital site, eight of the nine CRANAPL variables differed significantly across the three hospitals (P < .01). Hospital C had the highest mean total score (7.4, SD 2.0), followed by Hospital B (6.6, SD 2.1) and Hospital A (5.2, SD 1.9); this difference was statistically significant (P < .001). Five variables (uncertainty acknowledged, differential diagnosis, plan for diagnosis, plan for treatment, and upgrade plan) also differed significantly across admission diagnoses. Notes for syncope/dizziness generally scored higher than those for abdominal pain and fever.

Factors Associated with High CRANAPL Scores

Table 2 shows the associations between CRANAPL scores and several covariates. Before adjustment, high CRANAPL scores were associated with high word counts of A&Ps (P < .001) and high hospital charges (P < .05). These associations were no longer significant after adjusting for hospital site and admitting diagnoses.

DISCUSSION

We reviewed the documentation of clinical reasoning in 285 admission notes at three different hospitals written by hospitalist physicians during routine clinical care. To our knowledge, this is the first study that assessed the documentation of hospitalists’ clinical reasoning with real patient notes. Wide variability exists in the documentation of clinical reasoning within the A&Ps of hospitalists’ admission notes. We have provided validity evidence to support the use of the user-friendly CRANAPL tool.

Prior studies have described rubrics for evaluating the clinical reasoning skills of medical students.14,15 The ICCs for the IDEA rubric used to assess medical students’ documentation of clinical reasoning were fair to moderate (0.29-0.67), whereas the ICC for the CRANAPL tool was high at 0.83. This measure of reliability is similar to that for the P-HAPEE rubric used to assess medical students’ documentation of pediatric history and physical notes.15 These data differ markedly from those of previous studies that found low interrater reliability for psychometric evaluations related to judgment and decision-making.36-39 CRANAPL was also found to have high intrarater reliability, which shows the reproducibility of an individual’s assessment over time. The strong association between the total CRANAPL score and the global clinical reasoning assessment found in the present study is similar to that found in previous studies that embedded global rating scales as comparators when assessing clinical reasoning.13,15,40,41 Global rating scales offer an overarching structure for comparison given the absence of an accepted method or gold standard for assessing clinical reasoning documentation. High-quality provider notes are defined by clarity, thoroughness, and accuracy,35 and effective documentation promotes communication and the coordination of care among the members of the care team.3

The total CRANAPL scores varied by hospital site, with the academic hospitals (B and C) scoring higher than the community hospital (A). Similarly, lengthier A&Ps were associated with higher CRANAPL scores (P < .001) before adjustment for hospital site. Healthcare providers regard thorough documentation as a marker of quality and attention to detail.35,42 Comprehensive documentation takes time; the longer notes written by academic hospitalists may reflect the smaller number of patients generally carried by hospitalists at academic centers compared with those at community hospitals.43

The documentation of estimated LOS, the possibility of upgrade, and thoughts about disposition was consistently poor across all hospital sites and diagnoses. In contrast to CRANAPL, other clinical reasoning rubrics have not included these items or addressed uncertainty.14,15,44 These elements represent the forward thinking that may be essential for high-quality progressive care by hospitalists. Physicians’ difficulty in acknowledging uncertainty has been associated with resource overuse, including the excessive ordering of tests, iatrogenic injury, and heavy financial burden on the healthcare system.45,46 The lack of thoughtful clinical and management reasoning at the time of admission is believed to be associated with medical errors.47 If used as a guide, the CRANAPL tool may promote reflection on the part of the admitting physician. The estimations of LOS, potential for upgrade to a higher level of care, and disposition are markers of optimal inpatient care, especially for hospitalists who work in shifts with embedded handoffs. When shared with colleagues through documentation, there is the potential for distributed cognition10 to extend throughout the social network of the hospitalist group. That so few providers currently include these items in their A&Ps shows that providers are either not performing this reasoning or not documenting it. Either way, this is an opportunity highlighted by the CRANAPL tool.

Several limitations of this study should be considered. First, the CRANAPL tool may not have captured all elements of optimal clinical reasoning documentation; the reliance on multiple methods and an iterative refinement process should have minimized this. Second, this study was conducted across a single healthcare system that uses the same EHR, and this EHR or the institutional culture may influence documentation practices and behaviors. Because scoring an A&P with the CRANAPL tool is quick and easy, the benefit of giving providers feedback on their notes, here and at other hospitals, remains to be seen. Third, our sample size could limit the generalizability of the results and the significance of the associations; however, the sample assessed in our study was substantially larger than those assessed in other studies validating clinical reasoning rubrics.14,15 Fourth, clinical reasoning is a broad and multidimensional construct. The CRANAPL tool focuses exclusively on hospitalists’ documentation of clinical reasoning and therefore does not assess aspects of clinical reasoning occurring in the physicians’ minds. Finally, given our goal to optimally validate the CRANAPL tool, we chose to test it on presentations known to be associated with diagnostic practice variation and errors; we may have observed different results with a different set of diagnoses at each hospital. Further validity evidence will be established by applying the CRANAPL tool to other diagnoses and to notes from other clinical settings.

In conclusion, this study focuses on the development and validation of the CRANAPL tool, which assesses how hospitalists document their clinical reasoning in the A&P section of admission notes. Our results show that wide variability exists in the documentation of clinical reasoning by hospitalists within and across hospitals. Given the CRANAPL tool’s ease of use and versatility, hospitalist divisions in academic and nonacademic settings may use it to assess and provide feedback on the documentation of hospitalists’ clinical reasoning. Beyond studying whether physicians can be taught to improve their notes with feedback based on the CRANAPL tool, future studies may explore whether enhancing clinical reasoning documentation is associated with improvements in patient care and clinical outcomes.

Acknowledgments

Dr. Wright is the Anne Gaines and G. Thomas Miller Professor of Medicine which is supported through Hopkins’ Center for Innovative Medicine.

The authors thank Christine Caufield-Noll, MLIS, AHIP (Johns Hopkins Bayview Medical Center, Baltimore, Maryland) for her assistance with this project.

Disclosures

The authors have nothing to disclose.

Approximately 60,000 hospitalists were working in the United States in 2018.1 Hospitalist groups work collaboratively because of the shiftwork required for 24/7 patient coverage, and first-rate clinical documentation is essential for quality care.2 Thoughtful clinical documentation not only transmits one provider’s clinical reasoning to other providers but is a professional responsibility.3 Hospitalists spend two-thirds of their time in indirect patient-care activities and approximately one quarter of their time on documentation in electronic health records (EHRs).4 Despite documentation occupying a substantial portion of the clinician’s time, published literature on the best practices for the documentation of clinical reasoning in hospital medicine or its assessment remains scant.5-7

Clinical reasoning involves establishing a diagnosis and developing a therapeutic plan that fits the unique circumstances and needs of the patient.8 Inpatient providers who admit patients to the hospital end the admission note with their assessment and plan (A&P) after reflecting about a patient’s presenting illness. The A&P generally represents the interpretations, deductions, and clinical reasoning of the inpatient providers; this is the section of the note that fellow physicians concentrate on over others.9 The documentation of clinical reasoning in the A&P allows for many to consider how the recorded interpretations relate to their own elucidations resulting in distributed cognition.10

Disorganized documentation can contribute to cognitive overload and impede thoughtful consideration about the clinical presentation.3 The assessment of clinical documentation may translate into reduced medical errors and improved note quality.11,12 Studies that have formally evaluated the documentation of clinical reasoning have focused exclusively on medical students.13-15 The nonexistence of a detailed rubric for evaluating clinical reasoning in the A&Ps of hospitalists represents a missed opportunity for evaluating what hospitalists “do”; if this evolves into a mechanism for offering formative feedback, such professional development would impact the highest level of Miller’s assessment pyramid.16 We therefore undertook this study to establish a metric to assess the hospitalist providers’ documentation of clinical reasoning in the A&P of an admission note.

METHODS

Study Design, Setting, and Subjects

This was a retrospective study that reviewed the admission notes of hospitalists for patients admitted over the period of January 2014 and October 2017 at three hospitals in Maryland. One is a community hospital (Hospital A) and two are academic medical centers (Hospital B and Hospital C). Even though these three hospitals are part of one health system, they have distinct cultures and leadership, serve different populations, and are staffed by different provider teams.

 

 

The notes of physicians working for the hospitalist groups at each of the three hospitals were the focus of the analysis in this study.

Development of the Documentation Assessment Rubric

A team was assembled to develop the Clinical Reasoning in Admission Note Assessment & PLan (CRANAPL) tool. The CRANAPL was designed to assess the comprehensiveness and thoughtfulness of the clinical reasoning documented in the A&P sections of the notes of patients who were admitted to the hospital with an acute illness. Validity evidence for CRANAPL was summarized on the basis of Messick’s unified validity framework by using four of the five sources of validity: content, response process, internal structure, and relations to other variables.17

Content Validity

The development team consisted of members who have an average of 10 years of clinical experience in hospital medicine; have studied clinical excellence and clinical reasoning; and have expertise in feedback, assessment, and professional development.18-22 The development of the CRANAPL tool by the team was informed by a review of the clinical reasoning literature, with particular attention paid to the standards and competencies outlined by the Liaison Committee on Medical Education, the Association of American Medical Colleges, the Accreditation Council on Graduate Medical Education, the Internal Medicine Milestone Project, and the Society of Hospital Medicine.23-26 For each of these parties, diagnostic reasoning and its impact on clinical decision-making are considered to be a core competency. Several works that heavily influenced the CRANAPL tool’s development were Baker’s Interpretive Summary, Differential Diagnosis, Explanation of Reasoning, And Alternatives (IDEA) assessment tool;14 King’s Pediatric History and Physical Exam Evaluation (P-HAPEE) rubric;15 and three other studies related to diagnostic reasoning.16,27,28 These manuscripts and other works substantively informed the preliminary behavioral-based anchors that formed the initial foundation for the tool under development. The CRANAPL tool was shown to colleagues at other institutions who are leaders on clinical reasoning and was presented at academic conferences in the Division of General Internal Medicine and the Division of Hospital Medicine of our institution. Feedback resulted in iterative revisions. The aforementioned methods established content validity evidence for the CRANAPL tool.

Response Process Validity

Several of the authors pilot-tested earlier iterations on admission notes that were excluded from the sample when refining the CRANAPL tool. The weaknesses and sources of confusion with specific items were addressed by scoring 10 A&Ps individually and then comparing data captured on the tool. This cycle was repeated three times for the iterative enhancement and finalization of the CRANAPL tool. On several occasions when two authors were piloting the near-final CRANAPL tool, a third author interviewed each of the two authors about reactivity while assessing individual items and exploring with probes how their own clinical documentation practices were being considered when scoring the notes. The reasonable and thoughtful answers provided by the two authors as they explained and justified the scores they were selecting during the pilot testing served to confer response process validity evidence.

Finalizing the CRANAPL Tool

The nine-item CRANAPL tool includes elements for problem representation, leading diagnosis, uncertainty, differential diagnosis, plans for diagnosis and treatment, estimated length of stay (LOS), potential for upgrade in status to a higher level of care, and consideration of disposition. Although the final three items are not core clinical reasoning domains in the medical education literature, they represent clinical judgments that are especially relevant for the delivery of the high-quality and cost-effective care of hospitalized patients. Given that the probabilities and estimations of these three elements evolve over the course of any hospitalization on the basis of test results and response to therapy, the documentation of initial expectations on these fronts can facilitate distributed cognition with all individuals becoming wiser from shared insights.10 The tool uses two- and three-point rating scales, with each number score being clearly defined by specific written criteria (total score range: 0-14; Appendix).

 

 

Data Collection

Hospitalists’ admission notes from the three hospitals were used to validate the CRANAPL tool. Admission notes from patients hospitalized to the general medical floors with an admission diagnosis of either fever, syncope/dizziness, or abdominal pain were used. These diagnoses were purposefully examined because they (1) have a wide differential diagnosis, (2) are common presenting symptoms, and (3) are prone to diagnostic errors.29-32

The centralized EHR system across the three hospitals identified admission notes with one of these primary diagnoses of patients admitted over the period of January 2014 to October 2017. We submitted a request for 650 admission notes to be randomly selected from the centralized institutional records system. The notes were stratified by hospital and diagnosis. The sample size of our study was comparable with that of prior psychometric validation studies.33,34 Upon reviewing the A&Ps associated with these admissions, 365 notes were excluded for one of three reasons: (1) the note was written by a nurse practitioner, physician assistant, resident, or medical student; (2) the admission diagnosis had been definitively confirmed in the emergency department (eg, abdominal pain due to diverticulitis seen on CT); and (3) the note represented the fourth or more note by any single provider (to sample notes of many providers, no more than three notes written by any single provider were analyzed). A total of 285 admission notes were ultimately included in the sample.

Data were deidentified, and the A&P sections of the admission notes were each copied from the EHR into a unique Word document. Patient and hospital demographic data (including age, gender, race, number of comorbid conditions, LOS, hospital charges, and readmission to the same health system within 30 days) were collected separately from the EHR. Select physician characteristics were also collected from the hospitalist groups at each of the three hospitals, as was the length (word count) of each A&P.

The study was approved by our institutional review board.

Data Analysis

Two authors scored all deidentified A&Ps by using the finalized version of the CRANAPL tool. Prior to using the CRANAPL tool on each of the notes, these raters read each A&P and scored them by using two single-item rating scales: a global clinical reasoning and a global readability/clarity measure. Both of these global scales used three-item Likert scales (below average, average, and above average). These global rating scales collected the reviewers’ gestalt about the quality and clarity of the A&P. The use of gestalt ratings as comparators is supported by other research.35

Descriptive statistics were computed for all variables. Each rater rescored a sample of 48 records (one month after the initial scoring) and intraclass correlations (ICCs) were computed for intrarater reliability. ICCs were calculated for each item and for the CRANAPL total to determine interrater reliability.

The averaged ratings from the two raters were used for all other analyses. For CRANAPL’s internal structure validity evidence, Cronbach’s alpha was calculated as a measure of internal consistency. For relations to other variables validity evidence, CRANAPL total scores were compared with the two global assessment variables with linear regressions.

Bivariate analyses were performed by applying parametric and nonparametric tests as appropriate. A series of multivariate linear regressions, controlling for diagnosis and clustered variance by hospital site, were performed using CRANAPL total as the dependent variable and patient variables as predictors.

All data were analyzed using Stata (StataCorp. 2013. Stata Statistical Software: Release 13. College Station, Texas: StataCorp LP.)

 

 

RESULTS

The admission notes of 120 hospitalists were evaluated (Table 1). A total of 39 (33%) physicians were moonlighters with primary appointments outside of the hospitalist division, and 81 (68%) were full-time hospitalists. Among the 120 hospitalists, 48 (40%) were female, 60 (50%) were international medical graduates, and 90 (75%) were of nonwhite race. Most hospitalist physicians (n = 47, 58%) had worked in our health system for less than five years, and 64 hospitalists (53%) devoted greater than 50% of their time to patient care.

Approximately equal numbers of patient admission notes were pulled from each of the three hospitals. The average age of patients was 67.2 (SD 13.6) years, 145 (51%) were female, and 120 (42%) were of nonwhite race. The mean LOS for all patients was 4.0 (SD 3.4) days. A total of 44 (15%) patients were readmitted to the same health system within 30 days of discharge. None of the patients died during the incident hospitalization. The average charge for each of the hospitalizations was $10,646 (SD $9,964).

CRANAPL Data

Figure 1 shows the distribution of the scores given by each rater for each of the nine items. The mean of the total CRANAPL score given by both raters was 6.4 (SD 2.2). Scoring for some items were high (eg, summary statement: 1.5/2), whereas performance on others were low (eg, estimating LOS: 0.1/1 and describing the potential need for upgrade in care: 0.0/1).

Validity of the CRANAPL Tool’s Internal Structure

Cronbach’s alpha, a measure of internal consistency within the CRANAPL tool, was 0.43. The ICC, used to measure interrater reliability between the two raters for the total CRANAPL score, was 0.83 (95% CI: 0.76-0.87). The ICC values for intrarater reliability for raters 1 and 2 were 0.73 (95% CI: 0.60-0.83) and 0.73 (95% CI: 0.45-0.86), respectively.
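Both statistics have short closed-form computations: Cronbach’s alpha compares the sum of the item variances with the variance of the total score, and ICC(2,1) is built from two-way ANOVA mean squares. The Python sketch below is illustrative only; the scores are toy numbers, not study data.

```python
from statistics import mean, variance

def cronbach_alpha(scores):
    """Internal consistency of k items.

    scores: one list per subject (note), each containing k item scores.
    """
    k = len(scores[0])
    item_vars = [variance([subj[i] for subj in scores]) for i in range(k)]
    total_var = variance([sum(subj) for subj in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1).

    ratings: one list per subject (note), each containing k rater scores.
    """
    n, k = len(ratings), len(ratings[0])
    grand = mean(x for row in ratings for x in row)
    row_means = [mean(row) for row in ratings]
    col_means = [mean(row[j] for row in ratings) for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((r - grand) ** 2 for r in row_means)    # between-subject
    ss_cols = n * sum((c - grand) ** 2 for c in col_means)    # between-rater
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

alpha = cronbach_alpha([[1, 2], [2, 1], [3, 3]])   # toy item scores
icc = icc_2_1([[6, 7], [4, 4], [8, 9], [3, 3]])    # two raters, close agreement
```

A modest alpha can coexist with a high ICC: raters may agree closely on total scores even when the individual items do not measure a single homogeneous construct.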

Relations to Other Variables Validity

Associations between CRANAPL total scores and the global scores for clinical reasoning and for note readability/clarity were statistically significant (P < .001; Figure 2).

When data were analyzed by hospital site, eight of the nine CRANAPL variables differed significantly across the three hospitals (P < .01). Hospital C had the highest mean total score (7.4, SD 2.0), followed by Hospital B (6.6, SD 2.1) and Hospital A (5.2, SD 1.9); this difference was statistically significant (P < .001). Five variables (uncertainty acknowledged, differential diagnosis, plan for diagnosis, plan for treatment, and upgrade plan) differed significantly across admitting diagnoses. Notes for syncope/dizziness generally scored higher than those for abdominal pain and fever.

Factors Associated with High CRANAPL Scores

Table 2 shows the associations between CRANAPL scores and several covariates. Before adjustment, high CRANAPL scores were associated with high word counts of A&Ps (P < .001) and high hospital charges (P < .05). These associations were no longer significant after adjusting for hospital site and admitting diagnoses.

DISCUSSION

We reviewed the documentation of clinical reasoning in 285 admission notes at three different hospitals written by hospitalist physicians during routine clinical care. To our knowledge, this is the first study that assessed the documentation of hospitalists’ clinical reasoning with real patient notes. Wide variability exists in the documentation of clinical reasoning within the A&Ps of hospitalists’ admission notes. We have provided validity evidence to support the use of the user-friendly CRANAPL tool.

Prior studies have described rubrics for evaluating the clinical reasoning skills of medical students.14,15 The ICCs for the IDEA rubric used to assess medical students’ documentation of clinical reasoning were fair to moderate (0.29-0.67), whereas the ICC for the CRANAPL tool was high at 0.83. This measure of reliability is similar to that for the P-HAPEE rubric used to assess medical students’ documentation of pediatric history and physical notes.15 These data are markedly different from those in previous studies that have found low interrater reliability for psychometric evaluations related to judgment and decision-making.36-39 CRANAPL was also found to have high intrarater reliability, which shows the reproducibility of an individual’s assessment over time. The strong association between the total CRANAPL score and the global clinical reasoning assessment found in the present study is similar to that found in previous studies that have also embedded global rating scales as comparators when assessing clinical reasoning.13,15,40,41 Global rating scales represent an overarching structure for comparison given the absence of an accepted method or gold standard for assessing clinical reasoning documentation. High-quality provider notes are defined by clarity, thoroughness, and accuracy,35 and effective documentation promotes communication and the coordination of care among the members of the care team.3

The total CRANAPL scores varied by hospital site, with the academic hospitals (B and C) scoring higher than the community hospital (A) in our study. Similarly, longer A&Ps were associated with higher CRANAPL scores (P < .001) before adjustment for hospital site. Healthcare providers regard thorough documentation as a marker of quality and attention to detail.35,42 Comprehensive documentation takes time; the longer notes written by academic hospitalists may reflect the smaller patient censuses generally carried by hospitalists at academic centers compared with those at community hospitals.43

The documentation of estimated LOS, the possibility of potential upgrade, and thoughts about disposition was consistently poor across all hospital sites and diagnoses. In contrast to CRANAPL, other clinical reasoning rubrics have not included these items or discussions of uncertainty.14,15,44 These elements represent the forward thinking that may be essential for high-quality progressive care by hospitalists. Physicians’ difficulty in acknowledging uncertainty has been associated with resource overuse, including excessive test ordering, iatrogenic injury, and heavy financial burden on the healthcare system.45,46 A lack of thoughtful clinical and management reasoning at the time of admission is believed to be associated with medical errors.47 If used as a guide, the CRANAPL tool may promote reflection by the admitting physician. Estimations of LOS, potential for upgrade to a higher level of care, and disposition are markers of optimal inpatient care, especially for hospitalists who work in shifts with embedded handoffs. When shared with colleagues through documentation, there is the potential for distributed cognition10 to extend throughout the social network of the hospitalist group. That so few providers currently include these items in their A&Ps shows that providers are either not performing or not documenting this reasoning; either way, this is an opportunity highlighted by the CRANAPL tool.

Several limitations of this study should be considered. First, the CRANAPL tool may not have captured all elements of optimal clinical reasoning documentation; the reliance on multiple methods and an iterative refinement process should have minimized this risk. Second, this study was conducted within a single healthcare system that uses the same EHR, and this EHR or institutional culture may influence documentation practices and behaviors. Although scoring an A&P with the CRANAPL tool is quick and easy, the benefit of giving providers feedback on their notes, at our hospitals and elsewhere, remains to be seen. Third, our sample size could limit the generalizability of the results and the significance of the associations. However, the sample assessed in our study was substantially larger than those in other studies validating clinical reasoning rubrics.14,15 Fourth, clinical reasoning is a broad and multidimensional construct. The CRANAPL tool focuses exclusively on hospitalists’ documentation of clinical reasoning and therefore does not assess the reasoning occurring in physicians’ minds. Finally, to optimally validate the CRANAPL tool, we deliberately tested it on presentations known to be associated with diagnostic practice variation and errors. We may have observed different results had we chosen a different set of diagnoses from each hospital. Further validity evidence will be established by applying the CRANAPL tool to other diagnoses and to notes from other clinical settings.

In conclusion, this study describes the development and validation of the CRANAPL tool, which assesses how hospitalists document their clinical reasoning in the A&P section of admission notes. Our results show wide variability in the documentation of clinical reasoning by hospitalists within and across hospitals. Given the CRANAPL tool’s ease of use and versatility, hospitalist divisions in academic and nonacademic settings may use it to assess and provide feedback on the documentation of hospitalists’ clinical reasoning. Beyond studying whether physicians can be taught to improve their notes with feedback based on the CRANAPL tool, future studies may explore whether enhancing clinical reasoning documentation is associated with improvements in patient care and clinical outcomes.

Acknowledgments

Dr. Wright is the Anne Gaines and G. Thomas Miller Professor of Medicine, which is supported through the Johns Hopkins Center for Innovative Medicine.

The authors thank Christine Caufield-Noll, MLIS, AHIP (Johns Hopkins Bayview Medical Center, Baltimore, Maryland) for her assistance with this project.

Disclosures

The authors have nothing to disclose.


References

1. State of Hospital Medicine. Society of Hospital Medicine. https://www.hospitalmedicine.org/practice-management/shms-state-of-hospital-medicine/. Accessed August 19, 2018.
2. Mehta R, Radhakrishnan NS, Warring CD, et al. The use of evidence-based, problem-oriented templates as a clinical decision support in an inpatient electronic health record system. Appl Clin Inform. 2016;7(3):790-802. https://doi.org/10.4338/ACI-2015-11-RA-0164
3. Improving Diagnosis in Healthcare: Health and Medicine Division. http://www.nationalacademies.org/hmd/Reports/2015/Improving-Diagnosis-in-Healthcare.aspx. Accessed August 7, 2018.
4. Tipping MD, Forth VE, O’Leary KJ, et al. Where did the day go? A time-motion study of hospitalists. J Hosp Med. 2010;5(6):323-328. https://doi.org/10.1002/jhm.790
5. Varpio L, Rashotte J, Day K, King J, Kuziemsky C, Parush A. The EHR and building the patient’s story: a qualitative investigation of how EHR use obstructs a vital clinical activity. Int J Med Inform. 2015;84(12):1019-1028. https://doi.org/10.1016/j.ijmedinf.2015.09.004
6. Clynch N, Kellett J. Medical documentation: part of the solution, or part of the problem? A narrative review of the literature on the time spent on and value of medical documentation. Int J Med Inform. 2015;84(4):221-228. https://doi.org/10.1016/j.ijmedinf.2014.12.001
7. Varpio L, Day K, Elliot-Miller P, et al. The impact of adopting EHRs: how losing connectivity affects clinical reasoning. Med Educ. 2015;49(5):476-486. https://doi.org/10.1111/medu.12665
8. McBee E, Ratcliffe T, Schuwirth L, et al. Context and clinical reasoning: understanding the medical student perspective. Perspect Med Educ. 2018;7(4):256-263. https://doi.org/10.1007/s40037-018-0417-x
9. Brown PJ, Marquard JL, Amster B, et al. What do physicians read (and ignore) in electronic progress notes? Appl Clin Inform. 2014;5(2):430-444. https://doi.org/10.4338/ACI-2014-01-RA-0003
10. Katherine D, Shalin VL. Creating a common trajectory: shared decision making and distributed cognition in medical consultations. https://pxjournal.org/cgi/viewcontent.cgi?article=1116&context=journal. Accessed April 4, 2019.
11. Harchelroad FP, Martin ML, Kremen RM, Murray KW. Emergency department daily record review: a quality assurance system in a teaching hospital. QRB Qual Rev Bull. 1988;14(2):45-49. https://doi.org/10.1016/S0097-5990(16)30187-7.
12. Opila DA. The impact of feedback to medical housestaff on chart documentation and quality of care in the outpatient setting. J Gen Intern Med. 1997;12(6):352-356. https://doi.org/10.1007/s11606-006-5083-8.
13. Smith S, Kogan JR, Berman NB, Dell MS, Brock DM, Robins LS. The development and preliminary validation of a rubric to assess medical students’ written summary statements in virtual patient cases. Acad Med. 2016;91(1):94-100. https://doi.org/10.1097/ACM.0000000000000800
14. Baker EA, Ledford CH, Fogg L, Way DP, Park YS. The IDEA assessment tool: assessing the reporting, diagnostic reasoning, and decision-making skills demonstrated in medical students’ hospital admission notes. Teach Learn Med. 2015;27(2):163-173. https://doi.org/10.1080/10401334.2015.1011654
15. King MA, Phillipi CA, Buchanan PM, Lewin LO. Developing validity evidence for the written pediatric history and physical exam evaluation rubric. Acad Pediatr. 2017;17(1):68-73. https://doi.org/10.1016/j.acap.2016.08.001
16. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9):S63-S67.
17. Messick S. Standards of validity and the validity of standards in performance asessment. Educ Meas Issues Pract. 2005;14(4):5-8. https://doi.org/10.1111/j.1745-3992.1995.tb00881.x
18. Menachery EP, Knight AM, Kolodner K, Wright SM. Physician characteristics associated with proficiency in feedback skills. J Gen Intern Med. 2006;21(5):440-446. https://doi.org/10.1111/j.1525-1497.2006.00424.x
19. Tackett S, Eisele D, McGuire M, Rotello L, Wright S. Fostering clinical excellence across an academic health system. South Med J. 2016;109(8):471-476. https://doi.org/10.14423/SMJ.0000000000000498
20. Christmas C, Kravet SJ, Durso SC, Wright SM. Clinical excellence in academia: perspectives from masterful academic clinicians. Mayo Clin Proc. 2008;83(9):989-994. https://doi.org/10.4065/83.9.989
21. Wright SM, Kravet S, Christmas C, Burkhart K, Durso SC. Creating an academy of clinical excellence at Johns Hopkins Bayview Medical Center: a 3-year experience. Acad Med. 2010;85(12):1833-1839. https://doi.org/10.1097/ACM.0b013e3181fa416c
22. Kotwal S, Peña I, Howell E, Wright S. Defining clinical excellence in hospital medicine: a qualitative study. J Contin Educ Health Prof. 2017;37(1):3-8. https://doi.org/10.1097/CEH.0000000000000145
23. Common Program Requirements. https://www.acgme.org/What-We-Do/Accreditation/Common-Program-Requirements. Accessed August 21, 2018.
24. Warren J, Lupi C, Schwartz ML, et al. Core Entrustable Professional Activities for Entering Residency: EPA 9 Toolkit. AAMC; 2017. https://www.aamc.org/download/482204/data/epa9toolkit.pdf. Accessed August 21, 2018.
25. The Internal Medicine Milestone Project. https://www.abim.org/~/media/ABIM Public/Files/pdf/milestones/internal-medicine-milestones-project.pdf. Accessed August 21, 2018.
26. Core Competencies. Society of Hospital Medicine. https://www.hospitalmedicine.org/professional-development/core-competencies/. Accessed August 21, 2018.
27. Bowen JL. Educational strategies to promote clinical diagnostic reasoning. Cox M, Irby DM, eds. N Engl J Med. 2006;355(21):2217-2225. https://doi.org/10.1056/NEJMra054782
28. Pangaro L. A new vocabulary and other innovations for improving descriptive in-training evaluations. Acad Med. 1999;74(11):1203-1207. https://doi.org/10.1097/00001888-199911000-00012.
29. Rao G, Epner P, Bauer V, Solomonides A, Newman-Toker DE. Identifying and analyzing diagnostic paths: a new approach for studying diagnostic practices. Diagnosis Berlin, Ger. 2017;4(2):67-72. https://doi.org/10.1515/dx-2016-0049
30. Ely JW, Kaldjian LC, D’Alessandro DM. Diagnostic errors in primary care: lessons learned. J Am Board Fam Med. 2012;25(1):87-97. https://doi.org/10.3122/jabfm.2012.01.110174
31. Kerber KA, Newman-Toker DE. Misdiagnosing dizzy patients: common pitfalls in clinical practice. Neurol Clin. 2015;33(3):565-75, viii. https://doi.org/10.1016/j.ncl.2015.04.009
32. Singh H, Giardina TD, Meyer AND, Forjuoh SN, Reis MD, Thomas EJ. Types and origins of diagnostic errors in primary care settings. JAMA Intern Med. 2013;173(6):418. https://doi.org/10.1001/jamainternmed.2013.2777.
33. Kahn D, Stewart E, Duncan M, et al. A prescription for note bloat: an effective progress note template. J Hosp Med. 2018;13(6):378-382. https://doi.org/10.12788/jhm.2898
34. Anthoine E, Moret L, Regnault A, Sébille V, Hardouin J-B. Sample size used to validate a scale: a review of publications on newly-developed patient reported outcomes measures. Health Qual Life Outcomes. 2014;12(1):176. https://doi.org/10.1186/s12955-014-0176-2
35. Stetson PD, Bakken S, Wrenn JO, Siegler EL. Assessing electronic note quality using the physician documentation quality instrument (PDQI-9). Appl Clin Inform. 2012;3(2):164-174. https://doi.org/10.4338/ACI-2011-11-RA-0070
36. Govaerts MJB, Schuwirth LWT, Van der Vleuten CPM, Muijtjens AMM. Workplace-based assessment: effects of rater expertise. Adv Health Sci Educ Theory Pract. 2011;16(2):151-165. https://doi.org/10.1007/s10459-010-9250-7
37. Kreiter CD, Ferguson KJ. Examining the generalizability of ratings across clerkships using a clinical evaluation form. Eval Health Prof. 2001;24(1):36-46. https://doi.org/10.1177/01632780122034768
38. Middleman AB, Sunder PK, Yen AG. Reliability of the history and physical assessment (HAPA) form. Clin Teach. 2011;8(3):192-195. https://doi.org/10.1111/j.1743-498X.2011.00459.x
39. Kogan JR, Shea JA. Psychometric characteristics of a write-up assessment form in a medicine core clerkship. Teach Learn Med. 2005;17(2):101-106. https://doi.org/10.1207/s15328015tlm1702_2
40. Lewin LO, Beraho L, Dolan S, Millstein L, Bowman D. Interrater reliability of an oral case presentation rating tool in a pediatric clerkship. Teach Learn Med. 2013;25(1):31-38. https://doi.org/10.1080/10401334.2012.741537
41. Gray JD. Global rating scales in residency education. Acad Med. 1996;71(1):S55-S63.
42. Rosenbloom ST, Crow AN, Blackford JU, Johnson KB. Cognitive factors influencing perceptions of clinical documentation tools. J Biomed Inform. 2007;40(2):106-113. https://doi.org/10.1016/j.jbi.2006.06.006
43. Michtalik HJ, Pronovost PJ, Marsteller JA, Spetz J, Brotman DJ. Identifying potential predictors of a safe attending physician workload: a survey of hospitalists. J Hosp Med. 2013;8(11):644-646. https://doi.org/10.1002/jhm.2088
44. Seo J-H, Kong H-H, Im S-J, et al. A pilot study on the evaluation of medical student documentation: assessment of SOAP notes. Korean J Med Educ. 2016;28(2):237-241. https://doi.org/10.3946/kjme.2016.26
45. Kassirer JP. Our stubborn quest for diagnostic certainty. A cause of excessive testing. N Engl J Med. 1989;320(22):1489-1491. https://doi.org/10.1056/NEJM198906013202211
46. Hatch S. Uncertainty in medicine. BMJ. 2017;357:j2180. https://doi.org/10.1136/bmj.j2180
47. Cook DA, Sherbino J, Durning SJ. Management reasoning. JAMA. 2018;319(22):2267. https://doi.org/10.1001/jama.2018.4385


Issue
Journal of Hospital Medicine 14(12)
Page Number
746-753. Published online first June 11, 2019
Article Source

© 2019 Society of Hospital Medicine

Correspondence Location
Susrutha Kotwal, MD; E-mail: Skotwal1@jhmi.edu; Telephone: 410-550-5018; Fax: 410-550-2972; Twitter: @KotwalSusrutha

The Current State of Advanced Practice Provider Fellowships in Hospital Medicine: A Survey of Program Directors


Postgraduate training for physician assistants (PAs) and nurse practitioners (NPs) is a rapidly evolving field. It has been estimated that the number of these advanced practice providers (APPs) almost doubled between 2000 and 2016 (from 15.3 to 28.2 per 100 physicians) and is expected to double again by 2030.1 As APPs continue to become a progressively larger part of the healthcare workforce, medical organizations are seeking more comprehensive strategies to train and mentor them.2 This has led to the development of formal postgraduate programs, often called APP fellowships.

Historically, postgraduate APP fellowships have functioned to help bridge the gap in clinical practice experience between physicians and APPs.3 This gap is evident in hours of clinical training. Whereas NPs are generally expected to complete 500-1,500 hours of clinical practice before graduating,4 and PAs are expected to complete 2,000 hours,5 most physicians will complete over 15,000 hours of clinical training by the end of residency.6 As increasing patient complexity continues to challenge the healthcare workforce,7 both the NP and the PA leadership have recommended increased training of graduates and outcome studies of formal postgraduate fellowships.8,9 In 2007, there were over 60 of these programs in the United States,10 most of them offering training in surgical specialties.

First described in 2010 by the Mayo Clinic,11 APP fellowships in hospital medicine are also being developed. These programs are built to improve the training of nonphysician hospitalists, who often work independently12 and manage medically complex patients.13 However, little is known about the number or structure of these fellowships. The limited understanding of the current APP fellowship environment is partly due to the lack of an administrative body overseeing these programs.14 The Accreditation Review Commission on Education for the Physician Assistant (ARC-PA) pioneered a model in 2007 for postgraduate PA programs, but it has been held in abeyance since 2014.15 Both the American Nurses Credentialing Center and the National Nurse Practitioner Residency and Fellowship Training Consortium have fellowship accreditation review processes, but they are not specific to hospital medicine.16 The Society of Hospital Medicine (SHM) has several resources for the training of APPs;17 however, it neither reviews nor accredits fellowship programs. Without standards, guidelines, or active accrediting bodies, APP fellowships in hospital medicine are poorly understood and are of unknown efficacy. The purpose of this study was to identify and describe the active APP fellowships in hospital medicine.

METHODS

This was a cross-sectional study of all APP adult and pediatric fellowships in hospital medicine, in the United States, that were identifiable through May 2018. Multiple methods were used to identify all active fellowships. First, all training programs offering a Hospital Medicine Fellowship in the ARC-PA and Association of Postgraduate PA Programs databases were noted. Second, questionnaires were given out at the NP/PA forum at the national SHM conference in 2018 to gather information on existing APP fellowships. Third, similar online requests to identify known programs were posted to the SHM web forum Hospital Medicine Exchange (HMX). Fourth, Internet searches were used to discover additional programs. Once those fellowships were identified, surveys were sent to their program directors (PDs). These surveys not only asked the PDs about their fellowship but also asked them to identify additional APP fellowships beyond those that we had captured. Once additional programs were identified, a second round of surveys was sent to their PDs. This was performed in an iterative fashion until no additional fellowships were discovered.

The survey tool was developed and validated internally in the AAMC Survey Development style18 and was influenced by prior validated surveys of postgraduate medical fellowships.10,19-21 Each question was developed by a team with expertise in survey design (Wright and Tackett), and two members of the survey design team (Kisuule and Franco) were themselves PDs of APP fellowships in hospital medicine. The survey was revised iteratively by the team on the basis of meetings and pilot testing with PDs of other programs. All qualitative or descriptive questions had a free-response option so that PDs could answer the survey accurately and exhaustively. The final version of the survey was approved by consensus of all authors. It consisted of 25 multiple-choice questions designed to gather information about the following key areas of APP hospital medicine fellowships: fellowship and learner characteristics, program rationales, curricula, and methods of fellow assessment.

A web-based survey format (Qualtrics) was used to distribute the questionnaire e-mail to the PDs. Follow up e-mail reminders were sent to all nonresponders to encourage full participation. Survey completion was voluntary; no financial incentives or gifts were offered. IRB approval was obtained at Johns Hopkins Bayview (IRB number 00181629). Descriptive statistics (proportions, means, and ranges as appropriate) were calculated for all variables. Stata 13 (StataCorp. 2013. Stata Statistical Software: Release 13. College Station, Texas. StataCorp LP) was used for data analysis.

RESULTS

In total, 11 fellowships were identified using our multimethod approach. We found four (36%) programs by utilizing existing online databases, two (18%) through the SHM questionnaire and HMX forum, three (27%) through internet searches, and the remaining two (18%) were referred to us by the other PDs who were surveyed. Of the programs surveyed, 10 were adult programs and one was a pediatric program. Surveys were sent to the PDs of the 11 fellowships, and all but one of them (10/11, 91%) responded. Respondent programs were given alphabetical designations A through J (Table). 

Fellowship and Individual Characteristics

Most programs have been in existence for five years or fewer. Eighty percent of the programs are about one year in duration; two outlier programs have fellowship lengths of six months and 18 months. The main hospital where training occurs has a mean of 496 beds (range 213 to 900). Ninety percent of the hospitals also have physician residency training programs. Sixty percent of programs enroll two to four fellows per year while 40% enroll five or more. The salary range paid by the programs is $55,000 to >$70,000, and half the programs pay more than $65,000.

The majority of fellows accepted into APP fellowships in hospital medicine are women. Eighty percent of fellows are 26-30 years old, and 90% of fellows have been out of NP or PA school for one year or less. Both NP and PA applicants are accepted in 80% of fellowships.

Program Rationales

All programs reported that training and retaining applicants is the main driver for developing their fellowship, and 50% of them offer financial incentives for retention upon successful completion of the program. Forty percent of PDs stated that there is an implicit or explicit understanding that successful completion of the fellowship would result in further employment. Over the last five years, 89% (range: 71%-100%) of graduates were asked to remain for a full-time position after program completion.

 

 

In addition to training and retention, building an interprofessional team (50%), managing patient volume (30%), and reducing overhead (20%) were also reported as rationales for program development. The majority of programs (80%) have fellows bill for clinical services, and five of those eight programs do so after their fellows become more clinically competent.

Curricula

Of the nine adult programs, 67% teach explicitly to SHM core competencies and 33% send their fellows to the SHM NP/PA Boot Camp. Thirty percent of fellowships partner formally with either a physician residency or a local PA program to develop educational content. Six of the nine programs with active physician residencies, including the pediatric fellowship, offer shared educational experiences for the residents and APPs.

There are notable differences in clinical rotations between the programs (Figure 1). No single rotation is universally required, although general hospital internal medicine is required in all adult fellowships. The majority (80%) of programs offer at least one elective. Six programs reported mandatory rotations outside the department of medicine, most commonly neurology or the stroke service (four programs). Only one program reported only general medicine rotations, with no subspecialty electives.



There are also differences between programs with respect to educational experiences and learning formats (Figure 2). Each fellowship takes a unique approach to clinical instruction; teaching rounds and lecture attendance are the only experiences that are mandatory across the board. Grand rounds are available, but not required, in all programs. Ninety percent of programs offer or require fellow presentations, journal clubs, reading assignments, or scholarly projects. Fellow presentations (70%) and journal club attendance (60%) are required in more than half the programs; however, reading assignments (30%) and scholarly projects (20%) are rarely required.

Methods of Fellow Assessment

Each program surveyed has a unique method of fellow assessment. Ninety percent of the programs use more than one method to assess their fellows. Faculty reviews are most commonly used and are conducted in all rotations in 80% of fellowships. Both self-assessment exercises and written examinations are used in some rotations by the majority of programs. Capstone projects are required infrequently (30%).

DISCUSSION

We found several commonalities between the fellowships surveyed. Many of the program characteristics, such as years in operation, salary, duration, and lack of accreditation, are quite similar. Most fellowships also have a similar rationale for building their programs and use resources from the SHM to inform their curricula. Fellows, on average, share several demographic characteristics, such as age, gender, and time out of schooling. Conversely, we found wide variability in clinical rotations, the general teaching structure, and methods of fellow evaluation.

There have been several publications detailing successful individual APP fellowships in medical subspecialties,22 psychiatry,23 and surgical specialties,24 all of which describe the benefits to the institution. One study found that physician hospitalists have a poor understanding of the training PAs undergo and would favor a standardized curriculum for PA hospitalists.25 Another study compared all PA postgraduate training programs in emergency medicine;19 it also described a small number of relatively young programs with variable curricula and a need for standardization. Yet another paper10 surveyed postgraduate PA programs across all specialties; however, that study only captured two hospital medicine programs, and it was not focused on several key areas studied in this paper—such as the program rationale, curricular elements, and assessment.

It is noteworthy that every program surveyed was created with training and retention in mind, rather than other factors like decreasing overhead or managing patient volume. Training one’s own APPs so that they can learn on the job, come to understand expectations within a group, and witness the culture is extremely valuable. From a patient safety standpoint, it has been documented that physician hospitalists straight out of residency have a higher patient mortality compared with more experienced providers.26 Given the findings that on a national level, the majority of hospitalist NPs and PAs practice autonomously or somewhat autonomously,12 it is reasonable to assume that similar trends of more experienced providers delivering safer care would be expected for APPs, but this remains speculative. From a retention standpoint, it has been well described that high APP turnover is often due to decreased feelings of competence and confidence during their transition from trainees to medical providers.27 APPs who have completed fellowships feel more confident and able to succeed in their field.28 To this point, in one survey of hospitalist PAs, almost all reported that they would have been interested in completing a fellowship, even it meant a lower initial salary.29Despite having the same general goals and using similar national resources, our study reveals that APP fellows are trained and assessed very differently between programs. This might represent an area of future growth in the field of hospitalist APP education. For physician learning, competency-based medical education (CBME) has emerged as a learner centric, outcomes-based model of teaching and assessment that emphasizes mastery of skills and progression through milestones.30 Both the ACGME31 and the SHM32 have described core competencies that provide a framework within CBME for determining readiness for independent practice. 
While we were not surprised to find that each fellowship has its own unique method of determining readiness for practice, these findings suggest that graduates from different programs likely have very different skill sets and aptitude levels. In the future, an active accrediting body could offer guidance in defining hospitalist APP core competencies and help standardize education.

Several limitations to this study should be considered. While we used multiple strategies to locate as many fellowships as possible, it is unlikely that we successfully captured all existing programs, and new programs are being developed annually. We also relied on self-reported data from PDs. While we would expect PDs to provide accurate data, we could not externally validate their answers. Additionally, although our survey tool was reviewed extensively and validated internally, it was developed de novo for this study.

 

 

CONCLUSION

APP fellowships in hospital medicine have experienced marked growth since the first program was described in 2010. The majority of programs are 12 months long, operate in existing teaching centers, and are intended to further enhance the training and retention of newly graduated PAs and NPs. Despite their similarities, fellowships have striking variability in their methods of teaching and assessing their learners. Best practices have yet to be identified, and further study is required to determine how to standardize curricula across the board.

Acknowledgments

The authors thank all program directors who responded to the survey.

Disclosures

The authors report no conflicts of interest.

Funding

This project was supported by the Johns Hopkins School of Medicine Biostatistics, Epidemiology and Data Management (BEAD) Core. Dr. Wright is the Anne Gaines and G. Thomas Miller Professor of Medicine, which is supported through the Johns Hopkins’ Center for Innovative Medicine.

References

1. Auerbach DI, Staiger DO, Buerhaus PI. Growing ranks of advanced practice clinicians — implications for the physician workforce. N Engl J Med. 2018;378(25):2358-2360. doi: 10.1056/nejmp1801869.
2. Darves B. Midlevels make a rocky entrance into hospital medicine. Todays Hospitalist. 2007;5(1):28-32.
3. Polansky M. A historical perspective on postgraduate physician assistant education and the association of postgraduate physician assistant programs. J Physician Assist Educ. 2007;18(3):100-108. doi: 10.1097/01367895-200718030-00014.
4. FNP & AGNP Certification Candidate Handbook. The American Academy of Nurse Practitioners National Certification Board, Inc; 2018. https://www.aanpcert.org/resource/documents/AGNP FNP Candidate Handbook.pdf. Accessed December 20, 2018.
5. Become a PA: Getting Your Prerequisites and Certification. AAPA. https://www.aapa.org/career-central/become-a-pa/. Accessed December 20, 2018.
6. ACGME Common Program Requirements. ACGME; 2017. https://www.acgme.org/Portals/0/PFAssets/ProgramRequirements/CPRs_2017-07-01.pdf. Accessed December 20, 2018.
7. Committee on the Learning Health Care System in America; Institute of Medicine. Smith M, Saunders R, Stuckhardt L, McGinnis JM, eds. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Washington, DC: National Academies Press; 2013.
8. The Future of Nursing: Leading Change, Advancing Health. The National Academies Press; 2014. https://www.nap.edu/read/12956/chapter/1. Accessed December 16, 2018.
9. Hussaini SS, Bushardt RL, Gonsalves WC, et al. Accreditation and implications of clinical postgraduate PA training programs. JAAPA. 2016;29(5):1-7. doi: 10.1097/01.jaa.0000482298.17821.fb.
10. Polansky M, Garver GJH, Hilton G. Postgraduate clinical education of physician assistants. J Physician Assist Educ. 2012;23(1):39-45. doi: 10.1097/01367895-201223010-00008.
11. Will KK, Budavari AI, Wilkens JA, Mishark K, Hartsell ZC. A hospitalist postgraduate training program for physician assistants. J Hosp Med. 2010;5(2):94-98. doi: 10.1002/jhm.619.
12. Kartha A, Restuccia JD, Burgess JF, et al. Nurse practitioner and physician assistant scope of practice in 118 acute care hospitals. J Hosp Med. 2014;9(10):615-620. doi: 10.1002/jhm.2231.
13. Singh S, Fletcher KE, Schapira MM, et al. A comparison of outcomes of general medical inpatient care provided by a hospitalist-physician assistant model vs a traditional resident-based model. J Hosp Med. 2011;6(3):122-130. doi: 10.1002/jhm.826.
14. Hussaini SS, Bushardt RL, Gonsalves WC, et al. Accreditation and implications of clinical postgraduate PA training programs. JAAPA. 2016;29(5):1-7. doi: 10.1097/01.jaa.0000482298.17821.fb.
15. Postgraduate Programs. ARC-PA. http://www.arc-pa.org/accreditation/postgraduate-programs. Accessed September 13, 2018.
16. National Nurse Practitioner Residency & Fellowship Training Consortium: Mission. https://www.nppostgradtraining.com/About-Us/Mission. Accessed September 27, 2018.
17. NP/PA Boot Camp. Society of Hospital Medicine. http://www.hospitalmedicine.org/events/nppa-boot-camp. Accessed September 13, 2018.
18. Gehlbach H, Artino AR Jr, Durning SJ. AM last page: survey development guidance for medical education researchers. Acad Med. 2010;85(5):925. doi: 10.1097/ACM.0b013e3181dd3e88.
19. Kraus C, Carlisle T, Carney D. Emergency medicine physician assistant (EMPA) post-graduate training programs: program characteristics and training curricula. West J Emerg Med. 2018;19(5):803-807. doi: 10.5811/westjem.2018.6.37892.
20. Shah NH, Rhim HJH, Maniscalco J, Wilson K, Rassbach C. The current state of pediatric hospital medicine fellowships: a survey of program directors. J Hosp Med. 2016;11(5):324-328. doi: 10.1002/jhm.2571.
21. Thompson BM, Searle NS, Gruppen LD, Hatem CJ, Nelson E. A national survey of medical education fellowships. Med Educ Online. 2011;16(1):5642. doi: 10.3402/meo.v16i0.5642.
22. Hooker R. A physician assistant rheumatology fellowship. JAAPA. 2013;26(6):49-52. doi: 10.1097/01.jaa.0000430346.04435.e4.
23. Keizer T, Trangle M. The benefits of a physician assistant and/or nurse practitioner psychiatric postgraduate training program. Acad Psychiatry. 2015;39(6):691-694. doi: 10.1007/s40596-015-0331-z.
24. Miller A, Weiss J, Hill V, Lindaman K, Emory C. Implementation of a postgraduate orthopaedic physician assistant fellowship for improved specialty training. JBJS Journal of Orthopaedics for Physician Assistants. 2017. doi: 10.2106/jbjs.jopa.17.00021.
25. Sharma P, Brooks M, Roomiany P, Verma L, Criscione-Schreiber L. Physician assistant student training for the inpatient setting. J Physician Assist Educ. 2017;28(4):189-195. doi: 10.1097/jpa.0000000000000174.
26. Goodwin JS, Salameh H, Zhou J, Singh S, Kuo Y-F, Nattinger AB. Association of hospitalist years of experience with mortality in the hospitalized Medicare population. JAMA Intern Med. 2018;178(2):196. doi: 10.1001/jamainternmed.2017.7049.
27. Barnes H. Exploring the factors that influence nurse practitioner role transition. J Nurse Pract. 2015;11(2):178-183. doi: 10.1016/j.nurpra.2014.11.004.
28. Will K, Williams J, Hilton G, Wilson L, Geyer H. Perceived efficacy and utility of postgraduate physician assistant training programs. JAAPA. 2016;29(3):46-48. doi: 10.1097/01.jaa.0000480569.39885.c8.
29. Torok H, Lackner C, Landis R, Wright S. Learning needs of physician assistants working in hospital medicine. J Hosp Med. 2011;7(3):190-194. doi: 10.1002/jhm.1001.
30. ten Cate O. Competency-based postgraduate medical education: past, present and future. GMS J Med Educ. 2017;34(5). doi: 10.3205/zma001146.
31. Exploring the ACGME Core Competencies (Part 1 of 7). NEJM Knowledge+. https://knowledgeplus.nejm.org/blog/exploring-acgme-core-competencies/. Accessed October 24, 2018.
32. Core Competencies. Society of Hospital Medicine. http://www.hospitalmedicine.org/professional-development/core-competencies/. Accessed October 24, 2018.

Journal of Hospital Medicine. 2019;14(7):401-406. Published online first April 8, 2019.

Postgraduate training for physician assistants (PAs) and nurse practitioners (NPs) is a rapidly evolving field. It has been estimated that the number of these advanced practice providers (APPs) almost doubled between 2000 and 2016 (from 15.3 to 28.2 per 100 physicians) and is expected to double again by 2030.1 As APPs continue to become a progressively larger part of the healthcare workforce, medical organizations are seeking more comprehensive strategies to train and mentor them.2 This has led to the development of formal postgraduate programs, often called APP fellowships.

Historically, postgraduate APP fellowships have functioned to help bridge the gap in clinical practice experience between physicians and APPs.3 This gap is evident in hours of clinical training. Whereas NPs are generally expected to complete 500-1,500 hours of clinical practice before graduating,4 and PAs are expected to complete 2,000 hours,5 most physicians will complete over 15,000 hours of clinical training by the end of residency.6 As increasing patient complexity continues to challenge the healthcare workforce,7 both the NP and the PA leadership have recommended increased training of graduates and outcome studies of formal postgraduate fellowships.8,9 In 2007, there were over 60 of these programs in the United States,10 most of them offering training in surgical specialties.

First described in 2010 by the Mayo Clinic,11 APP fellowships in hospital medicine are also being developed. These programs are built to improve the training of nonphysician hospitalists, who often work independently12 and manage medically complex patients.13 However, little is known about the number or structure of these fellowships. The limited understanding of the current APP fellowship environment is partly due to the lack of an administrative body overseeing these programs.14 The Accreditation Review Commission on Education for the Physician Assistant (ARC-PA) pioneered a model in 2007 for postgraduate PA programs, but it has been held in abeyance since 2014.15 Both the American Nurses Credentialing Center and the National Nurse Practitioner Residency and Fellowship Training Consortium have fellowship accreditation review processes, but they are not specific to hospital medicine.16 The Society of Hospital Medicine (SHM) has several resources for the training of APPs;17 however, it neither reviews nor accredits fellowship programs. Without standards, guidelines, or active accrediting bodies, APP fellowships in hospital medicine are poorly understood and are of unknown efficacy. The purpose of this study was to identify and describe the active APP fellowships in hospital medicine.

METHODS

This was a cross-sectional study of all APP adult and pediatric fellowships in hospital medicine in the United States that were identifiable through May 2018. Multiple methods were used to identify all active fellowships. First, all training programs offering a hospital medicine fellowship in the ARC-PA and Association of Postgraduate PA Programs databases were noted. Second, questionnaires were distributed at the NP/PA forum at the national SHM conference in 2018 to gather information on existing APP fellowships. Third, similar requests to identify known programs were posted online to the SHM web forum Hospital Medicine Exchange (HMX). Fourth, internet searches were used to discover additional programs. Once those fellowships were identified, surveys were sent to their program directors (PDs). These surveys not only asked the PDs about their own fellowship but also asked them to identify additional APP fellowships beyond those we had captured. Once additional programs were identified, a second round of surveys was sent to their PDs. This process was repeated iteratively until no additional fellowships were discovered.
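The iterative, snowball-style identification process described above can be sketched as a simple loop: survey every newly discovered program's PD, collect any referrals, and stop when a round yields nothing new. The sketch below is purely illustrative; the program names and the referral map are hypothetical and are not data from this study.

```python
# Illustrative sketch of the iterative (snowball) identification process
# described in the Methods. All names and referrals here are hypothetical.

def identify_fellowships(seed_programs, survey):
    """Survey PDs iteratively until no new programs are referred."""
    known = set(seed_programs)      # seeds: databases, SHM forum, web searches
    to_survey = set(seed_programs)
    while to_survey:
        referrals = set()
        for program in to_survey:
            # each surveyed PD may refer additional fellowships
            referrals.update(survey(program))
        to_survey = referrals - known   # only newly discovered programs
        known.update(referrals)
    return known

# Hypothetical referral network: A refers B, B refers C, C refers no one.
referral_map = {"A": {"B"}, "B": {"C"}, "C": set()}
programs = identify_fellowships({"A"}, lambda p: referral_map[p])  # {"A", "B", "C"}
```

The loop terminates because each round either discovers at least one previously unknown program or ends the search, mirroring the study's stopping rule.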

 

 

The survey tool was developed and validated internally in the AAMC Survey Development style18 and was influenced by prior validated surveys of postgraduate medical fellowships.10,19-21 Each question was developed by a team with expertise in survey design (Wright and Tackett); two members of the survey design team were themselves PDs of APP fellowships in hospital medicine (Kisuule and Franco). The survey was revised iteratively by the team on the basis of meetings and pilot testing with PDs of other programs. All qualitative or descriptive questions included a free-response option so that PDs could answer the survey accurately and exhaustively. The final version of the survey was approved by consensus of all authors. It consisted of 25 multiple-choice questions designed to gather information about the following key areas of APP hospital medicine fellowships: fellowship and learner characteristics, program rationales, curricula, and methods of fellow assessment.

A web-based survey platform (Qualtrics) was used to distribute the questionnaire by e-mail to the PDs. Follow-up e-mail reminders were sent to all nonresponders to encourage full participation. Survey completion was voluntary; no financial incentives or gifts were offered. IRB approval was obtained at Johns Hopkins Bayview (IRB number 00181629). Descriptive statistics (proportions, means, and ranges, as appropriate) were calculated for all variables. Stata 13 (StataCorp LP, College Station, Texas) was used for data analysis.
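The descriptive analysis named above (proportions, means, and ranges) involves only elementary computations. As a minimal sketch, assuming invented values rather than the study's actual data, it could be reproduced as follows:

```python
# Minimal sketch of the descriptive statistics described in the Methods
# (proportions, means, ranges). The bed counts below are invented for
# illustration and are NOT the study's data.

def describe(values):
    """Return the mean and the range (min, max) of a numeric variable."""
    return sum(values) / len(values), (min(values), max(values))

def proportion(count, total):
    """Proportion expressed as a whole-number percentage."""
    return round(100 * count / total)

beds = [213, 350, 500, 900]        # hypothetical hospital bed counts
mean_beds, bed_range = describe(beds)

# e.g., survey response rate: 10 of 11 PDs responded
response_rate = proportion(10, 11)  # -> 91
```

This mirrors how figures such as the 91% response rate and the bed-count mean and range reported in the Results would be derived.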

RESULTS

In total, 11 fellowships were identified using our multimethod approach. We found four programs (36%) through existing online databases, two (18%) through the SHM questionnaire and HMX forum, and three (27%) through internet searches; the remaining two (18%) were referred to us by other PDs who were surveyed. Of the programs identified, 10 were adult programs and one was a pediatric program. Surveys were sent to the PDs of the 11 fellowships, and all but one (10/11, 91%) responded. Respondent programs were given alphabetical designations A through J (Table).

Fellowship and Individual Characteristics

Most programs have been in existence for five years or fewer. Eighty percent of the programs are about one year in duration; the two outliers have fellowship lengths of six months and 18 months. The main hospitals where training occurs have a mean of 496 beds (range: 213 to 900). Ninety percent of the hospitals also have physician residency training programs. Sixty percent of programs enroll two to four fellows per year, while 40% enroll five or more. Salaries range from $55,000 to more than $70,000, and half the programs pay more than $65,000.

The majority of fellows accepted into APP fellowships in hospital medicine are women. Eighty percent of fellows are 26-30 years old, and 90% of fellows have been out of NP or PA school for one year or less. Both NP and PA applicants are accepted in 80% of fellowships.

Program Rationales

All programs reported that training and retaining applicants is the main driver for developing their fellowship, and 50% offer financial incentives for retention upon successful completion of the program. Forty percent of PDs stated that there is an implicit or explicit understanding that successful completion of the fellowship would result in further employment. Over the last five years, on average, 89% (range: 71%-100%) of graduates were asked to remain in a full-time position after program completion.

 

 

In addition to training and retention, building an interprofessional team (50%), managing patient volume (30%), and reducing overhead (20%) were also reported as rationales for program development. The majority of programs (80%) have fellows bill for clinical services, and five of those eight programs do so only after their fellows become more clinically competent.

Curricula

Of the nine adult programs, 67% teach explicitly to the SHM core competencies and 33% send their fellows to the SHM NP/PA Boot Camp. Thirty percent of fellowships partner formally with either a physician residency or a local PA program to develop educational content. Six of the nine programs based at hospitals with active physician residencies, including the pediatric fellowship, offer shared educational experiences for residents and APP fellows.

There are notable differences in clinical rotations between the programs (Figure 1). No single rotation is universally required, although general hospital internal medicine is required in all adult fellowships. The majority of programs (80%) offer at least one elective. Six programs require rotations outside the department of medicine, most commonly neurology or the stroke service (four programs). Only one program offers general medicine rotations exclusively, with no subspecialty electives.



There are also differences between programs with respect to educational experiences and learning formats (Figure 2). Each fellowship takes a unique approach to clinical instruction; teaching rounds and lecture attendance are the only experiences that are mandatory across the board. Grand rounds are available, but not required, in all programs. Ninety percent of programs offer or require fellow presentations, journal clubs, reading assignments, or scholarly projects. Fellow presentations (70%) and journal club attendance (60%) are required in more than half the programs; however, reading assignments (30%) and scholarly projects (20%) are rarely required.

Methods of Fellow Assessment

Each program surveyed has a unique method of fellow assessment. Ninety percent of the programs use more than one method to assess their fellows. Faculty reviews are most commonly used and are conducted in all rotations in 80% of fellowships. Both self-assessment exercises and written examinations are used in some rotations by the majority of programs. Capstone projects are required infrequently (30%).

DISCUSSION

We found several commonalities between the fellowships surveyed. Many of the program characteristics, such as years in operation, salary, duration, and lack of accreditation, are quite similar. Most fellowships also have a similar rationale for building their programs and use resources from the SHM to inform their curricula. Fellows, on average, share several demographic characteristics, such as age, gender, and time out of schooling. Conversely, we found wide variability in clinical rotations, the general teaching structure, and methods of fellow evaluation.

There have been several publications detailing successful individual APP fellowships in medical subspecialties,22 psychiatry,23 and surgical specialties,24 all of which describe the benefits to the institution. One study found that physician hospitalists have a poor understanding of the training PAs undergo and would favor a standardized curriculum for PA hospitalists.25 Another study compared all PA postgraduate training programs in emergency medicine;19 it also described a small number of relatively young programs with variable curricula and a need for standardization. Yet another paper10 surveyed postgraduate PA programs across all specialties; however, that study only captured two hospital medicine programs, and it was not focused on several key areas studied in this paper—such as the program rationale, curricular elements, and assessment.

It is noteworthy that every program surveyed was created with training and retention in mind, rather than other factors like decreasing overhead or managing patient volume. Training one's own APPs so that they can learn on the job, come to understand expectations within a group, and experience the culture is extremely valuable. From a patient safety standpoint, it has been documented that physician hospitalists straight out of residency have higher patient mortality compared with more experienced providers.26 Given that, on a national level, the majority of hospitalist NPs and PAs practice autonomously or somewhat autonomously,12 it is reasonable to assume that a similar trend of more experienced providers delivering safer care would hold for APPs, but this remains speculative. From a retention standpoint, it has been well described that high APP turnover is often due to decreased feelings of competence and confidence during the transition from trainee to medical provider.27 APPs who have completed fellowships feel more confident and able to succeed in their field.28 To this point, in one survey of hospitalist PAs, almost all reported that they would have been interested in completing a fellowship, even if it meant a lower initial salary.29

Despite having the same general goals and using similar national resources, our study reveals that APP fellows are trained and assessed very differently across programs. This might represent an area of future growth in the field of hospitalist APP education. For physician learning, competency-based medical education (CBME) has emerged as a learner-centric, outcomes-based model of teaching and assessment that emphasizes mastery of skills and progression through milestones.30 Both the ACGME31 and the SHM32 have described core competencies that provide a framework within CBME for determining readiness for independent practice.
While we were not surprised to find that each fellowship has its own unique method of determining readiness for practice, these findings suggest that graduates from different programs likely have very different skill sets and aptitude levels. In the future, an active accrediting body could offer guidance in defining hospitalist APP core competencies and help standardize education.

Several limitations to this study should be considered. While we used multiple strategies to locate as many fellowships as possible, it is unlikely that we successfully captured all existing programs, and new programs are being developed annually. We also relied on self-reported data from PDs. While we would expect PDs to provide accurate data, we could not externally validate their answers. Additionally, although our survey tool was reviewed extensively and validated internally, it was developed de novo for this study.

 

 

CONCLUSION

APP fellowships in hospital medicine have experienced marked growth since the first program was described in 2010. The majority of programs are 12 months long, operate in existing teaching centers, and are intended to further enhance the training and retention of newly graduated PAs and NPs. Despite their similarities, fellowships have striking variability in their methods of teaching and assessing their learners. Best practices have yet to be identified, and further study is required to determine how to standardize curricula across the board.

Acknowledgments

The authors thank all program directors who responded to the survey.

Disclosures

The authors report no conflicts of interest.

Funding

This project was supported by the Johns Hopkins School of Medicine Biostatistics, Epidemiology and Data Management (BEAD) Core. Dr. Wright is the Anne Gaines and G. Thomas Miller Professor of Medicine, which is supported through the Johns Hopkins Center for Innovative Medicine.


METHODS

This was a cross-sectional study of all APP adult and pediatric fellowships in hospital medicine, in the United States, that were identifiable through May 2018. Multiple methods were used to identify all active fellowships. First, all training programs offering a Hospital Medicine Fellowship in the ARC-PA and Association of Postgraduate PA Programs databases were noted. Second, questionnaires were given out at the NP/PA forum at the national SHM conference in 2018 to gather information on existing APP fellowships. Third, similar online requests to identify known programs were posted to the SHM web forum Hospital Medicine Exchange (HMX). Fourth, Internet searches were used to discover additional programs. Once those fellowships were identified, surveys were sent to their program directors (PDs). These surveys not only asked the PDs about their fellowship but also asked them to identify additional APP fellowships beyond those that we had captured. Once additional programs were identified, a second round of surveys was sent to their PDs. This was performed in an iterative fashion until no additional fellowships were discovered.
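The iterative identification process described above is essentially snowball sampling: survey the known program directors, add any newly referred programs, and repeat until a round yields nothing new. A minimal sketch follows; the program names and the `survey_program_director` referral data are hypothetical placeholders, not data from this study.

```python
def survey_program_director(program):
    """Stand-in for surveying a PD; returns programs that PD refers us to.
    The referral table below is entirely hypothetical."""
    referrals = {
        "Program A": ["Program C"],
        "Program B": [],
        "Program C": [],
    }
    return referrals.get(program, [])

def identify_fellowships(seed_programs):
    """Survey PDs in rounds until no additional fellowships are discovered."""
    known = set(seed_programs)
    to_survey = list(seed_programs)
    while to_survey:  # stop once a round produces no new referrals
        next_round = []
        for program in to_survey:
            for referred in survey_program_director(program):
                if referred not in known:
                    known.add(referred)
                    next_round.append(referred)
        to_survey = next_round
    return known

print(sorted(identify_fellowships(["Program A", "Program B"])))
# → ['Program A', 'Program B', 'Program C']
```

The fixed point (a round with no new referrals) is what lets the authors claim the search was exhaustive with respect to the referral network, even though programs outside that network could still be missed.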

The survey tool was developed and validated internally in the AAMC Survey Development style18 and was influenced by prior validated surveys of postgraduate medical fellowships.10,19-21 Each question was developed by a team with expertise in survey design (Wright and Tackett), and two members of the survey design team were themselves PDs of APP fellowships in hospital medicine (Kisuule and Franco). The survey was revised iteratively by the team on the basis of meetings and pilot testing with PDs of other programs. All qualitative or descriptive questions had a free response option available to allow PDs to answer the survey accurately and exhaustively. The final version of the survey was approved by consensus of all authors. It consisted of 25 multiple-choice questions designed to gather information about the following key areas of APP hospital medicine fellowships: fellowship and learner characteristics, program rationales, curricula, and methods of fellow assessment.

A web-based survey platform (Qualtrics) was used to distribute the questionnaire by e-mail to the PDs. Follow-up e-mail reminders were sent to all nonresponders to encourage full participation. Survey completion was voluntary; no financial incentives or gifts were offered. IRB approval was obtained at Johns Hopkins Bayview (IRB number 00181629). Descriptive statistics (proportions, means, and ranges as appropriate) were calculated for all variables. Stata 13 (StataCorp. 2013. Stata Statistical Software: Release 13. College Station, Texas. StataCorp LP) was used for data analysis.
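The descriptive statistics named above (proportions, means, ranges) are straightforward to reproduce. The study used Stata 13; the Python sketch below only illustrates the idea on invented numbers — the bed counts and acceptance flags are hypothetical, not the study's data.

```python
from statistics import mean

# Hypothetical survey responses, for illustration only
bed_counts = [213, 350, 420, 500, 640, 900]            # beds at each main hospital
accepts_both = [True, True, False, True, True, False]  # program accepts both NPs and PAs

print(f"mean beds: {mean(bed_counts):.0f}")                    # mean beds: 504
print(f"range: {min(bed_counts)}-{max(bed_counts)}")           # range: 213-900
share = sum(accepts_both) / len(accepts_both)
print(f"proportion accepting both NPs and PAs: {share:.0%}")   # 67%
```

In Stata, the equivalent would be `summarize` for means and ranges and `tabulate` for proportions.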

RESULTS

In total, 11 fellowships were identified using our multimethod approach. We found four programs (36%) through existing online databases, two (18%) through the SHM questionnaire and HMX forum, and three (27%) through internet searches; the remaining two (18%) were referred to us by other PDs who were surveyed. Of the programs identified, 10 were adult programs and one was a pediatric program. Surveys were sent to the PDs of the 11 fellowships, and all but one (10/11, 91%) responded. Respondent programs were given alphabetical designations A through J (Table).

Fellowship and Individual Characteristics

Most programs have been in existence for five years or fewer. Eighty percent of the programs are about one year in duration; two outlier programs have fellowship lengths of six months and 18 months. The main hospital where training occurs has a mean of 496 beds (range 213 to 900). Ninety percent of the hospitals also have physician residency training programs. Sixty percent of programs enroll two to four fellows per year while 40% enroll five or more. The salary range paid by the programs is $55,000 to >$70,000, and half the programs pay more than $65,000.

The majority of fellows accepted into APP fellowships in hospital medicine are women. Eighty percent of fellows are 26-30 years old, and 90% of fellows have been out of NP or PA school for one year or less. Both NP and PA applicants are accepted in 80% of fellowships.

Program Rationales

All programs reported that training and retaining applicants is the main driver for developing their fellowship, and 50% of them offer financial incentives for retention upon successful completion of the program. Forty percent of PDs stated that there is an implicit or explicit understanding that successful completion of the fellowship would result in further employment. Over the last five years, 89% (range: 71%-100%) of graduates were asked to remain for a full-time position after program completion.

In addition to training and retention, building an interprofessional team (50%), managing patient volume (30%), and reducing overhead (20%) were also reported as rationales for program development. The majority of programs (80%) have fellows bill for clinical services; five of those eight programs begin billing only after their fellows become more clinically competent.

Curricula

Of the nine adult programs, 67% teach explicitly to SHM core competencies and 33% send their fellows to the SHM NP/PA Boot Camp. Thirty percent of fellowships partner formally with either a physician residency or a local PA program to develop educational content. Six of the nine programs with active physician residencies, including the pediatric fellowship, offer shared educational experiences for the residents and APPs.

There are notable differences in clinical rotations between the programs (Figure 1). No single rotation is universally required, although general hospital internal medicine is required in all adult fellowships. The majority (80%) of programs offer at least one elective. Six programs reported mandatory rotations outside the department of medicine, most commonly neurology or the stroke service (four programs). Only one program reported only general medicine rotations, with no subspecialty electives.

There are also differences between programs with respect to educational experiences and learning formats (Figure 2). Each fellowship takes a unique approach to clinical instruction; teaching rounds and lecture attendance are the only experiences that are mandatory across the board. Grand rounds are available, but not required, in all programs. Ninety percent of programs offer or require fellow presentations, journal clubs, reading assignments, or scholarly projects. Fellow presentations (70%) and journal club attendance (60%) are required in more than half the programs; however, reading assignments (30%) and scholarly projects (20%) are rarely required.

Methods of Fellow Assessment

Each program surveyed has a unique method of fellow assessment. Ninety percent of the programs use more than one method to assess their fellows. Faculty reviews are most commonly used and are conducted in all rotations in 80% of fellowships. Both self-assessment exercises and written examinations are used in some rotations by the majority of programs. Capstone projects are required infrequently (30%).

DISCUSSION

We found several commonalities between the fellowships surveyed. Many of the program characteristics, such as years in operation, salary, duration, and lack of accreditation, are quite similar. Most fellowships also have a similar rationale for building their programs and use resources from the SHM to inform their curricula. Fellows, on average, share several demographic characteristics, such as age, gender, and time out of schooling. Conversely, we found wide variability in clinical rotations, the general teaching structure, and methods of fellow evaluation.

There have been several publications detailing successful individual APP fellowships in medical subspecialties,22 psychiatry,23 and surgical specialties,24 all of which describe the benefits to the institution. One study found that physician hospitalists have a poor understanding of the training PAs undergo and would favor a standardized curriculum for PA hospitalists.25 Another study compared all PA postgraduate training programs in emergency medicine;19 it also described a small number of relatively young programs with variable curricula and a need for standardization. Yet another paper10 surveyed postgraduate PA programs across all specialties; however, that study only captured two hospital medicine programs, and it was not focused on several key areas studied in this paper—such as the program rationale, curricular elements, and assessment.

It is noteworthy that every program surveyed was created with training and retention in mind, rather than other factors like decreasing overhead or managing patient volume. Training one's own APPs so that they can learn on the job, come to understand expectations within a group, and witness the culture is extremely valuable. From a patient safety standpoint, it has been documented that physician hospitalists straight out of residency have a higher patient mortality compared with more experienced providers.26 Given the findings that on a national level, the majority of hospitalist NPs and PAs practice autonomously or somewhat autonomously,12 it is reasonable to assume that similar trends of more experienced providers delivering safer care would be expected for APPs, but this remains speculative. From a retention standpoint, it has been well described that high APP turnover is often due to decreased feelings of competence and confidence during the transition from trainee to medical provider.27 APPs who have completed fellowships feel more confident and able to succeed in their field.28 To this point, in one survey of hospitalist PAs, almost all reported that they would have been interested in completing a fellowship, even if it meant a lower initial salary.29

Despite having the same general goals and using similar national resources, our study reveals that APP fellows are trained and assessed very differently between programs. This might represent an area of future growth in the field of hospitalist APP education. For physician learning, competency-based medical education (CBME) has emerged as a learner-centric, outcomes-based model of teaching and assessment that emphasizes mastery of skills and progression through milestones.30 Both the ACGME31 and the SHM32 have described core competencies that provide a framework within CBME for determining readiness for independent practice.
While we were not surprised to find that each fellowship has its own unique method of determining readiness for practice, these findings suggest that graduates from different programs likely have very different skill sets and aptitude levels. In the future, an active accrediting body could offer guidance in defining hospitalist APP core competencies and help standardize education.

Several limitations to this study should be considered. While we used multiple strategies to locate as many fellowships as possible, it is unlikely that we successfully captured all existing programs, and new programs are being developed annually. We also relied on self-reported data from PDs. While we would expect PDs to provide accurate data, we could not externally validate their answers. Additionally, although our survey tool was reviewed extensively and validated internally, it was developed de novo for this study.

CONCLUSION

APP fellowships in hospital medicine have experienced marked growth since the first program was described in 2010. The majority of programs are 12 months long, operate in existing teaching centers, and are intended to further enhance the training and retention of newly graduated PAs and NPs. Despite their similarities, fellowships have striking variability in their methods of teaching and assessing their learners. Best practices have yet to be identified, and further study is required to determine how to standardize curricula across the board.

Acknowledgments

The authors thank all program directors who responded to the survey.

Disclosures

The authors report no conflicts of interest.

Funding

This project was supported by the Johns Hopkins School of Medicine Biostatistics, Epidemiology and Data Management (BEAD) Core. Dr. Wright is the Anne Gaines and G. Thomas Miller Professor of Medicine, which is supported through the Johns Hopkins’ Center for Innovative Medicine.

References

1. Auerbach DI, Staiger DO, Buerhaus PI. Growing ranks of advanced practice clinicians — implications for the physician workforce. N Engl J Med. 2018;378(25):2358-2360. doi: 10.1056/nejmp1801869. PubMed
2. Darves B. Midlevels make a rocky entrance into hospital medicine. Todays Hospitalist. 2007;5(1):28-32. 
3. Polansky M. A historical perspective on postgraduate physician assistant education and the association of postgraduate physician assistant programs. J Physician Assist Educ. 2007;18(3):100-108. doi: 10.1097/01367895-200718030-00014. 
4. FNP & AGNP Certification Candidate Handbook. The American Academy of Nurse Practitioners National Certification Board, Inc; 2018. https://www.aanpcert.org/resource/documents/AGNP FNP Candidate Handbook.pdf. Accessed December 20, 2018
5. Become a PA: Getting Your Prerequisites and Certification. AAPA. https://www.aapa.org/career-central/become-a-pa/. Accessed December 20, 2018.
6. ACGME Common Program Requirements. ACGME; 2017. https://www.acgme.org/Portals/0/PFAssets/ProgramRequirements/CPRs_2017-07-01.pdf. Accessed December 20, 2018
7. Committee on the Learning Health Care System in America; Institute of Medicine, Smith MD, Smith M, Saunders R, Stuckhardt L, McGinnis JM. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Washington, DC: National Academies Press; 2013. PubMed
8. The Future of Nursing: Leading Change, Advancing Health. The National Academies Press; 2014. https://www.nap.edu/read/12956/chapter/1. Accessed December 16, 2018.
9. Hussaini SS, Bushardt RL, Gonsalves WC, et al. Accreditation and implications of clinical postgraduate pa training programs. JAAPA. 2016:29:1-7. doi: 10.1097/01.jaa.0000482298.17821.fb. PubMed
10. Polansky M, Garver GJH, Hilton G. Postgraduate clinical education of physician assistants. J Physician Assist Educ. 2012;23(1):39-45. doi: 10.1097/01367895-201223010-00008. 
11. Will KK, Budavari AI, Wilkens JA, Mishark K, Hartsell ZC. A hospitalist postgraduate training program for physician assistants. J Hosp Med. 2010;5(2):94-98. doi: 10.1002/jhm.619. PubMed
12. Kartha A, Restuccia JD, Burgess JF, et al. Nurse practitioner and physician assistant scope of practice in 118 acute care hospitals. J Hosp Med. 2014;9(10):615-620. doi: 10.1002/jhm.2231. PubMed
13. Singh S, Fletcher KE, Schapira MM, et al. A comparison of outcomes of general medical inpatient care provided by a hospitalist-physician assistant model vs a traditional resident-based model. J Hosp Med. 2011;6(3):122-130. doi: 10.1002/jhm.826. PubMed
14. Hussaini SS, Bushardt RL, Gonsalves WC, et al. Accreditation and implications of clinical postgraduate PA training programs. JAAPA. 2016;29(5):1-7. doi: 10.1097/01.jaa.0000482298.17821.fb. PubMed
15. Postgraduate Programs. ARC-PA. http://www.arc-pa.org/accreditation/postgraduate-programs. Accessed September 13, 2018.
16. National Nurse Practitioner Residency & Fellowship Training Consortium: Mission. https://www.nppostgradtraining.com/About-Us/Mission. Accessed September 27, 2018.
17. NP/PA Boot Camp. State of Hospital Medicine | Society of Hospital Medicine. http://www.hospitalmedicine.org/events/nppa-boot-camp. Accessed September 13, 2018.
18. Gehlbach H, Artino AR Jr, Durning SJ. AM last page: survey development guidance for medical education researchers. Acad Med. 2010;85(5):925. doi: 10.1097/ACM.0b013e3181dd3e88. PubMed
19. Kraus C, Carlisle T, Carney D. Emergency Medicine Physician Assistant (EMPA) post-graduate training programs: program characteristics and training curricula. West J Emerg Med. 2018;19(5):803-807. doi: 10.5811/westjem.2018.6.37892. 
20. Shah NH, Rhim HJH, Maniscalco J, Wilson K, Rassbach C. The current state of pediatric hospital medicine fellowships: A survey of program directors. J Hosp Med. 2016;11(5):324-328. doi: 10.1002/jhm.2571. PubMed
21. Thompson BM, Searle NS, Gruppen LD, Hatem CJ, Nelson E. A national survey of medical education fellowships. Med Educ Online. 2011;16(1):5642. doi: 10.3402/meo.v16i0.5642. PubMed
22. Hooker R. A physician assistant rheumatology fellowship. JAAPA. 2013;26(6):49-52. doi: 10.1097/01.jaa.0000430346.04435.e4 PubMed
23. Keizer T, Trangle M. The benefits of a physician assistant and/or nurse practitioner psychiatric postgraduate training program. Acad Psychiatry. 2015;39(6):691-694. doi: 10.1007/s40596-015-0331-z. PubMed
24. Miller A, Weiss J, Hill V, Lindaman K, Emory C. Implementation of a postgraduate orthopaedic physician assistant fellowship for improved specialty training. JBJS Journal of Orthopaedics for Physician Assistants. 2017:1. doi: 10.2106/jbjs.jopa.17.00021. 
25. Sharma P, Brooks M, Roomiany P, Verma L, Criscione-Schreiber L. Physician assistant student training for the inpatient setting. J Physician Assist Educ. 2017;28(4):189-195. doi: 10.1097/jpa.0000000000000174. PubMed
26. Goodwin JS, Salameh H, Zhou J, Singh S, Kuo Y-F, Nattinger AB. Association of hospitalist years of experience with mortality in the hospitalized medicare population. JAMA Intern Med. 2018;178(2):196. doi: 10.1001/jamainternmed.2017.7049. PubMed
27. Barnes H. Exploring the factors that influence nurse practitioner role transition. J Nurse Pract. 2015;11(2):178-183. doi: 10.1016/j.nurpra.2014.11.004. PubMed
28. Will K, Williams J, Hilton G, Wilson L, Geyer H. Perceived efficacy and utility of postgraduate physician assistant training programs. JAAPA. 2016;29(3):46-48. doi: 10.1097/01.jaa.0000480569.39885.c8. PubMed
29. Torok H, Lackner C, Landis R, Wright S. Learning needs of physician assistants working in hospital medicine. J Hosp Med. 2011;7(3):190-194. doi: 10.1002/jhm.1001. PubMed
30. Cate O. Competency-based postgraduate medical education: past, present and future. GMS J Med Educ. 2017:34(5). doi: 10.3205/zma001146. PubMed
31. Exploring the ACGME Core Competencies (Part 1 of 7). NEJM Knowledge. https://knowledgeplus.nejm.org/blog/exploring-acgme-core-competencies/. Accessed October 24, 2018.
32. Core Competencies. Core Competencies | Society of Hospital Medicine. http://www.hospitalmedicine.org/professional-development/core-competencies/. Accessed October 24, 2018.


Issue
Journal of Hospital Medicine 14(7)
Page Number
401-406. Published online first April 8, 2019.
Article Source
© 2019 Society of Hospital Medicine
Correspondence Location
David Klimpl, MD; E-mail: David.klimpl@gmail.com; Telephone: 720-848-4289

The Authors Reply, “Pilot Study Aiming to Support Sleep Quality and Duration During Hospitalizations”


We thank the authors for their comments and thoughts about our recent publication.1 Their suggestion that incorporating principles from “Nudge Theory” might enhance the impact of our sleep intervention and shorten the lag time until patients appreciate the benefits is interesting.2 Our study aimed to assess the effect of a sleep-promoting intervention on sleep quality and duration among hospitalized patients within a quasi-experimental prospective study design. As is the case at the University of Chicago hospital described in Machado’s letter, nocturnal disruptions are also the “default” in order sets in our electronic medical records (EMR). Because the EMR team at our hospital is stretched thin with more requests than it can fulfill, it was not feasible to incorporate any sleep-supporting changes when designing the pilot.

Complementing sleep-promoting procedures for hospitalized patients with “nudge” principles, such as the use of choice architecture with appropriate EMR defaults or even incentives and mappings, seems like a wise recommendation.3 Regular nudges may be helpful for sustaining any multicomponent interventions in healthcare delivery that rely on cooperation by multiple parties. Evidence is growing that “nudge” principles can augment behavior change attributable to interventions.4,5 Sleep-promoting nudges, namely “anti-nudges” by members of the healthcare team, should help patients to sleep better during their hospitalizations, when sleep is critically important to recovery and health restitution.

References

1. Gathecha E, Rios R, Buenaver LF, Landis R, Howell E, Wright S. Pilot study aiming to support sleep quality and duration during hospitalizations. J Hosp Med. 2016;11(7):467-472. doi:10.1002/jhm.2578. PubMed

2. Thaler R, Sunstein C. Nudge: Improving Decisions About Health, Wealth and Happiness. New Haven, CT: Yale University Press; 2008.

3. Bourdeaux CP, Davies KJ, Thomas MJC, Bewley JS, Gould TH. Using “nudge” principles for order set design: a before and after evaluation of an electronic prescribing template in critical care. BMJ Qual Saf. 2014;23(5):382-388. doi:10.1136/bmjqs-2013-002395 PubMed

4. Hollands GJ, Shemilt I, Marteau TM, et al. Altering micro-environments to change population health behaviour: towards an evidence base for choice architecture interventions. BMC Public Health. 2013;13:1218. doi:10.1186/1471-2458-13-1218. PubMed

5. Arno A, Thomas S. The efficacy of nudge theory strategies in influencing adult dietary behavior: a systematic review and meta-analysis. BMC Public Health. 2016;16:676. doi:10.1186/s12889-016-3272-x. PubMed

 

Article PDF
Issue
Journal of Hospital Medicine - 12(1)
Publications
Topics
Sections
Article PDF
Article PDF

We thank the authors for their comments and thoughts about our recent publication.1 Their suggestion that the incorporation of principles from the “Nudge Theory” might enhance the impact of our sleep intervention and shorten the lag time until patients appreciate the benefits is interesting.2 Our study aimed to assess the effect of a sleep-promoting intervention on sleep quality and duration among hospitalized patients within a quasi-experimental prospective study design. As is the case at the University of Chicago hospital described in Machado’s letter, nocturnal disruptions are also the “default” in order sets in our electronic medical records (EMR). Because the EMR team at our hospital is stretched thin with more requests than it can fulfill, it was not feasible or possible to incorporate any sleep supporting changes when designing the pilot. 

Complementing sleep-promoting procedures for hospitalized patients with “nudge” principles, such as the use of choice architecture with appropriate EMR defaults or even incentives and mappings, seems like a wise recommendation.3 Regular nudges may be helpful for sustaining any multicomponent interventions in healthcare delivery that rely on cooperation by multiple parties. It appears as if evidence is growing that “nudge principles” can augment behavior change attributable to interventions.4,5 Sleep-promoting nudges, namely “anti-nudges” by members of the healthcare team, should help patients to sleep better during their hospitalizations, when sleep is critically important to recovery and health restitution. 

References

1. Gathecha E, Rios R, Buenaver LF, Landis R, Howell E, Wright S. Pilot study aiming to support sleep quality and duration during hospitalizations. J Hosp Med. 2016;11(7):467-472. doi:10.1002/jhm.2578. PubMed

2. Thaler R, Sunstein C. Nudge: Improving Decisions About Health, Wealth and Happiness. New Haven, CT: Yale University Press; 2008.

3. Bourdeaux CP, Davies KJ, Thomas MJC, Bewley JS, Gould TH. Using “nudge” principles for order set design: a before and after evaluation of an electronic prescribing template in critical care. BMJ Qual Saf. 2014;23(5):382-388. doi:10.1136/bmjqs-2013-002395. PubMed

4. Hollands GJ, Shemilt I, Marteau TM, et al. Altering micro-environments to change population health behaviour: towards an evidence base for choice architecture interventions. BMC Public Health. 2013;13:1218. doi:10.1186/1471-2458-13-1218. PubMed

5. Arno A, Thomas S. The efficacy of nudge theory strategies in influencing adult dietary behavior: a systematic review and meta-analysis. BMC Public Health. 2016;16:676. doi:10.1186/s12889-016-3272-x. PubMed

 

Display Headline
The authors reply, “Pilot study aiming to support sleep quality and duration during hospitalizations”
Article Source

© 2017 Society of Hospital Medicine

Citation Override
J. Hosp. Med. 2017 January;12(1):61-62

A tool to assess comportment and communication for hospitalists

Article Type
Changed
Thu, 03/28/2019 - 15:01

 

With the rise of hospital medicine in the United States, the lion’s share of inpatient care is delivered by hospitalists. Both hospitals and hospitalist providers are committed to delivering excellent patient care, but to accomplish this goal, specific feedback is essential.

Patient satisfaction surveys that assess provider performance, such as Press Ganey (PG)1 and Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS),2 do not truly provide feedback at the encounter level with valid attribution, and these data are not sent to providers in a timely manner.

Our team developed the hospital medicine comportment and communication observation tool (HMCCOT) to assess a hospitalist’s performance at the bedside while seeing patients.3 The tool was iteratively revised and validated using multiple methods. An observer watches a hospitalist care for a patient and notes whether or not desirable behaviors are executed. A score is calculated from the HMCCOT variables, and items on which the provider does not score highly can be used for coaching, giving immediate feedback related to comportment and communication with patients.

In the analyses, the HMCCOT scores were moderately correlated with the hospitalists’ PG scores. Encounters that scored above the mean on the HMCCOT took an average of only 13 minutes, giving further credence to the idea that excellent communication and comportment can be rapidly established at the bedside.

Patients’ complaints about doctors often relate to comportment and communication; the grievances are most commonly about feeling rushed, not being heard, and information not being conveyed clearly.4 Patient-centeredness has been shown to improve patient satisfaction as well as clinical outcomes, in part because patients feel like partners in mutually agreed upon treatment plans.5 Many of the components of the HMCCOT are at the heart of patient-centered care. While comportment may not be a frequently used term in patient care, respectful behaviors performed at the opening of any encounter (etiquette-based medicine, which includes introducing oneself to patients and smiling) set the tone for the doctor-patient interaction.

Demonstrating genuine interest in the patient as a person is a core component of excellent patient care. Sir William Osler famously observed, “It is much more important to know what sort of a patient has a disease than what sort of a disease a patient has.”6 A common method of “demonstrating interest in the patient as a person” recorded by the HMCCOT was physicians asking about patients’ personal histories and interests. It is not difficult to fathom how knowing patients’ personal interests and perspectives can help to engage them most effectively in establishing their goals of care and in therapeutic decisions.

Because hospitalists spend only a small proportion of their clinical time in direct patient care at the bedside, they need to make every moment count. The HMCCOT allows for the identification of providers who excel in communication and comportment. Once identified, these exemplars can observe their peers and become the trainers who establish a culture of excellence.

Larger studies will be needed in the future to assess whether interventions that translate into improved comportment and communication among hospitalists will definitively augment patient satisfaction and ameliorate clinical outcomes.
 

1. Press Ganey. Accessed Dec. 15, 2015.

2. HCAHPS. Accessed Feb. 2, 2016.

3. Kotwal S, Khaliq W, Landis R, Wright S. Developing a comportment and communication tool for use in hospital medicine. J Hosp Med. 2016 Aug 13. doi: 10.1002/jhm.2647.

4. Hickson GB, Clayton EW, Entman SS, Miller CS, Githens PB, Whetten-Goldstein K, Sloan FA. Obstetricians’ prior malpractice experience and patients’ satisfaction with care. JAMA. 1994 Nov 23-30;272(20):1583-7.

5. Epstein RM, Street RL. Patient-centered communication in cancer care: promoting healing and reducing suffering. National Cancer Institute, NIH Publication No. 07-6225. Bethesda, MD, 2007.

6. Taylor RB. White Coat Tales: Medicine’s Heroes, Heritage, and Misadventure. New York: Springer; 2007:126.

Susrutha Kotwal, MD, and Scott Wright, MD, are based in the department of medicine, division of hospital medicine, Johns Hopkins Bayview Medical Center and Johns Hopkins University, Baltimore.


Comportment and Communication Score

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Developing a comportment and communication tool for use in hospital medicine

In 2014, there were more than 40,000 hospitalists in the United States, and approximately 20% were employed by academic medical centers.[1] Hospitalist physician groups are committed to delivering excellent patient care. However, the published literature is limited with respect to defining optimal care in hospital medicine.

Patient satisfaction surveys, such as Press Ganey (PG)[2] and Hospital Consumer Assessment of Healthcare Providers and Systems,[3] are being used to assess patients' contentment with the quality of care they receive while hospitalized. The Society of Hospital Medicine, the largest professional medical society representing hospitalists, encourages the use of patient satisfaction surveys to measure hospitalist providers' quality of patient care.[4] There are, however, several problems with the current methods. First, the attribution to specific providers is questionable. Second, patients' recall about the provider may be poor because surveys are sent to patients days after they return home. Third, the patients' recovery and health outcomes are likely to influence their assessment of the doctor. Finally, feedback is known to be most valuable and transformative when it is specific and given in real time. Thus, a tool that provides feedback at the encounter level should be more helpful than one that offers assessment at the level of the admission, particularly when it can also be delivered immediately after the data are collected.

Comportment has been used to describe both the way a person behaves and the way she carries herself (ie, her general manner).[5] Excellent comportment and communication can serve as the foundation for delivering patient-centered care.[6, 7, 8] Patient-centeredness has been shown to improve the patient experience and clinical outcomes, including compliance with therapeutic plans.[9, 10, 11] Respectful behavior, etiquette-based medicine, and effective communication also lay the foundation upon which the therapeutic alliance between a doctor and patient can be built.

The goal of this study was to establish a metric that could comprehensively assess a hospitalist provider's comportment and communication skills during an encounter with a hospitalized patient.

METHODS

Study Design and Setting

An observational study of hospitalist physicians was conducted between June 2013 and December 2013 at 5 hospitals in Maryland and Washington, DC. Two are academic medical centers (Johns Hopkins Hospital and Johns Hopkins Bayview Medical Center [JHBMC]), and the others are community hospitals (Howard County General Hospital [HCGH], Sibley Memorial Hospital [SMC], and Suburban Hospital). These 5 hospitals, across 2 large cities, have distinct cultures and leadership, and each serves a different population.

Subjects

In developing a tool to measure communication and comportment, we needed to observe physician-patient encounters wherein there would be a good deal of variability in performance. During pilot testing, when following a few of the most senior and respected hospitalists, we noted encounters during which they excelled and others where they performed less optimally. Further, when we followed some less-experienced providers, their skills were less developed, and they were uniformly missing most of the behaviors on the tool that were believed to be associated with optimal communication and comportment. Because of this, we decided to purposively sample the strongest clinicians at each of the 5 hospitals in hopes of seeing a range of scores on the tool.

The chiefs of hospital medicine at the 5 hospitals were contacted and asked to identify their most clinically excellent hospitalists, namely those who they thought were most clinically skilled within their groups. Because our goal was to observe the top tier (approximately 20%) of the hospitalists within each group, we asked each chief to name a specific number of physicians (eg, 3 names for 1 group with 15 hospitalists, and 8 from another group with 40 physicians). No precise definition of most clinically excellent hospitalists was provided to the chiefs. It was believed that they were well positioned to select their best clinicians because of both subjective feedback and objective data that flow to them. This postulate may have been corroborated by the fact that each of them efficiently sent a list of their top choices without any questions being asked.

The 29 hospitalists (named by their chiefs) were in turn emailed and invited to participate in the study. All but 3 hospitalists consented to participate in the study; this resulted in a cohort of 26 who would be observed.

Tool Development

A team was assembled to develop the hospital medicine comportment and communication observation tool (HMCCOT). All team members had extensive clinical experience, several had published articles on clinical excellence or had won clinical awards, and all had been teaching clinical skills for many years. The team's development of the HMCCOT was extensively informed by a review of the literature. Two articles that most heavily influenced the HMCCOT's development were Christmas et al.'s paper describing 7 core domains of excellence, 2 of which are intimately linked to communication and comportment,[12] and Kahn's text that delineates behaviors to be performed upon entering the patient's room, termed etiquette-based medicine.[6] The team also considered the work from prior time-motion studies in hospital medicine,[7, 13] which led to the inclusion of temporal measurements during the observations. The tool was also presented at academic conferences in the Division of General Internal Medicine at Johns Hopkins and iteratively revised based on the feedback. Feedback was also sought from people who have spent their entire careers studying physician-patient relationships and who are members of the American Academy on Communication in Healthcare. These methods established content validity evidence for the tool under development. The goal of the HMCCOT was to assess behaviors believed to be associated with optimal comportment and communication in hospital medicine.

The HMCCOT was pilot tested by observing different JHBMC hospitalists' patient encounters, and it was iteratively revised. On multiple occasions, 2 authors/investigators spent time observing JHBMC hospitalists together and compared data capture and levels of agreement across all elements. Then, for formal assessment of inter-rater reliability, 2 authors observed 5 different hospitalists across 25 patient encounters; the kappa coefficient was 0.91 (standard error = 0.04). This step helped to establish internal structure validity evidence for the tool.
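The inter-rater reliability check described above can be illustrated with a short sketch. This is not the study's code, and the rating lists are invented; it simply shows how Cohen's kappa compares two raters' observed agreement with the agreement expected by chance:

```python
# Illustrative sketch (not the study's code): Cohen's kappa for two raters
# who each record whether a behavior was observed (1) or not (0).
# The rating lists below are invented example data.

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two parallel lists of binary ratings."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal frequency of 1s
    p_a, p_b = sum(rater_a) / n, sum(rater_b) / n
    p_chance = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (p_observed - p_chance) / (1 - p_chance)

rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
print(round(cohens_kappa(rater_a, rater_b), 2))  # → 0.78
```

By conventional benchmarks, the 0.91 coefficient reported for the 25 jointly observed encounters indicates near-perfect agreement.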

The initial version of the HMCCOT contained 36 elements, and it was organized sequentially to allow the observer to document behaviors in the order that they were likely to occur so as to facilitate the process and to minimize oversight. A few examples of the elements were as follows: open‐ended versus a close‐ended statement at the beginning of the encounter, hospitalist introduces himself/herself, and whether the provider smiles at any point during the patient encounter.

Data Collection

One author scheduled a time to observe each hospitalist physician during their routine clinical care of patients when they were not working with medical learners. Hospitalists were naturally aware that they were being observed but were not aware of the specific data elements or behaviors that were being recorded.

The study was approved by the institutional review board at the Johns Hopkins University School of Medicine, and by each of the research review committees at HCGH, SMC, and Suburban hospitals.

Data Analysis

After data collection, all data were deidentified so that the researchers were blinded to the identities of the physicians. Respondent characteristics are presented as proportions and means. Unpaired t tests and chi-squared (χ2) tests were used to compare demographic information, stratified by mean HMCCOT score. The survey data were analyzed using Stata statistical software version 12.1 (StataCorp LP, College Station, TX).

Further Validation of the HMCCOT

Upon reviewing the distribution of data after observing the 26 physicians with their patients, we excluded 13 variables from the initial version of the tool that lacked discriminatory value (eg, 100% or 0% of physicians performed the observed behavior during the encounters); this left 23 variables that were judged to be most clinically relevant in the final version of the HMCCOT. Two examples of the excluded variables were: uses technology/literature to educate patients (not witnessed in any encounter), and obeys posted contact precautions (done uniformly by all). The HMCCOT score represents the proportion of observed behaviors (out of the 23 behaviors) and was computed for each hospitalist for every patient encounter. Finally, "relation to other variables" validity evidence would be established by comparing the physicians' mean HMCCOT scores with their PG scores from the same time period to evaluate the correlation between the 2 scores. This association was assessed using Pearson correlations.
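The scoring and correlation steps just described can be sketched in a few lines. This is an assumed interpretation, not the study's code, and every number below is invented: each encounter's HMCCOT score is the fraction of the 23 behaviors observed (expressed as a percentage), and provider-level mean scores are then correlated with PG scores.

```python
# Illustrative sketch (not the study's code); all numbers are invented.
from math import sqrt

N_BEHAVIORS = 23  # final number of scored HMCCOT variables

def hmccot_score(n_observed):
    """Percentage of the 23 scored behaviors performed in one encounter."""
    return 100 * n_observed / N_BEHAVIORS

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# One hypothetical provider's behaviors observed across 4 encounters:
counts = [14, 16, 12, 15]
mean_score = sum(hmccot_score(c) for c in counts) / len(counts)
print(round(mean_score, 1))  # → 62.0

# Hypothetical provider-level mean HMCCOT vs Press Ganey scores:
hmccot = [52.0, 58.0, 61.0, 65.0, 70.0]
press_ganey = [20.0, 35.0, 30.0, 55.0, 60.0]
print(round(pearson_r(hmccot, press_ganey), 2))  # → 0.94
```

Expressing the score as a percentage of observed behaviors keeps encounters comparable even when a behavior is inapplicable and the denominator must be adjusted.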

RESULTS

The average clinical experience of the 26 hospitalist physicians studied was 6 years (Table 1). Their mean age was 38 years, 13 (50%) were female, and 16 (62%) were of nonwhite race. Fourteen hospitalists (54%) worked at 1 of the nonacademic hospitals. In terms of clinical workload, most physicians (n = 17, 65%) devoted more than 70% of their time working in direct patient care. Mean time spent observing each physician was 280 minutes. During this time, the 26 physicians were observed for 181 separate clinical encounters; 54% of these patients were new encounters, patients who were not previously known to the physician. The average time each physician spent in a patient room was 10.8 minutes. Mean number of observed patient encounters per hospitalist was 7.

Characteristics of the Hospitalist Physicians Based on Their Hospital Medicine Comportment and Communication Observation Tool Score
Total Study Population, n = 26 HMCCOT Score ≤60, n = 14 HMCCOT Score >60, n = 12 P Value*
  • NOTE: Abbreviations: HCGH, Howard County General Hospital; HMCCOT, Hospital Medicine Comportment and Communication Observation Tool; JHBMC, Johns Hopkins Bayview Medical Center; JHH, Johns Hopkins Hospital; SD, standard deviation; SMC, Sibley Memorial Hospital. *χ2 with Yates‐corrected P value where at least 20% of frequencies were <5. Unpaired t test statistic

Age, mean (SD) 38 (5.6) 37.9 (5.6) 38.1 (5.7) 0.95
Female, n (%) 13 (50) 6 (43) 7 (58) 0.43
Race, n (%)
Caucasian 10 (38) 5 (36) 5 (41) 0.31
Asian 13 (50) 8 (57) 5 (41)
African/African American 2 (8) 0 (0) 2 (17)
Other 1 (4) 1 (7) 0 (0)
Clinical experience >6 years, n (%) 12 (46) 6 (43) 6 (50) 0.72
Clinical workload >70% 17 (65) 10 (71) 7 (58) 0.48
Academic hospitalist, n (%) 12 (46) 5 (36) 7 (58) 0.25
Hospital 0.47
JHBMC 8 (31) 3 (21.4) 5 (41)
JHH 4 (15) 2 (14.3) 2 (17)
HCGH 5 (19) 3 (21.4) 2 (17)
Suburban 6 (23) 3 (21.4) 3 (25)
SMC 3 (12) 3 (21.4) 0 (0)
Minutes spent observing hospitalist per shift, mean (SD) 280 (104.5) 280.4 (115.5) 281.4 (95.3) 0.98
Average time spent per patient encounter in minutes, mean (SD) 10.8 (8.9) 8.7 (9.1) 13 (8.1) 0.001
Proportion of observed patients who were new to provider, % 97 (53.5) 37 (39.7) 60 (68.1) 0.001

The distribution of HMCCOT scores was not statistically significantly different when analyzed by age, gender, race, amount of clinical experience, clinical workload of the hospitalist, hospital, or time spent observing the hospitalist (all P > 0.05). Encounters scoring above the mean were more often new patient encounters than were those scoring below it (68.1% vs 39.7%, P = 0.001), and they were also longer (13 minutes vs 8.7 minutes, P = 0.001).

The mean HMCCOT score was 61 (standard deviation [SD] = 10.6), and it was normally distributed (Figure 1). Table 2 shows the data for the 23 behaviors that were objectively assessed as part of the HMCCOT for the 181 patient encounters. The most frequently observed behaviors were physicians washing hands after leaving the patient's room in 170 (94%) of the encounters and smiling (83%). The behaviors that were observed with the least regularity were using an empathic statement (26% of encounters), and employing teach‐back (13% of encounters). A common method of demonstrating interest in the patient as a person, seen in 41% of encounters, involved physicians asking about patients' personal histories and their interests.

Objective and Subjective Data Making Up the Hospital Medicine Comportment and Communication Observation Tool Score Assessed While Observing 26 Hospitalist Physicians
Variables All Visits Combined, n = 181 HMCCOT Score <60, n = 93 HMCCOT Score >60, n = 88 P Value*
  • NOTE: Abbreviations: HMCCOT, Hospital Medicine Comportment and Communication Observation Tool. *χ2 with Yates‐corrected P value where at least 20% of frequencies were <5.

Objective observations, n (%)
Washes hands after leaving room 170 (94) 83 (89) 87 (99) 0.007
Discusses plan for the day 163 (91) 78 (84) 85 (99) <0.001
Does not interrupt the patient 159 (88) 79 (85) 80 (91) 0.21
Smiles 149 (83) 71 (77) 78 (89) 0.04
Washes hands before entering 139 (77) 64 (69) 75 (85) 0.009
Begins with open‐ended question 134 (77) 68 (76) 66 (78) 0.74
Knocks before entering the room 127 (76) 57 (65) 70 (89) <0.001
Introduces him/herself to the patient 122 (67) 45 (48) 77 (88) <0.001
Explains his/her role 120 (66) 44 (47) 76 (86) <0.001
Asks about pain 110 (61) 45 (49) 65 (74) 0.001
Asks permission prior to examining 106 (61) 43 (50) 63 (72) 0.002
Uncovers body area for the physical exam 100 (57) 34 (38) 66 (77) <0.001
Discusses discharge plan 99 (55) 38 (41) 61 (71) <0.001
Sits down in the patient room 74 (41) 24 (26) 50 (57) <0.001
Asks about patient's feelings 58 (33) 17 (19) 41 (47) <0.001
Shakes hands with the patient 57 (32) 17 (18) 40 (46) <0.001
Uses teach‐back 24 (13) 4 (4.3) 20 (24) <0.001
Subjective observations, n (%)
Avoids medical jargon 160 (89) 85 (91) 83 (95) 0.28
Demonstrates interest in patient as a person 72 (41) 16 (18) 56 (66) <0.001
Touches appropriately 62 (34) 21 (23) 41 (47) 0.001
Shows sensitivity to patient modesty 57 (93) 15 (79) 42 (100) 0.002
Engages in nonmedical conversation 54 (30) 10 (11) 44 (51) <0.001
Uses empathic statement 47 (26) 9 (10) 38 (43) <0.001
Figure 1
Distribution of mean hospital medicine comportment and communication tool (HMCCOT) scores for the 26 hospitalist providers who were observed.

The average composite PG scores for the physician sample was 38.95 (SD=39.64). A moderate correlation was found between the HMCCOT score and PG score (adjusted Pearson correlation: 0.45, P = 0.047).

DISCUSSION

In this study, we followed 26 hospitalist physicians during routine clinical care, and we focused intently on their communication and their comportment with patients at the bedside. Even among clinically respected hospitalists, the results reveal that there is wide variability in comportment and communication practices and behaviors at the bedside. The physicians' HMCCOT scores were associated with their PG scores. These findings suggest that improved bedside communication and comportment with patients might translate into enhanced patient satisfaction.

This is the first study that honed in on hospitalist communication and comportment. With validity evidence established for the HMCCOT, some may elect to more explicitly perform these behaviors themselves, and others may wish to watch other hospitalists to give them feedback that is tied to specific behaviors. Beginning with the basics, the hospitalists we studied introduced themselves to their patients at the initial encounter 78% of the time, less frequently than is done by primary care clinicians (89%) but more consistently than do emergency department providers (64%).[7] Other variables that stood out in the HMCCOT was that teach‐back was employed in only 13% of the encounters. Previous studies have shown that teach‐back corroborates patient comprehension and can be used to engage patients (and caregivers) in realistic goal setting and optimal health service utilization.[14] Further, patients who clearly understand their postdischarge plan are 30% less likely to be readmitted or visit the emergency department.[14] The data for our group have helped us to see areas of strengths, such as hand washing, where we are above compliance rates across hospitals in the United States,[15] as well as those matters that represent opportunities for improvement such as connecting more deeply with our patients.

Tackett et al. have looked at encounter length and its association with performance of etiquette‐based medicine behaviors.[7] Similar to their study, we found a positive correlation between spending more time with patients and higher HMCCOT scores. We also found that HMCCOT scores were higher when providers were caring for new patients. Patients' complaints about doctors often relate to feeling rushed, that their physicians did not listen to them, or that information was not conveyed in a clear manner.[16] Such challenges in physicianpatient communication are ubiquitous across clinical settings.[16] When successfully achieved, patient‐centered communication has been associated with improved clinical outcomes, including adherence to recommended treatment and better self‐management of chronic disease.[17, 18, 19, 20, 21, 22, 23, 24, 25, 26] Many of the components of the HMCCOT described in this article are at the heart of patient‐centered care.

Several limitations of the study should be considered. First, physicians may have behaved differently while they were being observed, which is known as the Hawthorne effect. We observed them for many hours and across multiple patient encounters, and the physicians were not aware of the specific types of data that we were collecting. These factors may have limited the biases along such lines. Second, there may be elements of optimal comportment and communication that were not captured by the HMCCOT. Hopefully, there are not big gaps, as we used multiple methods and an iterative process in the refinement of the HMCCOT metric. Third, one investigator did all of the observing, and it is possible that he might have missed certain behaviors. Through extensive pilot testing and comparisons with other raters, the observer became very skilled and facile with such data collection and the tool. Fourth, we did not survey the same patients that were cared for to compare their perspectives to the HMCCOT scores following the clinical encounters. For patient perspectives, we relied only on PG scores. Fifth, quality of care is a broad and multidimensional construct. The HMCCOT focuses exclusively on hospitalists' comportment and communication at the bedside; therefore, it does not comprehensively assess care quality. Sixth, with our goal to optimally validate the HMCCOT, we tested it on the top tier of hospitalists within each group. We may have observed different results had we randomly selected hospitalists from each hospital or had we conducted the study at hospitals in other geographic regions. Finally, all of the doctors observed worked at hospitals in the Mid‐Atlantic region. However, these five distinct hospitals each have their own cultures, and they are led by different administrators. We purposively chose to sample both academic as well as community settings.

In conclusion, this study reports on the development of a comportment and communication tool that was established and validated by following clinically excellent hospitalists at the bedside. Future studies are necessary to determine whether hospitalists of all levels of experience and clinical skill can improve when given data and feedback using the HMCCOT. Larger studies will then be needed to assess whether enhancing comportment and communication can truly improve patient satisfaction and clinical outcomes in the hospital.

Disclosures: Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. Susrutha Kotwal, MD, and Waseem Khaliq, MD, contributed equally to this work. The authors report no conflicts of interest.

Files
References
  1. 2014 state of hospital medicine report. Society of Hospital Medicine website. Available at: http://www.hospitalmedicine.org/Web/Practice_Management/State_of_HM_Surveys/2014.aspx. Accessed January 10, 2015.
  2. Press Ganey website. Available at: http://www.pressganey.com/home. Accessed December 15, 2015.
  3. Hospital Consumer Assessment of Healthcare Providers and Systems website. Available at: http://www.hcahpsonline.org/home.aspx. Accessed February 2, 2016.
  4. Membership committee guidelines for hospitalists patient satisfaction surveys. Society of Hospital Medicine website. Available at: http://www.hospitalmedicine.org. Accessed February 2, 2016.
  5. Definition of comportment. Available at: http://www.vocabulary.com/dictionary/comportment. Accessed December 15, 2015.
  6. Kahn MW. Etiquette‐based medicine. N Engl J Med. 2008;358(19):1988-1989.
  7. Tackett S, Tad‐y D, Rios R, Kisuule F, Wright S. Appraising the practice of etiquette‐based medicine in the inpatient setting. J Gen Intern Med. 2013;28(7):908-913.
  8. Levinson W, Lesser CS, Epstein RM. Developing physician communication skills for patient‐centered care. Health Aff (Millwood). 2010;29(7):1310-1318.
  9. Auerbach SM. The impact on patient health outcomes of interventions targeting the patient–physician relationship. Patient. 2009;2(2):77-84.
  10. Griffin SJ, Kinmonth AL, Veltman MW, Gillard S, Grant J, Stewart M. Effect on health‐related outcomes of interventions to alter the interaction between patients and practitioners: a systematic review of trials. Ann Fam Med. 2004;2(6):595-608.
  11. Street RL, Makoul G, Arora NK, Epstein RM. How does communication heal? Pathways linking clinician–patient communication to health outcomes. Patient Educ Couns. 2009;74(3):295-301.
  12. Christmas C, Kravet SJ, Durso SC, Wright SM. Clinical excellence in academia: perspectives from masterful academic clinicians. Mayo Clin Proc. 2008;83(9):989-994.
  13. Tipping MD, Forth VE, O'Leary KJ, et al. Where did the day go?—a time‐motion study of hospitalists. J Hosp Med. 2010;5(6):323-328.
  14. Peter D, Robinson P, Jordan M, et al. Reducing readmissions using teach‐back: enhancing patient and family education. J Nurs Adm. 2015;45(1):35-42.
  15. McGuckin M, Waterman R, Govednik J. Hand hygiene compliance rates in the United States—a one‐year multicenter collaboration using product/volume usage measurement and feedback. Am J Med Qual. 2009;24(3):205-213.
  16. Hickson GB, Clayton EW, Entman SS, et al. Obstetricians' prior malpractice experience and patients' satisfaction with care. JAMA. 1994;272(20):1583-1587.
  17. Epstein RM, Street RL. Patient‐Centered Communication in Cancer Care: Promoting Healing and Reducing Suffering. NIH publication no. 07–6225. Bethesda, MD: National Cancer Institute; 2007.
  18. Arora NK. Interacting with cancer patients: the significance of physicians' communication behavior. Soc Sci Med. 2003;57(5):791-806.
  19. Greenfield S, Kaplan S, Ware JE. Expanding patient involvement in care: effects on patient outcomes. Ann Intern Med. 1985;102(4):520-528.
  20. Mead N, Bower P. Measuring patient‐centeredness: a comparison of three observation‐based instruments. Patient Educ Couns. 2000;39(1):71-80.
  21. Ong LM, Haes JC, Hoos AM, Lammes FB. Doctor‐patient communication: a review of the literature. Soc Sci Med. 1995;40(7):903-918.
  22. Safran DG, Taira DA, Rogers WH, Kosinski M, Ware JE, Tarlov AR. Linking primary care performance to outcomes of care. J Fam Pract. 1998;47(3):213-220.
  23. Stewart M, Brown JB, Donner A, et al. The impact of patient‐centered care on outcomes. J Fam Pract. 2000;49(9):796-804.
  24. Epstein RM, Franks P, Fiscella K, et al. Measuring patient‐centered communication in patient‐physician consultations: theoretical and practical issues. Soc Sci Med. 2005;61(7):1516-1528.
  25. Mead N, Bower P. Patient‐centered consultations and outcomes in primary care: a review of the literature. Patient Educ Couns. 2002;48(1):51-61.
  26. Bredart A, Bouleuc C, Dolbeault S. Doctor‐patient communication and satisfaction with care in oncology. Curr Opin Oncol. 2005;17(4):351-354.
Issue: Journal of Hospital Medicine - 11(12)
Pages: 853-858

In 2014, there were more than 40,000 hospitalists in the United States, and approximately 20% were employed by academic medical centers.[1] Hospitalist physician groups are committed to delivering excellent patient care. However, the published literature is limited with respect to defining optimal care in hospital medicine.

Patient satisfaction surveys, such as Press Ganey (PG)[2] and Hospital Consumer Assessment of Healthcare Providers and Systems,[3] are being used to assess patients' contentment with the quality of care they receive while hospitalized. The Society of Hospital Medicine, the largest professional medical society representing hospitalists, encourages the use of patient satisfaction surveys to measure hospitalist providers' quality of patient care.[4] There are, however, several problems with the current methods. First, the attribution to specific providers is questionable. Second, patients' recall of the provider may be poor because surveys are sent days after they return home. Third, the patients' recovery and health outcomes are likely to influence their assessment of the doctor. Finally, feedback is known to be most valuable and transformative when it is specific and given in real time. Thus, a tool that is able to provide feedback at the encounter level should be more helpful than a tool that offers assessment at the level of the admission, particularly when it can also be delivered immediately after the data are collected.

Comportment has been used to describe both the way a person behaves and also the way she carries herself (ie, her general manner).[5] Excellent comportment and communication can serve as the foundation for delivering patient‐centered care.[6, 7, 8] Patient centeredness has been shown to improve the patient experience and clinical outcomes, including compliance with therapeutic plans.[9, 10, 11] Respectful behavior, etiquette‐based medicine, and effective communication also lay the foundation upon which the therapeutic alliance between a doctor and patient can be built.

The goal of this study was to establish a metric that could comprehensively assess a hospitalist provider's comportment and communication skills during an encounter with a hospitalized patient.

METHODS

Study Design and Setting

An observational study of hospitalist physicians was conducted between June 2013 and December 2013 at 5 hospitals in Maryland and Washington, DC. Two are academic medical centers (Johns Hopkins Hospital and Johns Hopkins Bayview Medical Center [JHBMC]), and the others are community hospitals (Howard County General Hospital [HCGH], Sibley Memorial Hospital [SMC], and Suburban Hospital). These 5 hospitals, across 2 large cities, have distinct cultures and leadership, and each serves a different population.

Subjects

In developing a tool to measure communication and comportment, we needed to observe physician–patient encounters in which there would be a good deal of variability in performance. During pilot testing, when following a few of the most senior and respected hospitalists, we noted encounters during which they excelled and others in which they performed less optimally. Further, when we followed some less-experienced providers, their skills were less developed, and they uniformly missed most of the behaviors on the tool believed to be associated with optimal communication and comportment. Because of this, we decided to purposively sample the strongest clinicians at each of the 5 hospitals in hopes of seeing a range of scores on the tool.

The chiefs of hospital medicine at the 5 hospitals were contacted and asked to identify their most clinically excellent hospitalists, namely those whom they thought were most clinically skilled within their groups. Because our goal was to observe the top tier (approximately 20%) of the hospitalists within each group, we asked each chief to name a specific number of physicians (eg, 3 names from 1 group with 15 hospitalists, and 8 from another group with 40 physicians). No precise definition of "most clinically excellent hospitalists" was provided to the chiefs. It was believed that they were well positioned to select their best clinicians because of both the subjective feedback and the objective data that flow to them. This postulate may have been corroborated by the fact that each of them promptly sent a list of their top choices without asking any questions.

The 29 hospitalists (named by their chiefs) were in turn emailed and invited to participate in the study. All but 3 hospitalists consented to participate in the study; this resulted in a cohort of 26 who would be observed.

Tool Development

A team was assembled to develop the hospital medicine comportment and communication observation tool (HMCCOT). All team members had extensive clinical experience, several had published articles on clinical excellence or had won clinical awards, and all had taught clinical skills for many years. The team's development of the HMCCOT was extensively informed by a review of the literature. Two articles that most heavily influenced the HMCCOT's development were Christmas et al.'s paper describing 7 core domains of clinical excellence, 2 of which are intimately linked to communication and comportment,[12] and Kahn's text delineating behaviors to be performed upon entering the patient's room, termed etiquette‐based medicine.[6] The team also considered the work from prior time–motion studies in hospital medicine,[7, 13] which led to the inclusion of temporal measurements during the observations. The tool was also presented at academic conferences in the Division of General Internal Medicine at Johns Hopkins and iteratively revised based on the feedback. Feedback was also sought from members of the American Academy on Communication in Healthcare, who have spent their careers studying physician–patient relationships. These methods established content validity evidence for the tool under development. The goal of the HMCCOT was to assess behaviors believed to be associated with optimal comportment and communication in hospital medicine.

The HMCCOT was pilot tested by observing JHBMC hospitalists during patient encounters, and it was iteratively revised. On multiple occasions, 2 investigators observed JHBMC hospitalists together and compared data capture and levels of agreement across all elements. Then, for a formal assessment of inter-rater reliability, 2 investigators observed 5 different hospitalists across 25 patient encounters; the agreement coefficient was 0.91 (standard error = 0.04). This step helped to establish internal structure validity evidence for the tool.
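The inter-rater check above can be illustrated with a chance-corrected agreement statistic. The article reports a coefficient of 0.91 without naming its exact form; assuming a Cohen's kappa over paired yes/no behavior codings (a common choice for binary observational data), a minimal sketch with hypothetical codings:

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' binary (0/1) codings
    of the same encounters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement derived from each rater's marginal rate of coding "1"
    p_a = sum(rater_a) / n
    p_b = sum(rater_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of one behavior across 10 observed encounters
a = [1, 1, 1, 0, 0, 1, 1, 0, 1, 1]
b = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]
kappa = cohens_kappa(a, b)  # ≈ 0.74
```

A kappa of 1.0 indicates perfect agreement and 0 indicates agreement no better than chance, so the reported 0.91 would reflect near-perfect inter-rater reliability.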

The initial version of the HMCCOT contained 36 elements, organized sequentially so that the observer could document behaviors in the order in which they were likely to occur, facilitating the process and minimizing oversight. A few examples of the elements: whether the encounter begins with an open-ended or a close-ended statement, whether the hospitalist introduces himself/herself, and whether the provider smiles at any point during the patient encounter.

Data Collection

One author scheduled a time to observe each hospitalist physician during their routine clinical care of patients when they were not working with medical learners. Hospitalists were naturally aware that they were being observed but were not aware of the specific data elements or behaviors that were being recorded.

The study was approved by the institutional review board at the Johns Hopkins University School of Medicine, and by each of the research review committees at HCGH, SMC, and Suburban hospitals.

Data Analysis

After data collection, all data were deidentified so that the researchers were blinded to the identities of the physicians. Respondent characteristics are presented as proportions and means. Unpaired t tests and χ² tests were used to compare demographic information, stratified by mean HMCCOT score. The data were analyzed using Stata statistical software version 12.1 (StataCorp LP, College Station, TX).
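For the 2×2 comparisons reported in the tables, the Yates-corrected χ² statistic mentioned in the table footnotes can be computed directly from the cell counts. A sketch, assuming a table [[a, b], [c, d]]; the counts below are illustrative, not taken from the study:

```python
def yates_chi2(a, b, c, d):
    """Chi-squared statistic with Yates' continuity correction for the
    2x2 table [[a, b], [c, d]] (1 degree of freedom)."""
    n = a + b + c + d
    # Yates: shrink |ad - bc| by n/2 before squaring (floored at 0)
    diff = max(abs(a * d - b * c) - n / 2, 0)
    return n * diff ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Illustrative table: behavior present/absent in two score groups
stat = yates_chi2(30, 10, 10, 30)  # → 18.05
```

The corresponding P value comes from the χ² distribution with 1 degree of freedom (eg, via scipy.stats.chi2.sf or a Stata tabulation with the chi2 option).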

Further Validation of the HMCCOT

Upon reviewing the distribution of data after observing the 26 physicians with their patients, we excluded 13 variables from the initial version of the tool that lacked discriminatory value (eg, 100% or 0% of physicians performed the observed behavior during the encounters); this left 23 variables that were judged to be most clinically relevant in the final version of the HMCCOT. Two examples of the excluded variables were: uses technology/literature to educate patients (not witnessed in any encounter), and obeys posted contact precautions (done uniformly by all). The HMCCOT score represents the proportion of the 23 behaviors that were observed; it was computed for each hospitalist for every patient encounter. Finally, "relations to other variables" validity evidence was established by comparing the physicians' mean HMCCOT scores to their PG scores from the same time period. This association was assessed using Pearson correlations.
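The scoring and correlation steps above can be sketched as follows. The 0–100 percentage scale is an assumption consistent with the reported mean score of 61, and all data shown are hypothetical:

```python
def hmccot_score(behaviors_observed, total_behaviors=23):
    """Encounter-level HMCCOT score: the percentage of the 23 scored
    behaviors that were observed (coded 1) during the encounter."""
    return 100 * sum(behaviors_observed) / total_behaviors

def pearson_r(xs, ys):
    """Pearson correlation, eg, between physicians' mean HMCCOT scores
    and their composite PG scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# One hypothetical encounter in which 14 of the 23 behaviors were observed
score = hmccot_score([1] * 14 + [0] * 9)  # ≈ 60.9
```

In the study's design, each physician's encounter-level scores would be averaged before correlating with the PG score, since PG data exist only at the physician level.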

RESULTS

The average clinical experience of the 26 hospitalist physicians studied was 6 years (Table 1). Their mean age was 38 years, 13 (50%) were female, and 16 (62%) were of nonwhite race. Fourteen hospitalists (54%) worked at 1 of the nonacademic hospitals. In terms of clinical workload, most physicians (n = 17, 65%) devoted more than 70% of their time working in direct patient care. Mean time spent observing each physician was 280 minutes. During this time, the 26 physicians were observed for 181 separate clinical encounters; 54% of these patients were new encounters, patients who were not previously known to the physician. The average time each physician spent in a patient room was 10.8 minutes. Mean number of observed patient encounters per hospitalist was 7.

Characteristics of the Hospitalist Physicians Based on Their Hospital Medicine Comportment and Communication Observation Tool Score
Total Study Population, n = 26 HMCCOT Score ≤60, n = 14 HMCCOT Score >60, n = 12 P Value*
  • NOTE: Abbreviations: HCGH, Howard County General Hospital; HMCCOT, Hospital Medicine Comportment and Communication Observation Tool; JHBMC, Johns Hopkins Bayview Medical Center; JHH, Johns Hopkins Hospital; SD, standard deviation; SMC, Sibley Memorial Hospital. *χ² with Yates‐corrected P value where at least 20% of frequencies were <5; unpaired t test statistic for continuous variables.

Age, mean (SD) 38 (5.6) 37.9 (5.6) 38.1 (5.7) 0.95
Female, n (%) 13 (50) 6 (43) 7 (58) 0.43
Race, n (%)
Caucasian 10 (38) 5 (36) 5 (41) 0.31
Asian 13 (50) 8 (57) 5 (41)
African/African American 2 (8) 0 (0) 2 (17)
Other 1 (4) 1 (7) 0 (0)
Clinical experience >6 years, n (%) 12 (46) 6 (43) 6 (50) 0.72
Clinical workload >70% 17 (65) 10 (71) 7 (58) 0.48
Academic hospitalist, n (%) 12 (46) 5 (36) 7 (58) 0.25
Hospital 0.47
JHBMC 8 (31) 3 (21.4) 5 (41)
JHH 4 (15) 2 (14.3) 2 (17)
HCGH 5 (19) 3 (21.4) 2 (17)
Suburban 6 (23) 3 (21.4) 3 (25)
SMC 3 (12) 3 (21.4) 0 (0)
Minutes spent observing hospitalist per shift, mean (SD) 280 (104.5) 280.4 (115.5) 281.4 (95.3) 0.98
Average time spent per patient encounter in minutes, mean (SD) 10.8 (8.9) 8.7 (9.1) 13 (8.1) 0.001
Proportion of observed patients who were new to provider, % 97 (53.5) 37 (39.7) 60 (68.1) 0.001

The distribution of HMCCOT scores was not statistically significantly different when analyzed by age, gender, race, amount of clinical experience, clinical workload of the hospitalist, hospital, or time spent observing the hospitalist (all P > 0.05). The distribution of HMCCOT scores was statistically different in new patient encounters compared to follow-ups (68.1% vs 39.7%, P = 0.001). Encounters that generated HMCCOT scores above versus below the mean were also longer (13 minutes vs 8.7 minutes, P = 0.001).

The mean HMCCOT score was 61 (standard deviation [SD] = 10.6), and it was normally distributed (Figure 1). Table 2 shows the data for the 23 behaviors that were objectively assessed as part of the HMCCOT for the 181 patient encounters. The most frequently observed behaviors were physicians washing hands after leaving the patient's room in 170 (94%) of the encounters and smiling (83%). The behaviors that were observed with the least regularity were using an empathic statement (26% of encounters), and employing teach‐back (13% of encounters). A common method of demonstrating interest in the patient as a person, seen in 41% of encounters, involved physicians asking about patients' personal histories and their interests.

Objective and Subjective Data Making Up the Hospital Medicine Comportment and Communication Observation Tool Score Assessed While Observing 26 Hospitalist Physicians
Variables All Visits Combined, n = 181 HMCCOT Score <60, n = 93 HMCCOT Score >60, n = 88 P Value*
  • NOTE: Abbreviations: HMCCOT, Hospital Medicine Comportment and Communication Observation Tool. *χ² with Yates‐corrected P value where at least 20% of frequencies were <5.

Objective observations, n (%)
Washes hands after leaving room 170 (94) 83 (89) 87 (99) 0.007
Discusses plan for the day 163 (91) 78 (84) 85 (99) <0.001
Does not interrupt the patient 159 (88) 79 (85) 80 (91) 0.21
Smiles 149 (83) 71 (77) 78 (89) 0.04
Washes hands before entering 139 (77) 64 (69) 75 (85) 0.009
Begins with open‐ended question 134 (77) 68 (76) 66 (78) 0.74
Knocks before entering the room 127 (76) 57 (65) 70 (89) <0.001
Introduces him/herself to the patient 122 (67) 45 (48) 77 (88) <0.001
Explains his/her role 120 (66) 44 (47) 76 (86) <0.001
Asks about pain 110 (61) 45 (49) 65 (74) 0.001
Asks permission prior to examining 106 (61) 43 (50) 63 (72) 0.002
Uncovers body area for the physical exam 100 (57) 34 (38) 66 (77) <0.001
Discusses discharge plan 99 (55) 38 (41) 61 (71) <0.001
Sits down in the patient room 74 (41) 24 (26) 50 (57) <0.001
Asks about patient's feelings 58 (33) 17 (19) 41 (47) <0.001
Shakes hands with the patient 57 (32) 17 (18) 40 (46) <0.001
Uses teach‐back 24 (13) 4 (4.3) 20 (24) <0.001
Subjective observations, n (%)
Avoids medical jargon 160 (89) 85 (91) 83 (95) 0.28
Demonstrates interest in patient as a person 72 (41) 16 (18) 56 (66) <0.001
Touches appropriately 62 (34) 21 (23) 41 (47) 0.001
Shows sensitivity to patient modesty 57 (93) 15 (79) 42 (100) 0.002
Engages in nonmedical conversation 54 (30) 10 (11) 44 (51) <0.001
Uses empathic statement 47 (26) 9 (10) 38 (43) <0.001
Figure 1
Distribution of mean hospital medicine comportment and communication tool (HMCCOT) scores for the 26 hospitalist providers who were observed.

The average composite PG score for the physician sample was 38.95 (SD = 39.64). A moderate correlation was found between the HMCCOT score and PG score (adjusted Pearson correlation: 0.45, P = 0.047).

DISCUSSION

In this study, we followed 26 hospitalist physicians during routine clinical care, and we focused intently on their communication and their comportment with patients at the bedside. Even among clinically respected hospitalists, the results reveal that there is wide variability in comportment and communication practices and behaviors at the bedside. The physicians' HMCCOT scores were associated with their PG scores. These findings suggest that improved bedside communication and comportment with patients might translate into enhanced patient satisfaction.

This is the first study to home in on hospitalist communication and comportment. With validity evidence established for the HMCCOT, some hospitalists may elect to perform these behaviors more explicitly themselves, and others may wish to observe colleagues in order to give them feedback tied to specific behaviors. Beginning with the basics, the hospitalists we studied introduced themselves to their patients at the initial encounter 78% of the time, less frequently than primary care clinicians (89%) but more consistently than emergency department providers (64%).[7] Another variable that stood out in the HMCCOT was that teach‐back was employed in only 13% of the encounters. Previous studies have shown that teach‐back confirms patient comprehension and can be used to engage patients (and caregivers) in realistic goal setting and optimal health service utilization.[14] Further, patients who clearly understand their postdischarge plan are 30% less likely to be readmitted or to visit the emergency department.[14] The data for our group have helped us to see areas of strength, such as hand washing, where we exceed compliance rates across hospitals in the United States,[15] as well as matters that represent opportunities for improvement, such as connecting more deeply with our patients.

Tackett et al. have looked at encounter length and its association with the performance of etiquette‐based medicine behaviors.[7] Similar to their study, we found a positive correlation between spending more time with patients and higher HMCCOT scores. We also found that HMCCOT scores were higher when providers were caring for new patients. Patients' complaints about doctors often relate to feeling rushed, to physicians not listening to them, or to information not being conveyed in a clear manner.[16] Such challenges in physician–patient communication are ubiquitous across clinical settings.[16] When successfully achieved, patient‐centered communication has been associated with improved clinical outcomes, including adherence to recommended treatment and better self‐management of chronic disease.[17, 18, 19, 20, 21, 22, 23, 24, 25, 26] Many of the components of the HMCCOT described in this article are at the heart of patient‐centered care.

Several limitations of the study should be considered. First, physicians may have behaved differently while they were being observed (the Hawthorne effect). However, we observed them for many hours and across multiple patient encounters, and the physicians were not aware of the specific types of data that we were collecting; these factors may have mitigated this bias. Second, there may be elements of optimal comportment and communication that were not captured by the HMCCOT. We believe any gaps are small, as we used multiple methods and an iterative process in refining the HMCCOT metric. Third, one investigator did all of the observing, and it is possible that he missed certain behaviors. Through extensive pilot testing and comparisons with other raters, however, the observer became highly skilled with the tool and the data collection. Fourth, we did not survey the patients who were cared for during the observed encounters to compare their perspectives to the HMCCOT scores; for patient perspectives, we relied only on PG scores. Fifth, quality of care is a broad and multidimensional construct. The HMCCOT focuses exclusively on hospitalists' comportment and communication at the bedside; therefore, it does not comprehensively assess care quality. Sixth, with our goal to optimally validate the HMCCOT, we tested it on the top tier of hospitalists within each group. We may have observed different results had we randomly selected hospitalists from each hospital or had we conducted the study at hospitals in other geographic regions. Finally, all of the doctors observed worked at hospitals in the Mid‐Atlantic region; however, these 5 distinct hospitals each have their own cultures and are led by different administrators, and we purposively sampled both academic and community settings.

In conclusion, this study reports on the development of a comportment and communication tool that was established and validated by following clinically excellent hospitalists at the bedside. Future studies are necessary to determine whether hospitalists of all levels of experience and clinical skill can improve when given data and feedback using the HMCCOT. Larger studies will then be needed to assess whether enhancing comportment and communication can truly improve patient satisfaction and clinical outcomes in the hospital.

Disclosures: Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. Susrutha Kotwal, MD, and Waseem Khaliq, MD, contributed equally to this work. The authors report no conflicts of interest.

In 2014, there were more than 40,000 hospitalists in the United States, and approximately 20% were employed by academic medical centers.[1] Hospitalist physicians groups are committed to delivering excellent patient care. However, the published literature is limited with respect to defining optimal care in hospital medicine.

Patient satisfaction surveys, such as Press Ganey (PG)[2] and Hospital Consumer Assessment of Healthcare Providers and Systems,[3] are being used to assess patients' contentment with the quality of care they receive while hospitalized. The Society of Hospital Medicine, the largest professional medical society representing hospitalists, encourages the use of patient satisfaction surveys to measure hospitalist providers' quality of patient care.[4] There are, however, several problems with the current methods. First, the attribution to specific providers is questionable. Second, recall about the provider by the patients may be poor because surveys are sent to patients days after they return home. Third, the patients' recovery and health outcomes are likely to influence their assessment of the doctor. Finally, feedback is known to be most valuable and transformative when it is specific and given in real time. Thus, a tool that is able to provide feedback at the encounter level should be more helpful than a tool that offers assessment at the level of the admission, particularly when it can be also delivered immediately after the data are collected.

Comportment has been used to describe both the way a person behaves and also the way she carries herself (ie, her general manner).[5] Excellent comportment and communication can serve as the foundation for delivering patient‐centered care.[6, 7, 8] Patient centeredness has been shown to improve the patient experience and clinical outcomes, including compliance with therapeutic plans.[9, 10, 11] Respectful behavior, etiquette‐based medicine, and effective communication also lay the foundation upon which the therapeutic alliance between a doctor and patient can be built.

The goal of this study was to establish a metric that could comprehensively assess a hospitalist provider's comportment and communication skills during an encounter with a hospitalized patient.

METHODS

Study Design and Setting

An observational study of hospitalist physicians was conducted between June 2013 and December 2013 at 5 hospitals in Maryland and Washington, DC. Two are academic medical centers (Johns Hopkins Hospital and Johns Hopkins Bayview Medical Center [JHBMC]), and the others are community hospitals (Howard County General Hospital [HCGH], Sibley Memorial Hospital [SMC], and Suburban Hospital). These 5 hospitals, across 2 large cities, have distinct cultures and leadership, and each serves a different population.

Subjects

In developing a tool to measure communication and comportment, we needed to observe physician–patient encounters wherein there would be a good deal of variability in performance. During pilot testing, when following a few of the most senior and respected hospitalists, we noted encounters during which they excelled and others where they performed less optimally. Further, when we followed some less‐experienced providers, we noted that their skills were less developed and that they were uniformly missing most of the behaviors on the tool believed to be associated with optimal communication and comportment. Because of this, we decided to purposively sample the strongest clinicians at each of the 5 hospitals in hopes of seeing a range of scores on the tool.

The chiefs of hospital medicine at the 5 hospitals were contacted and asked to identify their most clinically excellent hospitalists, namely those whom they thought were most clinically skilled within their groups. Because our goal was to observe the top tier (approximately 20%) of the hospitalists within each group, we asked each chief to name a specific number of physicians (eg, 3 names from 1 group with 15 hospitalists, and 8 from another group with 40 physicians). No precise definition of "most clinically excellent hospitalists" was provided to the chiefs. We believed that they were well positioned to select their best clinicians because of both the subjective feedback and the objective data that flow to them; this postulate may have been corroborated by the fact that each of them promptly sent a list of top choices without asking any questions.

The 29 hospitalists (named by their chiefs) were in turn emailed and invited to participate in the study. All but 3 hospitalists consented to participate in the study; this resulted in a cohort of 26 who would be observed.

Tool Development

A team was assembled to develop the hospital medicine comportment and communication observation tool (HMCCOT). All team members had extensive clinical experience and had been teaching clinical skills for many years; several had published articles on clinical excellence and had won clinical awards. The team's development of the HMCCOT was extensively informed by a review of the literature. The 2 articles that most heavily influenced the HMCCOT's development were Christmas et al.'s paper describing 7 core domains of excellence, 2 of which are intimately linked to communication and comportment,[12] and Kahn's text delineating behaviors to be performed upon entering the patient's room, termed etiquette‐based medicine.[6] The team also considered work from prior time–motion studies in hospital medicine,[7, 13] which led to the inclusion of temporal measurements during the observations. The tool was presented at academic conferences in the Division of General Internal Medicine at Johns Hopkins and iteratively revised based on the feedback. Feedback was also sought from members of the American Academy on Communication in Healthcare who have spent their entire careers studying physician–patient relationships. These methods established content validity evidence for the tool under development. The goal of the HMCCOT was to assess behaviors believed to be associated with optimal comportment and communication in hospital medicine.

The HMCCOT was pilot tested by observing different JHBMC hospitalists' patient encounters, and it was iteratively revised. On multiple occasions, 2 authors/investigators spent time observing JHBMC hospitalists together and compared data capture and levels of agreement across all elements. Then, for formal assessment of inter‐rater reliability, 2 authors observed 5 different hospitalists across 25 patient encounters; the kappa coefficient was 0.91 (standard error = 0.04). This step helped to establish internal structure validity evidence for the tool.
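A minimal sketch of how such an agreement statistic can be computed, assuming the reported coefficient is Cohen's kappa over paired binary ratings (the encounter-level ratings below are invented for illustration; the study's rating data are not published):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of 0/1 ratings."""
    n = len(rater_a)
    # Observed proportion of agreement
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, assuming the raters are independent
    pa, pb = sum(rater_a) / n, sum(rater_b) / n
    p_chance = pa * pb + (1 - pa) * (1 - pb)
    return (p_obs - p_chance) / (1 - p_chance)

# Invented example: did each rater record the behavior in 10 encounters?
rater_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_2 = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
kappa = cohens_kappa(rater_1, rater_2)  # agreement beyond chance
```

A kappa near 0.9, as reported, indicates near-perfect agreement between the 2 observers.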

The initial version of the HMCCOT contained 36 elements, organized sequentially so that the observer could document behaviors in the order in which they were likely to occur, facilitating the process and minimizing oversight. A few examples of the elements were: an open‐ended versus closed‐ended statement at the beginning of the encounter, whether the hospitalist introduces himself/herself, and whether the provider smiles at any point during the patient encounter.

Data Collection

One author scheduled a time to observe each hospitalist physician during their routine clinical care of patients when they were not working with medical learners. Hospitalists were naturally aware that they were being observed but were not aware of the specific data elements or behaviors that were being recorded.

The study was approved by the institutional review board at the Johns Hopkins University School of Medicine, and by each of the research review committees at HCGH, SMC, and Suburban hospitals.

Data Analysis

After data collection, all data were deidentified so that the researchers were blinded to the identities of the physicians. Respondent characteristics are presented as proportions and means. Unpaired t tests and χ2 tests were used to compare demographic information, stratified by mean HMCCOT score. The data were analyzed using Stata statistical software version 12.1 (StataCorp LP, College Station, TX).
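For the categorical comparisons, a 2×2 chi-square statistic can be sketched as follows; the function below computes the Pearson statistic, optionally with Yates' continuity correction (a p-value would then come from the χ2 distribution with 1 degree of freedom). The counts reuse the hand-washing row of Table 2 purely for illustration:

```python
def chi2_2x2(table, yates=False):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    optionally applying Yates' continuity correction."""
    (a, b), (c, d) = table
    n = a + b + c + d
    diff = abs(a * d - b * c)
    if yates:
        diff = max(diff - n / 2, 0)  # continuity correction, floored at 0
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * diff ** 2 / denom

# "Washes hands after leaving room": 83/93 in the below-mean group
# versus 87/88 in the above-mean group (yes/no counts per group)
counts = [[83, 10], [87, 1]]
uncorrected = chi2_2x2(counts)
corrected = chi2_2x2(counts, yates=True)
```

As the table footnotes note, the correction is applied only where at least 20% of cell frequencies are below 5; the corrected statistic is always smaller, giving a more conservative p-value.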

Further Validation of the HMCCOT

Upon reviewing the distribution of data after observing the 26 physicians with their patients, we excluded 13 variables from the initial version of the tool that lacked discriminatory value (eg, 100% or 0% of physicians performed the observed behavior during the encounters); this left 23 variables judged to be most clinically relevant in the final version of the HMCCOT. Two examples of excluded variables were: uses technology/literature to educate patients (not witnessed in any encounter) and obeys posted contact precautions (done uniformly by all). The HMCCOT score represents the proportion of the 23 behaviors that were observed; it was computed for each hospitalist for every patient encounter. Finally, "relation to other variables" validity evidence was established by comparing the physicians' mean HMCCOT scores with their PG scores from the same time period to evaluate the correlation between the 2 scores. This association was assessed using Pearson correlations.
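The scoring and correlation steps described above can be sketched as follows, assuming the score is simply the percentage of the 23 behaviors observed in an encounter. The per-physician numbers below are hypothetical, not the study's data:

```python
def hmccot_score(checklist):
    """HMCCOT score for one encounter: the percentage of the
    23 behaviors that were observed (list of 0/1 values)."""
    assert len(checklist) == 23
    return 100.0 * sum(checklist) / 23

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-physician mean HMCCOT scores paired with PG scores
hmccot_means = [52.0, 58.0, 61.0, 66.0, 70.0, 74.0]
pg_scores = [20.0, 35.0, 30.0, 45.0, 55.0, 60.0]
r = pearson_r(hmccot_means, pg_scores)  # positive when the scores track together
```

Each physician's overall HMCCOT score is the mean of their encounter-level percentages, and those means are what the Pearson correlation pairs against the PG scores.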

RESULTS

The average clinical experience of the 26 hospitalist physicians studied was 6 years (Table 1). Their mean age was 38 years, 13 (50%) were female, and 16 (62%) were of nonwhite race. Fourteen hospitalists (54%) worked at 1 of the nonacademic hospitals. In terms of clinical workload, most physicians (n = 17, 65%) devoted more than 70% of their time to direct patient care. The mean time spent observing each physician was 280 minutes. During this time, the 26 physicians were observed across 181 separate clinical encounters; 54% of these were new encounters, involving patients not previously known to the physician. The average time each physician spent in a patient room was 10.8 minutes, and the mean number of observed patient encounters per hospitalist was 7.

Characteristics of the Hospitalist Physicians Based on Their Hospital Medicine Comportment and Communication Observation Tool Score
Total Study Population, n = 26 HMCCOT Score ≤60, n = 14 HMCCOT Score >60, n = 12 P Value*
  • NOTE: Abbreviations: HCGH, Howard County General Hospital; HMCCOT, Hospital Medicine Comportment and Communication Observation Tool; JHBMC, Johns Hopkins Bayview Medical Center; JHH, Johns Hopkins Hospital; SD, standard deviation; SMC, Sibley Memorial Hospital. *χ2 test with Yates‐corrected P value where at least 20% of frequencies were <5; unpaired t test statistic for continuous variables.

Age, mean (SD) 38 (5.6) 37.9 (5.6) 38.1 (5.7) 0.95
Female, n (%) 13 (50) 6 (43) 7 (58) 0.43
Race, n (%)
Caucasian 10 (38) 5 (36) 5 (41) 0.31
Asian 13 (50) 8 (57) 5 (41)
African/African American 2 (8) 0 (0) 2 (17)
Other 1 (4) 1 (7) 0 (0)
Clinical experience >6 years, n (%) 12 (46) 6 (43) 6 (50) 0.72
Clinical workload >70% 17 (65) 10 (71) 7 (58) 0.48
Academic hospitalist, n (%) 12 (46) 5 (36) 7 (58) 0.25
Hospital 0.47
JHBMC 8 (31) 3 (21.4) 5 (41)
JHH 4 (15) 2 (14.3) 2 (17)
HCGH 5 (19) 3 (21.4) 2 (17)
Suburban 6 (23) 3 (21.4) 3 (25)
SMC 3 (12) 3 (21.4) 0 (0)
Minutes spent observing hospitalist per shift, mean (SD) 280 (104.5) 280.4 (115.5) 281.4 (95.3) 0.98
Average time spent per patient encounter in minutes, mean (SD) 10.8 (8.9) 8.7 (9.1) 13 (8.1) 0.001
Proportion of observed patients who were new to provider, % 97 (53.5) 37 (39.7) 60 (68.1) 0.001

The distribution of HMCCOT scores did not differ significantly by age, gender, race, amount of clinical experience, clinical workload, hospital, or time spent observing the hospitalist (all P > 0.05). Encounters that scored above the mean were more often new patient encounters than follow‐ups (68.1% vs 39.7%, P < 0.001), and they were also longer (13 minutes vs 8.7 minutes, P < 0.001).

The mean HMCCOT score was 61 (standard deviation [SD] = 10.6), and it was normally distributed (Figure 1). Table 2 shows the data for the 23 behaviors that were objectively assessed as part of the HMCCOT for the 181 patient encounters. The most frequently observed behaviors were physicians washing hands after leaving the patient's room in 170 (94%) of the encounters and smiling (83%). The behaviors that were observed with the least regularity were using an empathic statement (26% of encounters), and employing teach‐back (13% of encounters). A common method of demonstrating interest in the patient as a person, seen in 41% of encounters, involved physicians asking about patients' personal histories and their interests.

Objective and Subjective Data Making Up the Hospital Medicine Comportment and Communication Observation Tool Score Assessed While Observing 26 Hospitalist Physicians
Variables All Visits Combined, n = 181 HMCCOT Score <60, n = 93 HMCCOT Score >60, n = 88 P Value*
  • NOTE: Abbreviations: HMCCOT, Hospital Medicine Comportment and Communication Observation Tool. *χ2 test with Yates‐corrected P value where at least 20% of frequencies were <5.

Objective observations, n (%)
Washes hands after leaving room 170 (94) 83 (89) 87 (99) 0.007
Discusses plan for the day 163 (91) 78 (84) 85 (99) <0.001
Does not interrupt the patient 159 (88) 79 (85) 80 (91) 0.21
Smiles 149 (83) 71 (77) 78 (89) 0.04
Washes hands before entering 139 (77) 64 (69) 75 (85) 0.009
Begins with open‐ended question 134 (77) 68 (76) 66 (78) 0.74
Knocks before entering the room 127 (76) 57 (65) 70 (89) <0.001
Introduces him/herself to the patient 122 (67) 45 (48) 77 (88) <0.001
Explains his/her role 120 (66) 44 (47) 76 (86) <0.001
Asks about pain 110 (61) 45 (49) 65 (74) 0.001
Asks permission prior to examining 106 (61) 43 (50) 63 (72) 0.002
Uncovers body area for the physical exam 100 (57) 34 (38) 66 (77) <0.001
Discusses discharge plan 99 (55) 38 (41) 61 (71) <0.001
Sits down in the patient room 74 (41) 24 (26) 50 (57) <0.001
Asks about patient's feelings 58 (33) 17 (19) 41 (47) <0.001
Shakes hands with the patient 57 (32) 17 (18) 40 (46) <0.001
Uses teach‐back 24 (13) 4 (4.3) 20 (24) <0.001
Subjective observations, n (%)
Avoids medical jargon 160 (89) 85 (91) 83 (95) 0.28
Demonstrates interest in patient as a person 72 (41) 16 (18) 56 (66) <0.001
Touches appropriately 62 (34) 21 (23) 41 (47) 0.001
Shows sensitivity to patient modesty 57 (93) 15 (79) 42 (100) 0.002
Engages in nonmedical conversation 54 (30) 10 (11) 44 (51) <0.001
Uses empathic statement 47 (26) 9 (10) 38 (43) <0.001
Figure 1
Distribution of mean hospital medicine comportment and communication tool (HMCCOT) scores for the 26 hospitalist providers who were observed.

The average composite PG score for the physician sample was 38.95 (SD = 39.64). A moderate correlation was found between the HMCCOT score and the PG score (adjusted Pearson correlation: 0.45, P = 0.047).

DISCUSSION

In this study, we followed 26 hospitalist physicians during routine clinical care, and we focused intently on their communication and their comportment with patients at the bedside. Even among clinically respected hospitalists, the results reveal that there is wide variability in comportment and communication practices and behaviors at the bedside. The physicians' HMCCOT scores were associated with their PG scores. These findings suggest that improved bedside communication and comportment with patients might translate into enhanced patient satisfaction.

This is the first study to home in on hospitalist communication and comportment. With validity evidence established for the HMCCOT, some may elect to more explicitly perform these behaviors themselves, and others may wish to observe fellow hospitalists to give them feedback tied to specific behaviors. Beginning with the basics, the hospitalists we studied introduced themselves to their patients at the initial encounter 78% of the time, less frequently than primary care clinicians (89%) but more consistently than emergency department providers (64%).[7] Another variable that stood out was that teach‐back was employed in only 13% of the encounters. Previous studies have shown that teach‐back corroborates patient comprehension and can be used to engage patients (and caregivers) in realistic goal setting and optimal health service utilization.[14] Further, patients who clearly understand their postdischarge plan are 30% less likely to be readmitted or visit the emergency department.[14] The data for our group have helped us to see areas of strength, such as hand washing, where we exceed compliance rates across hospitals in the United States,[15] as well as matters that represent opportunities for improvement, such as connecting more deeply with our patients.

Tackett et al. have looked at encounter length and its association with the performance of etiquette‐based medicine behaviors.[7] Similar to their study, we found a positive correlation between spending more time with patients and higher HMCCOT scores. We also found that HMCCOT scores were higher when providers were caring for new patients. Patients' complaints about doctors often relate to feeling rushed, to physicians not listening to them, or to information not being conveyed in a clear manner.[16] Such challenges in physician–patient communication are ubiquitous across clinical settings.[16] When successfully achieved, patient‐centered communication has been associated with improved clinical outcomes, including adherence to recommended treatment and better self‐management of chronic disease.[17, 18, 19, 20, 21, 22, 23, 24, 25, 26] Many of the components of the HMCCOT described in this article are at the heart of patient‐centered care.

Several limitations of the study should be considered. First, physicians may have behaved differently while being observed (the Hawthorne effect). However, we observed them for many hours and across multiple patient encounters, and the physicians were not aware of the specific types of data that we were collecting; these factors may have limited such bias. Second, there may be elements of optimal comportment and communication that were not captured by the HMCCOT. We hope that any gaps are small, as we used multiple methods and an iterative process in refining the HMCCOT metric. Third, one investigator did all of the observing, and it is possible that he missed certain behaviors. Through extensive pilot testing and comparisons with other raters, however, the observer became highly skilled with such data collection and the tool. Fourth, we did not survey the patients who were cared for to compare their perspectives with the HMCCOT scores following the clinical encounters; for patient perspectives, we relied only on PG scores. Fifth, quality of care is a broad and multidimensional construct. The HMCCOT focuses exclusively on hospitalists' comportment and communication at the bedside; therefore, it does not comprehensively assess care quality. Sixth, with our goal to optimally validate the HMCCOT, we tested it on the top tier of hospitalists within each group. We may have observed different results had we randomly selected hospitalists from each hospital or conducted the study at hospitals in other geographic regions. Finally, all of the doctors observed worked at hospitals in the Mid‐Atlantic region. However, these 5 distinct hospitals each have their own cultures and are led by different administrators, and we purposively sampled both academic and community settings.

In conclusion, this study reports on the development of a comportment and communication tool that was established and validated by following clinically excellent hospitalists at the bedside. Future studies are necessary to determine whether hospitalists of all levels of experience and clinical skill can improve when given data and feedback using the HMCCOT. Larger studies will then be needed to assess whether enhancing comportment and communication can truly improve patient satisfaction and clinical outcomes in the hospital.

Disclosures: Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. Susrutha Kotwal, MD, and Waseem Khaliq, MD, contributed equally to this work. The authors report no conflicts of interest.

References
  1. 2014 state of hospital medicine report. Society of Hospital Medicine website. Available at: http://www.hospitalmedicine.org/Web/Practice_Management/State_of_HM_Surveys/2014.aspx. Accessed January 10, 2015.
  2. Press Ganey website. Available at: http://www.pressganey.com/home. Accessed December 15, 2015.
  3. Hospital Consumer Assessment of Healthcare Providers and Systems website. Available at: http://www.hcahpsonline.org/home.aspx. Accessed February 2, 2016.
  4. Membership committee guidelines for hospitalists patient satisfaction surveys. Society of Hospital Medicine website. Available at: http://www.hospitalmedicine.org. Accessed February 2, 2016.
  5. Definition of comportment. Available at: http://www.vocabulary.com/dictionary/comportment. Accessed December 15, 2015.
  6. Kahn MW. Etiquette‐based medicine. N Engl J Med. 2008;358(19):1988-1989.
  7. Tackett S, Tad‐y D, Rios R, Kisuule F, Wright S. Appraising the practice of etiquette‐based medicine in the inpatient setting. J Gen Intern Med. 2013;28(7):908-913.
  8. Levinson W, Lesser CS, Epstein RM. Developing physician communication skills for patient‐centered care. Health Aff (Millwood). 2010;29(7):1310-1318.
  9. Auerbach SM. The impact on patient health outcomes of interventions targeting the patient–physician relationship. Patient. 2009;2(2):77-84.
  10. Griffin SJ, Kinmonth AL, Veltman MW, Gillard S, Grant J, Stewart M. Effect on health‐related outcomes of interventions to alter the interaction between patients and practitioners: a systematic review of trials. Ann Fam Med. 2004;2(6):595-608.
  11. Street RL, Makoul G, Arora NK, Epstein RM. How does communication heal? Pathways linking clinician–patient communication to health outcomes. Patient Educ Couns. 2009;74(3):295-301.
  12. Christmas C, Kravet SJ, Durso SC, Wright SM. Clinical excellence in academia: perspectives from masterful academic clinicians. Mayo Clin Proc. 2008;83(9):989-994.
  13. Tipping MD, Forth VE, O'Leary KJ, et al. Where did the day go?—a time‐motion study of hospitalists. J Hosp Med. 2010;5(6):323-328.
  14. Peter D, Robinson P, Jordan M, et al. Reducing readmissions using teach‐back: enhancing patient and family education. J Nurs Adm. 2015;45(1):35-42.
  15. McGuckin M, Waterman R, Govednik J. Hand hygiene compliance rates in the United States—a one‐year multicenter collaboration using product/volume usage measurement and feedback. Am J Med Qual. 2009;24(3):205-213.
  16. Hickson GB, Clayton EW, Entman SS, et al. Obstetricians' prior malpractice experience and patients' satisfaction with care. JAMA. 1994;272(20):1583-1587.
  17. Epstein RM, Street RL. Patient‐Centered Communication in Cancer Care: Promoting Healing and Reducing Suffering. NIH publication no. 07–6225. Bethesda, MD: National Cancer Institute; 2007.
  18. Arora NK. Interacting with cancer patients: the significance of physicians' communication behavior. Soc Sci Med. 2003;57(5):791-806.
  19. Greenfield S, Kaplan S, Ware JE. Expanding patient involvement in care: effects on patient outcomes. Ann Intern Med. 1985;102(4):520-528.
  20. Mead N, Bower P. Measuring patient‐centeredness: a comparison of three observation‐based instruments. Patient Educ Couns. 2000;39(1):71-80.
  21. Ong LM, Haes JC, Hoos AM, Lammes FB. Doctor‐patient communication: a review of the literature. Soc Sci Med. 1995;40(7):903-918.
  22. Safran DG, Taira DA, Rogers WH, Kosinski M, Ware JE, Tarlov AR. Linking primary care performance to outcomes of care. J Fam Pract. 1998;47(3):213-220.
  23. Stewart M, Brown JB, Donner A, et al. The impact of patient‐centered care on outcomes. J Fam Pract. 2000;49(9):796-804.
  24. Epstein RM, Franks P, Fiscella K, et al. Measuring patient‐centered communication in patient‐physician consultations: theoretical and practical issues. Soc Sci Med. 2005;61(7):1516-1528.
  25. Mead N, Bower P. Patient‐centered consultations and outcomes in primary care: a review of the literature. Patient Educ Couns. 2002;48(1):51-61.
  26. Bredart A, Bouleuc C, Dolbeault S. Doctor‐patient communication and satisfaction with care in oncology. Curr Opin Oncol. 2005;17(4):351-354.
Issue
Journal of Hospital Medicine - 11(12)
Page Number
853-858
Publications
Article Type
Display Headline
Developing a comportment and communication tool for use in hospital medicine
Sections
Article Source
© 2016 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Susrutha Kotwal, MD, Johns Hopkins University School of Medicine, Johns Hopkins Bayview Medical Center, 200 Eastern Avenue, MFL Building West Tower, 6th Floor CIMS Suite, Baltimore, MD 21224; Telephone: 410‐550‐5018; Fax: 410‐550‐2972; E‐mail: skotwal1@jhmi.edu

Patients' Sleep Quality and Duration

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Pilot study aiming to support sleep quality and duration during hospitalizations

Approximately 70 million adults in the United States have sleep disorders,[1] and up to 30% of adults report sleeping less than 6 hours per night.[2] Poor sleep has been associated with undesirable health outcomes.[1] Suboptimal sleep duration and quality have been associated with a higher prevalence of chronic health conditions, including hypertension, type 2 diabetes, coronary artery disease, stroke, and obesity, as well as increased overall mortality.[3, 4, 5, 6, 7]

Sleep plays an important role in the restoration of wellness, and poor sleep is associated with physiological disturbances that may result in poor healing.[8, 9, 10] In the literature, the prevalence of insomnia among elderly hospitalized patients was 36.7%,[11] whereas among younger hospitalized patients it was 50%.[12] Hospitalized patients frequently cite their acute illness, hospital‐related environmental factors, and disruptions that are part of routine care as causes of poor sleep during hospitalization.[13, 14, 15] Although poor sleep is pervasive among hospitalized patients, interventions that prioritize sleep optimization as part of routine care are uncommon. Few studies have examined the effect of sleep‐promoting measures on both sleep quality and sleep duration among patients hospitalized on general medicine units.

In this study, we aimed to assess the feasibility of incorporating sleep‐promoting interventions on a general medicine unit. We sought to identify differences in sleep measures between intervention and control groups. The primary outcome that we hoped to influence and lengthen in the intervention group was sleep duration, measured both by sleep diary and by actigraphy. Secondary outcomes that we hypothesized would improve in the intervention group included feeling more refreshed in the mornings, sleep efficiency, and fewer sleep disruptions. As a feasibility pilot, we also wanted to explore the ease or difficulty with which sleep‐promoting interventions could be incorporated into the team's workflow.

METHODS

Study Design

A quasi‐experimental prospective pilot study was conducted at a single academic center, the Johns Hopkins Bayview Medical Center. Participants included adult patients admitted to the general medicine ward from July 2013 through January 2014. Patients with dementia; inability to complete survey questionnaires due to delirium, disability, or a language barrier; active withdrawal from alcohol or controlled substances; or acute psychiatric illness were excluded from the study.

The medicine ward at our medical center comprises 2 structurally identical units that admit patients with similar diagnoses, disease severity, and case‐mix disease groups. Nursing and support staff are unit specific. With respect to the sleep environment, both units have semiprivate and private rooms, and visitors are encouraged to leave by 10 pm. Patients admitted from the emergency room to the medicine ward are assigned haphazardly to either unit based on bed availability. For the purpose of this study, we selected 1 unit to be the control unit and designated the other as the sleep‐promoting intervention unit.

Study Procedure

Upon arrival to the medicine unit, the research team approached all patients who met study eligibility criteria for study participation. Patients were provided full disclosure of the study using institutional research guidelines, and those interested in participating were consented. Participants were not explicitly told about their group assignment. This study was approved by the Johns Hopkins Institutional Review Board for human subject research.

In this study, the control group participants received the standard of care as it pertains to sleep promotion; no additional sleep‐promoting measures were added to routine medical care, medication administration, nursing care, or overnight monitoring. Patients who used sleep medications at home prior to admission had those medicines continued only if they requested them and the medicines were not contraindicated given the acute illness. Participants on the intervention unit were exposed to a nurse‐delivered sleep‐promoting protocol aimed at transforming the culture of care such that helping patients sleep soundly was made a top priority. Environmental changes included unit‐wide efforts to minimize light and noise disturbances by dimming hallway lights, turning off room lights, and encouraging care teams to be as quiet as possible. Other strategies focused largely on minimizing care‐related disruptions; these included, when appropriate, administering nighttime medications in the early evening, minimizing fluids overnight, and closing patient room doors. Further, patients were offered the following sleep‐promoting items to choose from: ear plugs, eye masks, warm blankets, and relaxation music. The final component of our intervention was a 30‐minute sleep hygiene education session taught by a physician. It highlighted basic sleep physiology and healthy sleep behavior adapted from Buysse.[16] Patients learned the role of behaviors such as reducing time lying awake in bed, setting a standard wake‐up time and sleep time, and going to bed only when sleepy. This behavioral education was supplemented by a handout with sleep‐promoting suggestions.

The care team on the intervention unit received comprehensive study-focused training in which night nursing teams were familiarized with the sleep-promoting protocol through in-service sessions facilitated by 1 of the authors (E.W.G.). To further promote implementation, sleep-promoting procedures were supported and encouraged by supervising nurses, who reminded the intervention unit's night care team of the goals of the sleep-promoting study during evening huddles at the beginning of each shift. To assess adherence to the sleep protocol, the nursing staff completed a daily checklist of the protocol elements that were employed.

Data Collection and Measures

Baseline Measures

At the time of enrollment, study patients' demographic information, including use of chronic sleep medication prior to admission, was collected. Participants were assessed for baseline sleep disturbance prior to admission using standardized, validated sleep assessment tools: the Pittsburgh Sleep Quality Index (PSQI), the Insomnia Severity Index (ISI), and the Epworth Sleepiness Scale (ESS). The PSQI, a 19-item tool, assessed self-rated sleep quality over the prior month; a score of 5 or greater indicated poor sleep.[17] The ISI, a 7-item tool, identified the presence, rated the severity, and described the impact of insomnia; a score of 10 or greater indicated insomnia.[18] The ESS, an 8-item self-rated tool, evaluated the impact of perceived sleepiness on daily functioning in 8 different environments; a score of 9 or greater indicated a burden of sleepiness. Participants were also screened for obstructive sleep apnea (using the Berlin Sleep Apnea Index) and clinical depression (using the Center for Epidemiologic Studies-Depression 10-point scale), as these conditions affect sleep patterns. These data are shown in Table 1.
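For illustration, the baseline screening cutoffs stated above can be expressed as a simple triage check. This is a hypothetical helper (the function name and return shape are ours, not the study's scoring code); the thresholds are those given in the text.

```python
def flag_baseline_sleep(psqi: int, isi: int, ess: int) -> dict:
    """Apply the study's stated screening cutoffs to baseline scores.

    PSQI >= 5  -> poor sleep over the prior month
    ISI  >= 10 -> insomnia
    ESS  >= 9  -> burden of daytime sleepiness
    """
    return {
        "poor_sleep": psqi >= 5,
        "insomnia": isi >= 10,
        "sleepy": ess >= 9,
    }

# Roughly the intervention group's mean baseline scores from Table 1
flags = flag_baseline_sleep(psqi=10, isi=12, ess=7)
# {'poor_sleep': True, 'insomnia': True, 'sleepy': False}
```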

Characteristics of Study Participants (n = 112)
Intervention, n = 48 Control, n = 64 P Value
  • NOTE: The entry for number of sleep diaries per participant in intervention and control groups is presented after capping at 4 diaries. Abbreviations: BMI, body mass index; CESD‐10, Center for Epidemiologic Studies‐Depression 10‐point scale; ESS, Epworth Sleepiness Scale; ISI, Insomnia Severity Index; PSQI, Pittsburgh Sleep Quality Index; SD, standard deviation.

Age, y, mean (SD) 58.2 (16) 56.9 (17) 0.69
Female, n (%) 26 (54.2) 36 (56.3) 0.83
Race, n (%)
Caucasian 33 (68.8) 46 (71.9) 0.92
African American 13 (27.1) 16 (25.0)
Other 2 (4.2) 2 (3.1)
BMI, mean (SD) 32.1 (9.2) 31.8 (9.3) 0.85
Admitting service, n (%)
Teaching 21 (43.8) 18 (28.1) 0.09
Nonteaching 27 (56.3) 46 (71.9)
Sleep medication prior to admission, n (%) 7 (14.9) 21 (32.8) 0.03
Length of stay, d, mean (SD) 4.9 (3) 5.8 (3.9) 0.19
Number of sleep diaries per participant, mean (SD) 2.2 (0.8) 2.6 (0.9) 0.02
Proportion of hospital days with sleep diaries per participant, mean (SD) 0.6 (0.2) 0.5 (0.2) 0.71
Number of nights with actigraphy per participant, mean (SD) 1.2 (0.7) 1.4 (0.8) 0.16
Proportion of hospital nights with actigraphy per participant, mean (SD) 0.3 (0.2) 0.3 (0.1) 0.91
Baseline sleep measures
PSQI, mean (SD) 9.9 (4.6) 9.1 (4.5) 0.39
ESS, mean (SD) 7.4 (4.2) 7.7 (4.8) 0.79
ISI, mean (SD) 11.9 (7.6) 10.8 (7.4) 0.44
CESD‐10, mean (SD) 12.2 (7.2) 12.8 (7.6) 0.69
Berlin Sleep Apnea, mean (SD) 0.63 (0.5) 0.61 (0.5) 0.87

Sleep Diary Measures

A sleep diary completed each morning assessed the outcome measures: perceived sleep quality, how refreshing sleep was, and sleep duration. The diary employed a 5-point Likert rating scale ranging from poor (1) to excellent (5). Perceived sleep duration was calculated from patients' reported time in bed, time to fall asleep, wake time, and the number and duration of awakenings after sleep onset. These data were used to compute total sleep time (TST) and sleep efficiency (SE). The sleep diary also included other pertinent sleep-related measures, including use of sleep medication the night prior and specific sleep disruptions from the prior night. To measure the impact of disruptions the prior night, we created a summed scale score of 4 items that negatively interfered with sleep (light, temperature, noise, and interruptions; 5-point scales from 1 = not at all to 5 = significant). Principal axis factor analysis with varimax rotation yielded 1 disruption factor accounting for 55% of the variance, and Cronbach's α was 0.73.
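The diary-derived quantities can be sketched as follows. This assumes the standard consensus sleep-diary arithmetic (TST = time in bed minus sleep-onset latency minus wake after sleep onset; SE = TST / time in bed), since the exact computation is not spelled out above, and uses the usual Cronbach's α formula for the summed disruption scale.

```python
def diary_tst_se(time_in_bed_min, sleep_latency_min, waso_min):
    """Total sleep time (min) and sleep efficiency (%) from diary fields."""
    tst = time_in_bed_min - sleep_latency_min - waso_min
    return tst, 100.0 * tst / time_in_bed_min

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists (population variances)."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    k = len(items)
    totals = [sum(row) for row in zip(*items)]  # summed scale per respondent
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# 8 h in bed, 20 min to fall asleep, 40 min awake after sleep onset
tst, se = diary_tst_se(480, 20, 40)  # -> 420 min, 87.5%
```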

Actigraphy Measures

Actigraphy outcomes of sleep were recorded using an actigraphy wristwatch (ActiSleep Plus GT3X+; ActiGraph, Pensacola, FL). Participants wore the monitor from the day of enrollment throughout the hospital stay or until transfer out of the unit. Objective data were analyzed and scored using ActiLife 6 data analysis software (version 6.10.1; ActiGraph). Time in bed, given the unique inpatient setting, was calculated from sleep diary responses as the interval between reported sleep time and reported wake-up time. These intervals were entered into the ActiLife 6 software, which used a validated algorithm (Cole-Kripke) to calculate actigraphy TST and SE.
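Cole-Kripke-style scoring works roughly as follows: each 1-minute epoch's activity count is combined with its neighbors in a weighted sum, and the epoch is scored as sleep when the sum falls below a threshold; TST and SE then follow from the scored epochs inside the diary-defined in-bed window. The sketch below uses illustrative weights and threshold, not ActiLife's actual Cole-Kripke parameters.

```python
def score_sleep(counts, weights=(0.04, 0.20, 0.40, 0.20, 0.04), threshold=1.0):
    """Score each 1-min epoch as sleep (True) when the weighted sum of
    surrounding activity counts falls below the threshold.
    Weights and threshold are illustrative, not the published values."""
    half = len(weights) // 2
    padded = [0] * half + list(counts) + [0] * half  # zero-pad the edges
    scored = []
    for i in range(len(counts)):
        window = padded[i:i + len(weights)]
        d = sum(w * c for w, c in zip(weights, window))
        scored.append(d < threshold)
    return scored

def tst_se(scored):
    """Total sleep time (min) and sleep efficiency (%) over the in-bed window."""
    tst = sum(scored)
    return tst, 100.0 * tst / len(scored)

# A mostly quiet night with a brief burst of movement mid-window
scored = score_sleep([0, 0, 5, 0, 0, 0, 120, 80, 0, 0])
```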

Statistical Analysis

Descriptive and inferential statistics were computed using the Statistical Package for the Social Sciences version 22 (IBM, Armonk, NY). We computed means, proportions, and measures of dispersion for all study variables. To test differences in sleep diary and actigraphy outcomes between the intervention and control arms, we used linear mixed models with full maximum likelihood estimation to model each of the 7 continuous sleep outcomes. These statistical methods are appropriate to account for the nonindependence of continuous repeated observations within hospital patients.[19] For all outcomes, the unit of analysis was nightly observations nested within patients. Full maximum likelihood estimation is a robust and preferred method for handling values missing at random in longitudinal datasets.[20]
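The nesting described above (nightly observations within patients) can be sketched with `statsmodels` in Python. The study used SPSS, so this is an illustrative re-creation on synthetic data with hypothetical variable names, not the study's analysis code: a random-intercept model fit by full maximum likelihood.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients, n_nights = 30, 4

# Long format: one row per patient-night, the unit of analysis in the text
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n_patients), n_nights),
    "day": np.tile(np.arange(n_nights), n_patients),
    "group": np.repeat(rng.integers(0, 2, n_patients), n_nights),
})
# Random intercepts induce within-patient correlation across nights
patient_effect = np.repeat(rng.normal(0, 30, n_patients), n_nights)
df["tst"] = (380 + 10 * df["group"] + 5 * df["day"]
             + patient_effect + rng.normal(0, 20, len(df)))

# Random-intercept mixed model, full ML (reml=False), as in the study
model = smf.mixedlm("tst ~ group * day", df, groups=df["pid"])
fit = model.fit(reml=False)
```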

To model repeated observations, mixed models included a term representing time in days. For each outcome, we specified unconditional growth models to examine the variability between and within patients by computing intraclass correlations and inspecting variance components. We used model fit indices (-2LL deviance, Akaike's information criterion, and Schwarz's Bayesian criterion) as appropriate to determine the best-fitting model specifications in terms of random effects and covariance structure.[21, 22]
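For reference, the fit indices named above are simple functions of the deviance (-2LL), the number of estimated parameters k, and the sample size n. A worked example with made-up numbers:

```python
import math

def fit_indices(neg2ll, k, n):
    """AIC = -2LL + 2k;  BIC = -2LL + k * ln(n)."""
    return {
        "deviance": neg2ll,
        "AIC": neg2ll + 2 * k,
        "BIC": neg2ll + k * math.log(n),
    }

# Hypothetical model: deviance 100, 5 parameters, 112 patients
ix = fit_indices(100.0, 5, 112)  # AIC = 110.0; BIC ~ 123.6
```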

We tested the main effect of the intervention on sleep outcomes and the interactive effect of group (intervention vs control) by hospital day, to test whether the groups differed in slopes representing average change in sleep outcomes over hospital days. All models adjusted for age, body mass index, depression, and baseline sleep quality (PSQI) as time-invariant covariates, and for whether participants had taken a sleep medication the day before as a time-varying covariate. Adjustment for prehospitalization sleep quality was particularly important. We used the PSQI to control for sleep quality because it is a well-validated, multidimensional measure that includes prehospital use of sleep medications. In a series of sensitivity analyses, we also explored whether using the dichotomous self-reported measure of whether participants regularly took sleep medications prior to hospitalization, rather than the PSQI, would change our substantive findings. All covariates were centered at the grand mean, following guidelines for appropriate interpretation of regression coefficients.[23]
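Grand-mean centering, as applied to the covariates here, simply subtracts each covariate's overall mean from every observation, so that the model intercept is interpretable as the expected outcome for a patient with average covariate values. A minimal sketch:

```python
def grand_mean_center(values):
    """Subtract the overall (grand) mean from every observation."""
    m = sum(values) / len(values)
    return [v - m for v in values]

# Hypothetical ages; after centering, the values sum to zero
ages = [58, 42, 71, 65, 49]          # grand mean = 57
centered = grand_mean_center(ages)   # [1.0, -15.0, 14.0, 8.0, -8.0]
```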

RESULTS

Of the 112 study patients, 48 were in the intervention unit and 64 in the control unit. Eighty‐five percent of study participants endorsed poor sleep prior to hospital admission on the PSQI sleep quality measure, which was similar in both groups (Table 1).

Participants completed 1 to 8 sleep diary entries (mean = 2.5, standard deviation = 1.1). Because only 6 participants completed 5 or more diaries, we capped the number of diaries included in the inferential analysis at 4 to avoid influential outliers identified by scatterplots. Fifty-seven percent of participants had 1 night of valid actigraphy data (n = 64); 29% had 2 nights (n = 32); 8% had 3 or 4 nights; and 9 participants had no usable actigraphy data. The extent to which the intervention was accepted by patients in the intervention group was highly variable. Adherence with the unit-wide 10 pm lights-off, telephone-off, and TV-off policy was 87%, 67%, and 64% of intervention patients, respectively. Uptake of sleep menu items was also highly variable, and no single item was used by more than half of patients (acceptance rates ranged from 11% to 44%). Eye masks (44%) and ear plugs (32%) were the most commonly used items.

A greater proportion of patients in the control arm (33%) had been taking sleep medications prior to hospitalization compared with the intervention arm (15%; χ2 = 4.6, P < 0.05). However, hypnotic medication use in the hospital was similar across both groups (intervention unit patients: 25%; controls: 21%; P = 0.49).

Intraclass correlations for the 7 sleep outcomes ranged from 0.59 to 0.76 on sleep diary outcomes, and from 0.61 to 0.85 on actigraphy. Dependency of sleep measures within patients accounted for 59% to 85% of variance in sleep outcomes. The best‐fit mixed models included random intercepts only. The results of mixed models testing the main effect of intervention versus comparison arm on sleep outcome measures, adjusted for covariates, are presented in Table 2. Total sleep time was the only outcome that was significantly different between groups; the average total sleep time, calculated from sleep diary data, was longer in the intervention group by 49 minutes.
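The intraclass correlations reported above can be read as the share of total outcome variance attributable to between-patient differences: ICC = between-patient variance / (between-patient + within-patient variance). A simplified estimator (a naive variance decomposition for illustration, not the mixed-model variance components used in the study):

```python
def icc(groups):
    """Naive ICC from a list of per-patient observation lists:
    variance of patient means over (between + mean within-patient variance).
    Uses population variances throughout."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    means = [sum(g) / len(g) for g in groups]
    between = var(means)
    within = sum(var(g) for g in groups) / len(groups)
    return between / (between + within)

# Patients differ a lot; nights within a patient barely differ -> ICC near 1
high = icc([[400, 402], [300, 298], [350, 351]])
```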

Differences in Subjective and Objective Sleep Outcome Measures From Linear Mixed Models
Intervention, n = 48 Control, n = 64 P Value
  • NOTE: All differences in sleep outcomes adjusted for age, BMI, baseline sleep quality (PSQI), depression (CES‐D), and whether a sleep medication was taken the previous night. Abbreviations: BMI, body mass index; CESD‐10, Center for Epidemiologic Studies‐Depression 10‐point scale; PSQI, Pittsburgh Sleep Quality Index; SE, standard error.

Sleep diary outcomes
Sleep quality, mean (SE) 3.14 (0.16) 3.08 (0.13) 0.79
Refreshed sleep, mean (SE) 2.94 (0.17) 2.74 (0.14) 0.38
Negative impact of sleep disruptions, mean (SE) 4.39 (0.58) 4.81 (0.48) 0.58
Total sleep time, min, mean (SE) 422 (16.2) 373 (13.2) 0.02
Sleep efficiency, %, mean (SE) 83.5 (2.3) 82.1 (1.9) 0.65
Actigraphy outcomes
Total sleep time, min, mean (SE) 377 (16.8) 356 (13.2) 0.32
Sleep efficiency, %, mean (SE) 72.7 (2.2) 74.8 (1.8) 0.45

Table 3 lists slopes representing average change in sleep measures over hospital days in both groups. The P values represent z tests of interaction terms in mixed models, after adjustment for covariates, testing whether slopes significantly differed between groups. Of the 7 outcomes, 3 sleep diary measures had significant interaction terms. For ratings of sleep quality, refreshing sleep, and sleep disruptions, slopes in the control group were flat, whereas slopes in the intervention group demonstrated improvements in ratings of sleep quality and refreshed sleep, and a decrease in the impact of sleep disruptions over the course of subsequent nights in the hospital. Figure 1 illustrates a plot of the adjusted average slopes for the refreshed sleep score across hospital days in intervention and control groups.

Average Change in Sleep Outcomes Across Hospital Days for Patients in Intervention and Comparison Groups
Intervention, Slope (SE), n = 48 Control, Slope (SE), n = 64 P Value
  • NOTE: Mixed models were adjusted for age, BMI, baseline sleep quality (PSQI), baseline depression (CES‐D), and whether or not a sleep medication was taken the previous night.

  • Each slope represents the average change in sleep diary outcome from night to night in each condition. P values represent the Wald test of the interaction term. Abbreviations: BMI, body mass index; CESD‐10, Center for Epidemiologic Studies‐Depression 10‐point scale; PSQI, Pittsburgh Sleep Quality Index; SE, standard error.

Refreshed sleep rating 0.55 (0.18) 0.03 (0.13) 0.006
Sleep quality rating 0.52 (0.16) 0.02 (0.11) 0.012
Negative impact of sleep interruptions -1.65 (0.48) 0.05 (0.32) 0.006
Total sleep time, diary 11.2 (18.1) 6.3 (13.0) 0.44
Total sleep time, actigraphy 7.3 (25.5) 1.0 (15.3) 0.83
Sleep efficiency, diary 1.1 (2.3) 1.5 (1.6) 0.89
Sleep efficiency, actigraphy 0.9 (4.0) 0.7 (2.4) 0.74
Figure 1
Plot of average change in refreshed sleep over hospital days for intervention and control participants. *Slopes from linear mixed models are adjusted for age, BMI, depression score, prehospital sleep quality, and whether a sleep medication was taken the night before during hospitalization.

DISCUSSION

Poor sleep is common among hospitalized adults, both at home prior to admission and especially in the hospital. This pilot study demonstrated the feasibility of implementing a sleep-promoting intervention on a hospital's general medicine unit. Although participants on the intervention unit reported improved sleep quality and feeling more refreshed, this was not corroborated by actigraphy data (such as sleep time or sleep efficiency). Although care team engagement and implementation of unit-wide interventions were high, patient use of individual components was imperfect. Of particular interest, however, the intervention group began to have improved sleep quality and fewer disruptions with subsequent nights sleeping in the hospital.

Our finding of a high prevalence of poor sleep among hospitalized patients is congruent with prior studies and supports the great need to screen for and address poor sleep within the hospital setting.[24, 25, 26] Attempts to promote sleep among hospitalized patients may be effective: prior intervention studies have shown that relaxation techniques improved sleep quality by almost 38%,[27] and that ear plugs and eye masks offered some benefit in promoting sleep within the hospital.[28] Our study's multicomponent intervention to minimize disruptions led to improvement in sleep quality, more restorative sleep, and decreased reports of sleep disruptions, especially among patients with a longer length of stay. As suggested by Thomas et al.[29] and seen in our data, this improvement across subsequent nights suggests there may be an adaptation to the new environment and that it may take time for a sleep intervention to work.

Hospitalized patients often fail to reclaim much-needed restorative sleep at the time when they are most vulnerable. Patients cite routine care as the primary cause of sleep disruption and often recognize the ways that the hospital environment interferes with their ability to sleep.[30, 31, 32] The sleep-promoting interventions used in our study would be characterized by most as low effort[33] with potential for high yield, even though our patients realized only modest improvements in sleep outcomes.

Several limitations of this study should be considered. First, although we had hoped to collect substantial amounts of objective data, the average duration of actigraphy observation was less than 48 hours. This may have constrained the group-by-time interaction analysis with actigraphy data, as studies have shown increased accuracy of actigraphy measures with longer wear.[34] By contrast, the sleep diary, collected throughout hospitalization, demonstrated significant improvements across consecutive daily measurements. Second, the proximity of the study units raised concern for study contamination, which could have reduced the differences observed in the outcome measures. Although physicians work on both units, the nursing and support care teams are distinct and unit dependent. Finally, this was not a randomized trial. Patient assignment to the treatment arms was haphazard and occurred within the hospital's admitting process: allocation to either the intervention or the control group was based on bed availability at the time of admission. Although the groups were similar in most characteristics, more control participants reported taking sleep medications prior to admission than intervention participants. Fortunately, in-hospital hypnotic use did not differ between groups during the admission, the period when sleep data were being captured.

Overall, this pilot study suggests that patients admitted to a general medicine ward fail to realize sufficient restorative sleep while in the hospital, and sleep disruption is frequent. This study demonstrates the opportunity for, and feasibility of, sleep-promoting interventions in which facilitating sleep is considered a top priority and a vital component of healthcare delivery. When trying to improve patients' sleep in the hospital, it may take several consecutive nights to realize a return on investment.

Acknowledgements

The authors acknowledge the Department of Nursing, Johns Hopkins Bayview Medical Center, and care teams of the Zieve Medicine Units, and the Center for Child and Community Health Research Biostatistics, Epidemiology and Data Management (BEAD) Core group.

Disclosures: Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. Dr. Howell is the chief of the Division of Hospital Medicine at Johns Hopkins Bayview Medical Center and associate professor at Johns Hopkins School of Medicine. He served as the president of the Society of Hospital Medicine (SHM) in 2013 and currently serves as a board member. He is also a senior physician advisor for SHM. He is a coinvestigator grant recipient on an Agency for Healthcare Research and Quality grant on medication reconciliation funded through Baylor University. He was previously a coinvestigator grant recipient of Center for Medicare and Medicaid Innovations grant that ended in June 2015.

References
  1. Institute of Medicine (US) Committee on Sleep Medicine and Research. Sleep disorders and sleep deprivation: an unmet public health problem. Washington, DC: National Academies Press; 2006. Available at: http://www.ncbi.nlm.nih.gov/books/NBK19960. Accessed September 16, 2014.
  2. Schoenborn CA, Adams PE. Health behaviors of adults: United States, 2005–2007. Vital Health Stat 10. 2010;245:1–132.
  3. Mallon L, Broman JE, Hetta J. High incidence of diabetes in men with sleep complaints or short sleep duration: a 12-year follow-up study of a middle-aged population. Diabetes Care. 2005;28:2762–2767.
  4. Donat M, Brown C, Williams N, et al. Linking sleep duration and obesity among black and white US adults. Clin Pract (Lond). 2013;10(5):661–667.
  5. Cappuccio FP, Stranges S, Kandala NB, et al. Gender-specific associations of short sleep duration with prevalent and incident hypertension: the Whitehall II Study. Hypertension. 2007;50:693–700.
  6. Rod NH, Kumari M, Lange T, Kivimäki M, Shipley M, Ferrie J. The joint effect of sleep duration and disturbed sleep on cause-specific mortality: results from the Whitehall II cohort study. PLoS One. 2014;9(4):e91965.
  7. Martin JL, Fiorentino L, Jouldjian S, Mitchell M, Josephson KR, Alessi CA. Poor self-reported sleep quality predicts mortality within one year of inpatient post-acute rehabilitation among older adults. Sleep. 2011;34(12):1715–1721.
  8. Kahn-Greene ET, Killgore DB, Kamimori GH, Balkin TJ, Killgore WD. The effects of sleep deprivation on symptoms of psychopathology in healthy adults. Sleep Med. 2007;8(3):215–221.
  9. Irwin MR, Wang M, Campomayor CO, Collado-Hidalgo A, Cole S. Sleep deprivation and activation of morning levels of cellular and genomic markers of inflammation. Arch Intern Med. 2006;166:1756–1762.
  10. Knutson KL, Spiegel K, Penev P, Van Cauter E. The metabolic consequences of sleep deprivation. Sleep Med Rev. 2007;11(3):163–178.
  11. Isaia G, Corsinovi L, Bo M, et al. Insomnia among hospitalized elderly patients: prevalence, clinical characteristics and risk factors. Arch Gerontol Geriatr. 2011;52:133–137.
  12. Rocha FL, Hara C, Rodrigues CV, et al. Is insomnia a marker for psychiatric disorders in general hospitals? Sleep Med. 2005;6:549–553.
  13. Adachi M, Staisiunas PG, Knutson KL, Beveridge C, Meltzer DO, Arora VM. Perceived control and sleep in hospitalized older adults: a sound hypothesis? J Hosp Med. 2013;8:184–190.
  14. Buxton OM, Ellenbogen JM, Wang W, et al. Sleep disruption due to hospital noises: a prospective evaluation. Ann Intern Med. 2012;157:170–179.
  15. Redeker NS. Sleep in acute care settings: an integrative review. J Nurs Scholarsh. 2000;32(1):31–38.
  16. Buysse D. Physical health as it relates to insomnia. Talk presented at: Center for Behavior and Health, Lecture Series in Johns Hopkins Bayview Medical Center; July 17, 2012; Baltimore, MD.
  17. Buysse DJ, Reynolds CF, Monk TH, Berman SR, Kupfer DJ. The Pittsburgh Sleep Quality Index: a new instrument for psychiatric practice and research. Psychiatry Res. 1989;28:193–213.
  18. Smith MT, Wegener ST. Measures of sleep: The Insomnia Severity Index, Medical Outcomes Study (MOS) Sleep Scale, Pittsburgh Sleep Diary (PSD), and Pittsburgh Sleep Quality Index (PSQI). Arthritis Rheumatol. 2003;49:S184–S196.
  19. Brown H, Prescott R. Applied Mixed Models in Medicine. 3rd ed. Somerset, NJ: Wiley; 2014:539.
  20. Blackwell E, Leon CF, Miller GE. Applying mixed regression models to the analysis of repeated-measures data in psychosomatic medicine. Psychosom Med. 2006;68(6):870–878.
  21. Peugh JL, Enders CK. Using the SPSS mixed procedure to fit cross-sectional and longitudinal multilevel models. Educ Psychol Meas. 2005;65(5):717–741.
  22. McCoach DB, Black AC. Introduction to estimation issues in multilevel modeling. New Dir Inst Res. 2012;2012(154):23–39.
  23. Enders CK, Tofighi D. Centering predictor variables in cross-sectional multilevel models: a new look at an old issue. Psychol Methods. 2007;12(2):121–138.
  24. Manian F, Manian C. Sleep quality in adult hospitalized patients with infection: an observational study. Am J Med Sci. 2015;349(1):56–60.
  25. Shear TC, Balachandran JS, Mokhlesi B, et al. Risk of sleep apnea in hospitalized older patients. J Clin Sleep Med. 2014;10:1061–1066.
  26. Edinger JD, Lipper S, Wheeler B. Hospital ward policy and patients' sleep patterns: a multiple baseline study. Rehabil Psychol. 1989;34(1):43–50.
  27. Tamrat R, Huynh-Le MP, Goyal M. Non-pharmacologic interventions to improve the sleep of hospitalized patients: a systematic review. J Gen Intern Med. 2014;29:788–795.
  28. Le Guen M, Nicolas-Robin A, Lebard C, Arnulf I, Langeron O. Earplugs and eye masks vs routine care prevent sleep impairment in post-anaesthesia care unit: a randomized study. Br J Anaesth. 2014;112(1):89–95.
  29. Thomas KP, Salas RE, Gamaldo C, et al. Sleep rounds: a multidisciplinary approach to optimize sleep quality and satisfaction in hospitalized patients. J Hosp Med. 2012;7:508–512.
  30. Bihari S, McEvoy RD, Kim S, Woodman RJ, Bersten AD. Factors affecting sleep quality of patients in intensive care unit. J Clin Sleep Med. 2012;8(3):301–307.
  31. Flaherty JH. Insomnia among hospitalized older persons. Clin Geriatr Med. 2008;24(1):51–67.
  32. McDowell JA, Mion LC, Lydon TJ, Inouye SK. A nonpharmacological sleep protocol for hospitalized older patients. J Am Geriatr Soc. 1998;46(6):700–705.
  33. The Action Priority Matrix: making the most of your opportunities. TimeAnalyzer website. Available at: http://www.timeanalyzer.com/lib/priority.htm. Published 2006. Accessed July 10, 2015.
  34. Marino M, Li Y, Rueschman MN, et al. Measuring sleep: accuracy, sensitivity, and specificity of wrist actigraphy compared to polysomnography. Sleep. 2013;36(11):1747–1755.
Issue
Journal of Hospital Medicine - 11(7)
Page Number
467-472

Approximately 70 million adults in the United States have sleep disorders,[1] and up to 30% of adults report sleeping less than 6 hours per night.[2] Poor sleep has been associated with undesirable health outcomes.[1] Suboptimal sleep duration and quality have been associated with a higher prevalence of chronic health conditions including hypertension, type 2 diabetes, coronary artery disease, stroke, and obesity, as well as increased overall mortality.[3, 4, 5, 6, 7]

Sleep plays an important role in the restoration of wellness, and poor sleep is associated with physiological disturbances that may result in poor healing.[8, 9, 10] In the literature, the prevalence of insomnia was 36.7% among elderly hospitalized patients[11] and 50% among younger hospitalized patients.[12] Hospitalized patients frequently cite their acute illness, hospital-related environmental factors, and disruptions that are part of routine care as causes of poor sleep during hospitalization.[13, 14, 15] Although the pervasiveness of poor sleep among hospitalized patients is high, interventions that prioritize sleep optimization as part of routine care are uncommon. Few studies have examined the effect of sleep-promoting measures on both sleep quality and sleep duration among patients hospitalized on general medicine units.

In this study, we aimed to assess the feasibility of incorporating sleep-promoting interventions on a general medicine unit and sought to identify differences in sleep measures between intervention and control groups. The primary outcome that we hoped to lengthen in the intervention group was sleep duration, measured both by sleep diary and by actigraphy. Secondary outcomes that we hypothesized would improve in the intervention group included feeling more refreshed in the morning, sleep efficiency, and fewer sleep disruptions. As a feasibility pilot, we also wanted to explore the ease or difficulty with which sleep-promoting interventions could be incorporated into the team's workflow.

METHODS

Study Design

A quasi-experimental prospective pilot study was conducted at a single academic center, the Johns Hopkins Bayview Medical Center. Participants included adult patients admitted to the general medicine ward from July 2013 through January 2014. Patients with dementia; inability to complete survey questionnaires because of delirium, disability, or a language barrier; active withdrawal from alcohol or controlled substances; or acute psychiatric illness were excluded from this study.

The medicine ward at our medical center comprises 2 structurally identical units that admit patients with similar diagnoses, disease severity, and case-mix disease groups. Nursing and support staff are unit specific. Pertaining to the sleep environment, both units have semiprivate and private rooms, and visitors are encouraged to leave by 10 pm. Patients admitted from the emergency room to the medicine ward are assigned haphazardly to either unit based on bed availability. For the purpose of this study, we selected 1 unit to be the control unit and identified the other as the sleep-promoting intervention unit.

Study Procedure

Upon arrival to the medicine unit, the research team approached all patients who met study eligibility criteria for study participation. Patients were provided full disclosure of the study using institutional research guidelines, and those interested in participating were consented. Participants were not explicitly told about their group assignment. This study was approved by the Johns Hopkins Institutional Review Board for human subject research.

In this study, the control group participants received standard of care as it pertains to sleep promotion. No additional sleep‐promoting measures were implemented to routine medical care, medication administration, nursing care, and overnight monitoring. Patients who used sleep medications at home, prior to admission, had those medicines continued only if they requested them and they were not contraindicated given their acute illness. Participants on the intervention unit were exposed to a nurse‐delivered sleep‐promoting protocol aimed at transforming the culture of care such that helping patients to sleep soundly was made a top priority. Environmental changes included unit‐wide efforts to minimize light and noise disturbances by dimming hallway lights, turning off room lights, and encouraging care teams to be as quiet as possible. Other strategies focused largely on minimizing care‐related disruptions. These included, when appropriate, administering nighttime medications in the early evening, minimizing fluids overnight, and closing patient room doors where appropriate. Further, patients were offered the following sleep‐promoting items to choose from: ear plugs, eye masks, warm blankets, and relaxation music. The final component of our intervention was 30‐minute sleep hygiene education taught by a physician. It highlighted basic sleep physiology and healthy sleep behavior adapted from Buysse.[16] Patients learned the role of behaviors such as reducing time lying awake in bed, setting standard wake‐up time and sleep time, and going to bed only when sleepy. This behavioral education was supplemented by a handout with sleep‐promoting suggestions.

The care team on the intervention unit received comprehensive study‐focused training in which night nursing teams were familiarized with the sleep‐promoting protocol through in‐service sessions facilitated by 1 of the authors (E.W.G.). To further promote study implementation, sleep‐promoting procedures were supported and encouraged by supervising nurses who made daily reminders to the intervention unit night care team of the goals of the sleep‐promoting study during evening huddles performed at the beginning of each shift. To assess the adherence of the sleep protocol, the nursing staff completed a daily checklist of elements within the protocol that were employed .

Data Collection and Measures

Baseline Measures

At the time of enrollment, study patients' demographic information, including use of chronic sleep medication prior to admission, was collected. Participants were assessed for baseline sleep disturbance prior to admission using standardized, validated sleep assessment tools: Pittsburgh Sleep Quality Index (PSQI), the Insomnia Severity Index (ISI), and the Epworth Sleepiness Scale (ESS). PSQI, a 19‐item tool, assessed self‐rated sleep quality measured over the prior month; a score of 5 or greater indicated poor sleep.[17] ISI, a 7‐item tool, identified the presence, rated the severity, and described the impact of insomnia; a score of 10 or greater indicated insomnia.[18] ESS, an 8‐item self‐rated tool, evaluated the impact of perceived sleepiness on daily functioning in 8 different environments; a score of 9 or greater was linked to burden of sleepiness. Participants were also screened for both obstructive sleep apnea (using the Berlin Sleep Apnea Index) and clinical depression (using Center for Epidemiologic Studies‐Depression 10‐point scale), as these conditions affect sleep patterns. These data are shown in Table 1.

Characteristics of Study Participants (n = 112)
Intervention, n = 48 Control, n = 64 P Value
  • NOTE: The entry for number of sleep diaries per participant in intervention and control groups is presented after capping at 4 diaries. Abbreviations: BMI, body mass index; CESD‐10, Center for Epidemiologic Studies‐Depression 10‐point scale; ESS, Epworth Sleepiness Scale; ISI, Insomnia Severity Index; PSQI, Pittsburgh Sleep Quality Index; SD, standard deviation.

Age, y, mean (SD) 58.2 (16) 56.9 (17) 0.69
Female, n (%) 26 (54.2) 36 (56.3) 0.83
Race, n (%)
Caucasian 33 (68.8) 46 (71.9) 0.92
African American 13 (27.1) 16 (25.0)
Other 2 (4.2) 2 (3.1)
BMI, mean (SD) 32.1 (9.2) 31.8 (9.3) 0.85
Admitting service, n (%)
Teaching 21 (43.8) 18 (28.1) 0.09
Nonteaching 27 (56.3) 46 (71.9)
Sleep medication prior to admission, n (%) 7 (14.9) 21 (32.8) 0.03
Length of stay, d, mean (SD) 4.9 (3) 5.8 (3.9) 0.19
Number of sleep diaries per participant, mean (SD) 2.2 (0.8) 2.6 (0.9) 0.02
Proportion of hospital days with sleep diaries per participant, mean (SD) 0.6 (0.2) 0.5 (0.2) 0.71
Number of nights with actigraphy per participant, mean (SD) 1.2 (0.7) 1.4 (0.8) 0.16
Proportion of hospital nights with actigraphy per participant, mean (SD) 0.3 (0.2) 0.3 (0.1) 0.91
Baseline sleep measures
PSQI, mean (SD) 9.9 (4.6) 9.1 (4.5) 0.39
ESS, mean (SD) 7.4 (4.2) 7.7 (4.8) 0.79
ISI, mean (SD) 11.9 (7.6) 10.8 (7.4) 0.44
CESD‐10, mean (SD) 12.2 (7.2) 12.8 (7.6) 0.69
Berlin Sleep Apnea, mean (SD) 0.63 (0.5) 0.61 (0.5) 0.87

Sleep Diary Measures

A sleep diary completed each morning assessed the outcome measures: perceived sleep quality, how refreshing sleep was, and sleep duration. The diary employed a 5-point Likert rating scale ranging from poor (1) to excellent (5). Perceived sleep duration was calculated from patients' reported time in bed, time to fall asleep, wake time, and the number and duration of awakenings after sleep onset. These data were used to compute total sleep time (TST) and sleep efficiency (SE). The sleep diary also captured other pertinent sleep-related measures, including use of sleep medication the prior night and specific sleep disruptions from the prior night. To measure the impact of these disruptions, we created a summed scale score of 4 items that negatively interfered with sleep (light, temperature, noise, and interruptions; 5-point scales from 1 = not at all to 5 = significant). Principal axis factor analysis with varimax rotation yielded 1 disruption factor accounting for 55% of the variance, and Cronbach's α was 0.73.
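The diary-based TST and SE arithmetic can be sketched as follows; the field names and example times are assumptions for illustration, not study data.

```python
from datetime import datetime, timedelta

def diary_tst_se(bed_time, wake_time, sleep_latency_min, waso_min):
    """Compute total sleep time (TST, minutes) and sleep efficiency (SE, %)
    from self-reported diary fields: time in bed, time to fall asleep,
    and wake time after sleep onset (WASO)."""
    if wake_time <= bed_time:          # reported wake time crossed midnight
        wake_time += timedelta(days=1)
    time_in_bed = (wake_time - bed_time).total_seconds() / 60.0
    tst = time_in_bed - sleep_latency_min - waso_min
    se = 100.0 * tst / time_in_bed
    return tst, se

# Illustrative night: in bed 22:30, awake 06:00, 20 min to fall asleep,
# 30 min awake after sleep onset.
bed = datetime(2014, 1, 6, 22, 30)
wake = datetime(2014, 1, 6, 6, 0)   # interpreted as the next morning
tst, se = diary_tst_se(bed, wake, sleep_latency_min=20, waso_min=30)
# 450 minutes in bed -> TST 400 min, SE ~88.9%
```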

Actigraphy Measures

Objective sleep outcomes were recorded using an actigraphy wrist monitor (ActiSleep Plus (GT3X+); ActiGraph, Pensacola, FL). Participants wore the monitor from the day of enrollment throughout the hospital stay or until transfer out of the unit. Data were analyzed and scored using ActiLife 6 data analysis software (version 6.10.1; ActiGraph). Given the unique inpatient setting, time in bed was calculated from sleep diary responses as the interval between reported sleep time and reported wake-up time. These intervals were entered into the ActiLife 6 software, which applied the validated Cole-Kripke algorithm to calculate actigraphy TST and SE.
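As a rough sketch of this style of actigraphy scoring, each epoch within the diary-defined time-in-bed window can be classified from a weighted window of surrounding activity counts. The weights and threshold below are placeholders, not the published Cole-Kripke coefficients (which depend on the device and epoch length).

```python
# Schematic of Cole-Kripke-style epoch scoring: an epoch is scored as sleep
# when a weighted sum of activity counts in a surrounding window falls below
# a threshold. Weights and threshold here are illustrative only.

def score_epochs(activity, weights=(0.04, 0.04, 0.20, 0.04, 0.04), threshold=1.0):
    """Return a list of booleans (True = scored as sleep), one per epoch."""
    half = len(weights) // 2
    scored = []
    for i in range(len(activity)):
        d = 0.0
        for j, w in enumerate(weights):
            k = i + j - half           # neighboring epoch index
            if 0 <= k < len(activity):
                d += w * activity[k]
        scored.append(d < threshold)
    return scored

def tst_and_se(scored, epoch_min=1):
    """TST (minutes) and SE (%) over the diary-defined time-in-bed window."""
    tst = sum(scored) * epoch_min
    se = 100.0 * tst / (len(scored) * epoch_min)
    return tst, se
```

A quiet stretch of epochs is scored entirely as sleep, while a burst of activity marks the surrounding epochs as wake.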

Statistical Analysis

Descriptive and inferential statistics were computed using the Statistical Package for the Social Sciences, version 22 (IBM, Armonk, NY). We computed means, proportions, and measures of dispersion for all study variables. To test differences in sleep diary and actigraphy outcomes between the intervention and control arms, we used linear mixed models with full maximum likelihood estimation to model each of the 7 continuous sleep outcomes. These methods account for the nonindependence of repeated observations within hospital patients.[19] For all outcomes, the unit of analysis was nightly observations nested within patient-level characteristics. Full maximum likelihood estimation is a robust and preferred method for handling values missing at random in longitudinal datasets.[20]
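The authors used SPSS; purely as an illustrative alternative, a comparable random-intercept model fit by full maximum likelihood can be sketched in Python with statsmodels. All variable names and data here are simulated, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate nightly observations nested within patients (illustrative only).
rng = np.random.default_rng(0)
n_patients, n_nights = 40, 4
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_patients), n_nights),
    "day": np.tile(np.arange(n_nights), n_patients),
    "group": np.repeat(rng.integers(0, 2, n_patients), n_nights),  # 0=control, 1=intervention
})
# Outcome with a patient-level random intercept plus noise.
intercepts = rng.normal(3.0, 0.8, n_patients)
df["quality"] = (intercepts[df["patient"]] + 0.1 * df["group"]
                 + rng.normal(0, 0.5, len(df)))

# Random-intercept linear mixed model; reml=False requests full maximum
# likelihood, as described in the text. "day * group" includes the
# group-by-day interaction tested for differences in slopes.
model = smf.mixedlm("quality ~ day * group", df, groups=df["patient"])
result = model.fit(reml=False)
print(result.summary())
```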

To model repeated observations, mixed models included a term representing time in days. For each outcome, we specified unconditional growth models to examine the variability between and within patients by computing intraclass correlations and inspecting variance components. We used model fit indices (−2LL deviance, Akaike's information criterion, and Schwarz's Bayesian criterion) as appropriate to determine the best-fitting model specifications in terms of random effects and covariance structure.[21, 22]
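The intraclass correlation from an unconditional model can be approximated with the classic one-way random-effects ANOVA estimator. This sketch uses simulated, balanced data rather than the study's observations.

```python
import numpy as np

def icc_oneway(y):
    """One-way random-effects ICC from a balanced 2-D array
    (rows = patients, columns = nightly observations)."""
    n, k = y.shape
    grand = y.mean()
    row_means = y.mean(axis=1)
    # Between-patient and within-patient mean squares.
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msw = np.sum((y - row_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Simulated data: between-patient SD 1.0, within-patient (nightly) SD 0.7,
# so the true ICC is 1.0**2 / (1.0**2 + 0.7**2) ~ 0.67.
rng = np.random.default_rng(1)
patient_effect = rng.normal(0, 1.0, (100, 1))
nightly = patient_effect + rng.normal(0, 0.7, (100, 3))
estimate = icc_oneway(nightly)   # should land near 0.67
```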

We tested the main effect of the intervention on sleep outcomes and the interaction of group (intervention vs control) by hospital day, to determine whether the slopes representing average change in sleep outcomes over hospital days differed between groups. All models adjusted for age, body mass index, depression, and baseline sleep quality (PSQI) as time-invariant covariates, and for whether participants had taken a sleep medication the day before as a time-varying covariate. Adjustment for prehospitalization sleep quality was particularly important; we used the PSQI to control for sleep quality because it is a well-validated, multidimensional measure that includes prehospital use of sleep medications. In a series of sensitivity analyses, we also explored whether substituting the dichotomous self-reported measure of whether participants regularly took sleep medications prior to hospitalization for the PSQI would change our substantive findings. All covariates were centered at the grand mean, following guidelines for appropriate interpretation of regression coefficients.[23]
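Grand-mean centering itself is simple arithmetic; a minimal sketch, assuming hypothetical covariate column names:

```python
import pandas as pd

def grand_mean_center(df, cols):
    """Add grand-mean-centered copies of the given covariate columns,
    so each centered column has mean zero across all observations."""
    out = df.copy()
    for c in cols:
        out[c + "_c"] = df[c] - df[c].mean()
    return out

# Illustrative covariates (column names are assumptions, not study data).
df = pd.DataFrame({"age": [40, 60, 80], "bmi": [25.0, 30.0, 35.0]})
centered = grand_mean_center(df, ["age", "bmi"])
# age_c = [-20, 0, 20]; bmi_c = [-5, 0, 5]; each centered column sums to 0.
```

Centering at the grand mean makes the model intercept interpretable as the expected outcome for a patient with average covariate values.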

RESULTS

Of the 112 study patients, 48 were in the intervention unit and 64 in the control unit. Eighty‐five percent of study participants endorsed poor sleep prior to hospital admission on the PSQI sleep quality measure, which was similar in both groups (Table 1).

Participants completed 1 to 8 sleep diary entries (mean = 2.5, standard deviation = 1.1). Because only 6 participants completed 5 or more diaries, we capped the number of diaries included in the inferential analysis at 4 to avoid influential outliers identified on scatterplots. Fifty-seven percent of participants had 1 night of valid actigraphy data (n = 64); 29% had 2 nights (n = 32); 8% had 3 or 4 nights; and 9 participants did not have any usable actigraphy data. The extent to which the intervention was accepted by patients in the intervention group was highly variable. Adherence with the 10 pm lights-off, telephone-off, and TV-off policy was 87%, 67%, and 64% among intervention patients, respectively. Uptake of sleep menu items was also highly variable, and no single item was used by more than half of patients (acceptance rates ranged from 11% to 44%). Eye masks (44%) and ear plugs (32%) were the most commonly used items.

A greater proportion of patients in the control arm (33%) had been taking sleep medications prior to hospitalization compared to the intervention arm (15%; χ2 = 4.6, P < 0.05). However, hypnotic medication use in the hospital was similar across both groups (intervention unit patients: 25% and controls: 21%, P = 0.49).

Intraclass correlations for the 7 sleep outcomes ranged from 0.59 to 0.76 on sleep diary outcomes, and from 0.61 to 0.85 on actigraphy. Dependency of sleep measures within patients accounted for 59% to 85% of variance in sleep outcomes. The best‐fit mixed models included random intercepts only. The results of mixed models testing the main effect of intervention versus comparison arm on sleep outcome measures, adjusted for covariates, are presented in Table 2. Total sleep time was the only outcome that was significantly different between groups; the average total sleep time, calculated from sleep diary data, was longer in the intervention group by 49 minutes.

Differences in Subjective and Objective Sleep Outcome Measures From Linear Mixed Models
Intervention, n = 48 Control, n = 64 P Value
  • NOTE: All differences in sleep outcomes adjusted for age, BMI, baseline sleep quality (PSQI), depression (CES‐D), and whether a sleep medication was taken the previous night. Abbreviations: BMI, body mass index; CESD‐10, Center for Epidemiologic Studies‐Depression 10‐point scale; PSQI, Pittsburgh Sleep Quality Index; SE, standard error.

Sleep diary outcomes
Sleep quality, mean (SE) 3.14 (0.16) 3.08 (0.13) 0.79
Refreshed sleep, mean (SE) 2.94 (0.17) 2.74 (0.14) 0.38
Negative impact of sleep disruptions, mean (SE) 4.39 (0.58) 4.81 (0.48) 0.58
Total sleep time, min, mean (SE) 422 (16.2) 373 (13.2) 0.02
Sleep efficiency, %, mean (SE) 83.5 (2.3) 82.1 (1.9) 0.65
Actigraphy outcomes
Total sleep time, min, mean (SE) 377 (16.8) 356 (13.2) 0.32
Sleep efficiency, %, mean (SE) 72.7 (2.2) 74.8 (1.8) 0.45

Table 3 lists slopes representing average change in sleep measures over hospital days in both groups. The P values represent z tests of interaction terms in mixed models, after adjustment for covariates, testing whether slopes significantly differed between groups. Of the 7 outcomes, 3 sleep diary measures had significant interaction terms. For ratings of sleep quality, refreshing sleep, and sleep disruptions, slopes in the control group were flat, whereas slopes in the intervention group demonstrated improvements in ratings of sleep quality and refreshed sleep, and a decrease in the impact of sleep disruptions over the course of subsequent nights in the hospital. Figure 1 illustrates a plot of the adjusted average slopes for the refreshed sleep score across hospital days in intervention and control groups.

Average Change in Sleep Outcomes Across Hospital Days for Patients in Intervention and Comparison Groups
Intervention, Slope (SE), n = 48 Control, Slope (SE), n = 64 P Value
  • NOTE: Mixed models were adjusted for age, BMI, baseline sleep quality (PSQI), baseline depression (CES‐D), and whether or not a sleep medication was taken the previous night.

  • Each slope represents the average change in sleep diary outcome from night to night in each condition. P values represent the Wald test of the interaction term. Abbreviations: BMI, body mass index; CESD‐10, Center for Epidemiologic Studies‐Depression 10‐point scale; PSQI, Pittsburgh Sleep Quality Index; SE, standard error.

Refreshed sleep rating 0.55 (0.18) 0.03 (0.13) 0.006
Sleep quality rating 0.52 (0.16) 0.02 (0.11) 0.012
Negative impact of sleep interruptions −1.65 (0.48) 0.05 (0.32) 0.006
Total sleep time, diary 11.2 (18.1) 6.3 (13.0) 0.44
Total sleep time, actigraphy 7.3 (25.5) 1.0 (15.3) 0.83
Sleep efficiency, diary 1.1 (2.3) 1.5 (1.6) 0.89
Sleep efficiency, actigraphy 0.9 (4.0) 0.7 (2.4) 0.74
Figure 1
Plot of average changes in refreshed sleep over hospital days for intervention and control participants. *Slopes from linear mixed models are adjusted for age, BMI, depression score, prehospital sleep quality, and whether a sleep medication was taken the night before during hospitalization.

DISCUSSION

Poor sleep is common among hospitalized adults, both at home prior to admission and especially in the hospital. This pilot study demonstrated the feasibility of rolling out a sleep-promoting intervention on a hospital's general medicine unit. Although participants on the intervention unit reported improved sleep quality and feeling more refreshed, this was not supported by actigraphy data (such as sleep time or sleep efficiency). Although care team engagement and implementation of the unit-wide interventions were high, patient use of individual components was variable. Of particular interest, however, the intervention group began to report improved sleep quality and fewer disruptions over subsequent nights in the hospital.

Our finding of a high prevalence of poor sleep among hospitalized patients is congruent with prior studies and supports the need to screen for and address poor sleep within the hospital setting.[24, 25, 26] Attempts to promote sleep among hospitalized patients may be effective. Prior sleep-promoting intervention studies demonstrated that relaxation techniques improved sleep quality by almost 38%,[27] and that ear plugs and eye masks showed some benefit in promoting sleep within the hospital.[28] Our multicomponent intervention to minimize disruptions led to improvement in sleep quality, more restorative sleep, and decreased reports of sleep disruptions, especially among patients with a longer length of stay. As suggested by Thomas et al.[29] and seen in our data, this improvement across subsequent nights suggests that patients may adapt to the new environment and that the sleep intervention may take time to work.

Hospitalized patients often fail to obtain much-needed restorative sleep at the time when they are most vulnerable. Patients cite routine care as the primary cause of sleep disruption and often recognize the ways in which the hospital environment interferes with their ability to sleep.[30, 31, 32] The sleep-promoting interventions used in our study would be characterized by most as low effort[33] with potential for high yield, even though our patients experienced only modest improvements in sleep outcomes.

Several limitations of this study should be considered. First, although we had hoped to collect substantial amounts of objective data, the average actigraphy observation time was less than 48 hours. This may have constrained the group-by-time interaction analysis with actigraphy data, as studies have shown increased accuracy of actigraphy measures with longer wear.[34] By contrast, the sleep diary, collected throughout hospitalization, captured significant improvements across consecutive daily measurements. Second, the proximity of the study units raised concern for study contamination, which could have reduced the differences observed in the outcome measures. Although physicians work on both units, the nursing and support care teams are distinct and unit dependent. Finally, this was not a randomized trial. Patients were assigned to the treatment arms haphazardly through the hospital's admitting process: allocation to either the intervention or the control group was based on bed availability at the time of admission. Although both groups were similar in most characteristics, more control participants than intervention participants reported taking sleep medications prior to admission. Fortunately, hypnotic use did not differ between groups during the admission, the period when sleep data were being captured.

Overall, this pilot study suggests that patients admitted to a general medical ward fail to obtain sufficient restorative sleep in the hospital, and sleep disruption is frequent. This study demonstrates the opportunity for and feasibility of sleep-promoting interventions in which facilitating sleep is considered a top priority and a vital component of healthcare delivery. When trying to improve patients' sleep in the hospital, it may take several consecutive nights to realize a return on investment.

Acknowledgements

The authors acknowledge the Department of Nursing, Johns Hopkins Bayview Medical Center, and care teams of the Zieve Medicine Units, and the Center for Child and Community Health Research Biostatistics, Epidemiology and Data Management (BEAD) Core group.

Disclosures: Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. Dr. Howell is the chief of the Division of Hospital Medicine at Johns Hopkins Bayview Medical Center and associate professor at Johns Hopkins School of Medicine. He served as the president of the Society of Hospital Medicine (SHM) in 2013 and currently serves as a board member. He is also a senior physician advisor for SHM. He is a coinvestigator grant recipient on an Agency for Healthcare Research and Quality grant on medication reconciliation funded through Baylor University. He was previously a coinvestigator grant recipient of Center for Medicare and Medicaid Innovations grant that ended in June 2015.

Approximately 70 million adults in the United States have sleep disorders,[1] and up to 30% of adults report sleeping less than 6 hours per night.[2] Poor sleep has been associated with undesirable health outcomes.[1] Suboptimal sleep duration and sleep quality have been associated with a higher prevalence of chronic health conditions, including hypertension, type 2 diabetes, coronary artery disease, stroke, and obesity, as well as increased overall mortality.[3, 4, 5, 6, 7]

Sleep plays an important role in the restoration of wellness. Poor sleep is associated with physiological disturbances that may result in poor healing.[8, 9, 10] In the literature, the prevalence of insomnia was 36.7% among elderly hospitalized patients[11] and 50% among younger hospitalized patients.[12] Hospitalized patients frequently cite their acute illness, hospital-related environmental factors, and disruptions that are part of routine care as causes of poor sleep during hospitalization.[13, 14, 15] Although poor sleep is pervasive among hospitalized patients, interventions that prioritize sleep optimization as part of routine care are uncommon. Few studies have examined the effect of sleep-promoting measures on both sleep quality and sleep duration among patients hospitalized on general medicine units.

In this study, we aimed to assess the feasibility of incorporating sleep-promoting interventions on a general medicine unit and sought to identify differences in sleep measures between intervention and control groups. The primary outcome that we hoped to influence and lengthen in the intervention group was sleep duration, measured both by sleep diary and by actigraphy. Secondary outcomes that we hypothesized would improve in the intervention group included feeling more refreshed in the mornings, sleep efficiency, and fewer sleep disruptions. As a feasibility pilot, we also wanted to explore the ease or difficulty with which sleep-promoting interventions could be incorporated into the team's workflow.

METHODS

Study Design

A quasi-experimental prospective pilot study was conducted at a single academic center, the Johns Hopkins Bayview Medical Center. Participants included adult patients admitted to the general medicine ward from July 2013 through January 2014. Patients were excluded if they had dementia; an inability to complete survey questionnaires due to delirium, disability, or a language barrier; active withdrawal from alcohol or controlled substances; or acute psychiatric illness.

The medicine ward at our medical center comprises 2 structurally identical units that admit patients with similar diagnoses, disease severity, and case-mix disease groups. Nursing and support staff are unit specific. With respect to the sleep environment, both units have semiprivate and private rooms, and visitors are encouraged to leave by 10 pm. Patients admitted from the emergency room to the medicine ward are assigned haphazardly to either unit based on bed availability. For the purpose of this study, we selected 1 unit as the control unit and identified the other as the sleep-promoting intervention unit.

Study Procedure

Upon arrival to the medicine unit, the research team approached all patients who met eligibility criteria. Patients were provided full disclosure of the study in accordance with institutional research guidelines, and those interested in participating were consented. Participants were not explicitly told about their group assignment. This study was approved by the Johns Hopkins Institutional Review Board for human subjects research.

Control group participants received the standard of care as it pertains to sleep promotion: no additional sleep-promoting measures were added to routine medical care, medication administration, nursing care, or overnight monitoring. Patients who used sleep medications at home prior to admission had those medicines continued only if they requested them and the medicines were not contraindicated by their acute illness. Participants on the intervention unit were exposed to a nurse-delivered sleep-promoting protocol aimed at transforming the culture of care such that helping patients sleep soundly was made a top priority. Environmental changes included unit-wide efforts to minimize light and noise disturbances by dimming hallway lights, turning off room lights, and encouraging care teams to be as quiet as possible. Other strategies focused largely on minimizing care-related disruptions; these included, when appropriate, administering nighttime medications in the early evening, minimizing fluids overnight, and closing patient room doors. Further, patients were offered a menu of sleep-promoting items to choose from: ear plugs, eye masks, warm blankets, and relaxation music. The final component of our intervention was a 30-minute sleep hygiene education session taught by a physician, which highlighted basic sleep physiology and healthy sleep behaviors adapted from Buysse.[16] Patients learned the role of behaviors such as reducing time lying awake in bed, setting standard wake-up and sleep times, and going to bed only when sleepy. This behavioral education was supplemented by a handout with sleep-promoting suggestions.

The care team on the intervention unit received comprehensive study‐focused training in which night nursing teams were familiarized with the sleep‐promoting protocol through in‐service sessions facilitated by 1 of the authors (E.W.G.). To further promote study implementation, sleep‐promoting procedures were supported and encouraged by supervising nurses who made daily reminders to the intervention unit night care team of the goals of the sleep‐promoting study during evening huddles performed at the beginning of each shift. To assess the adherence of the sleep protocol, the nursing staff completed a daily checklist of elements within the protocol that were employed .

Data Collection and Measures

Baseline Measures

At the time of enrollment, study patients' demographic information, including use of chronic sleep medication prior to admission, was collected. Participants were assessed for baseline sleep disturbance prior to admission using standardized, validated sleep assessment tools: Pittsburgh Sleep Quality Index (PSQI), the Insomnia Severity Index (ISI), and the Epworth Sleepiness Scale (ESS). PSQI, a 19‐item tool, assessed self‐rated sleep quality measured over the prior month; a score of 5 or greater indicated poor sleep.[17] ISI, a 7‐item tool, identified the presence, rated the severity, and described the impact of insomnia; a score of 10 or greater indicated insomnia.[18] ESS, an 8‐item self‐rated tool, evaluated the impact of perceived sleepiness on daily functioning in 8 different environments; a score of 9 or greater was linked to burden of sleepiness. Participants were also screened for both obstructive sleep apnea (using the Berlin Sleep Apnea Index) and clinical depression (using Center for Epidemiologic Studies‐Depression 10‐point scale), as these conditions affect sleep patterns. These data are shown in Table 1.

Characteristics of Study Participants (n = 112)
Intervention, n = 48 Control, n = 64 P Value
  • NOTE: The entry for number of sleep diaries per participant in intervention and control groups is presented after capping at 4 diaries. Abbreviations: BMI, body mass index; CESD‐10, Center for Epidemiologic Studies‐Depression 10‐point scale; ESS, Epworth Sleepiness Scale; ISI, Insomnia Severity Index; PSQI, Pittsburgh Sleep Quality Index; SD, standard deviation.

Age, y, mean (SD) 58.2 (16) 56.9 (17) 0.69
Female, n (%) 26 (54.2) 36 (56.3) 0.83
Race, n (%)
Caucasian 33 (68.8) 46 (71.9) 0.92
African American 13 (27.1) 16 (25.0)
Other 2 (4.2) 2 (3.1)
BMI, mean (SD) 32.1 (9.2) 31.8 (9.3) 0.85
Admitting service, n (%)
Teaching 21 (43.8) 18 (28.1) 0.09
Nonteaching 27 (56.3) 46 (71.9)
Sleep medication prior to admission, n (%) 7 (14.9) 21 (32.8) 0.03
Length of stay, d, mean (SD) 4.9 (3) 5.8 (3.9) 0.19
Number of sleep diaries per participant, mean (SD) 2.2 (0.8) 2.6 (0.9) 0.02
Proportion of hospital days with sleep diaries per participant, (SD) 0.6 (0.2) 0.5 (0.2) 0.71
Number of nights with actigraphy per participant, mean (SD) 1.2 (0.7) 1.4 (0.8) 0.16
Proportion of hospital nights with actigraphy per participant (SD) 0.3 (0.2) 0.3 (0.1) 0.91
Baseline sleep measures
PSQI, mean (SD) 9.9 (4.6) 9.1 (4.5) 0.39
ESS, mean (SD) 7.4 (4.2) 7.7 (4.8) 0.79
ISI, mean (SD) 11.9 (7.6) 10.8 (7.4) 0.44
CESD‐10, mean (SD) 12.2 (7.2) 12.8 (7.6) 0.69
Berlin Sleep Apnea, mean (SD) 0.63 (0.5) 0.61 (0.5) 0.87

Sleep Diary Measures

A sleep diary completed each morning assessed the outcome measures, perceived sleep quality, how refreshing sleep was, and sleep durations. The diary employed a 5‐point Likert rating scale ranging from poor (1) to excellent (5). Perceived sleep duration was calculated from patients' reported time in bed, time to fall asleep, wake time, and number and duration of awakenings after sleep onset on their sleep diary. These data were used to compute total sleep time (TST) and sleep efficiency (SE). The sleep diary also included other pertinent sleep‐related measures including use of sleep medication the night prior and specific sleep disruptions from the prior night. To measure the impact of disruptions due to disturbances the prior night, we created a summed scale score of 4 items that negatively interfered with sleep (light, temperature, noise, and interruptions; 5 point scales from 1 = not at all to 5 = significant). Analysis of principal axis factors with varimax rotation yielded 1 disruption factor accounting for 55% of the variance, and Cronbach's was 0.73.

Actigraphy Measures

Actigraphy outcomes of sleep were recorded using the actigraphy wrist watch (ActiSleep Plus (GT3X+); ActiGraph, Pensacola, FL). Participants wore the monitor from the day of enrollment throughout the hospital stay or until transfer out of the unit. Objective data were analyzed and scored using ActiLife 6 data analysis software (version 6.10.1; Actigraph). Time in bed, given the unique inpatient setting, was calculated using sleep diary responses as the interval between sleep time and reported wake up time. These were entered into the Actilife 6 software for the sleep scoring analysis using a validated algorithm, Cole‐Kripke, to calculate actigraphy TST and SE.

Statistical Analysis

Descriptive and inferential statistics were computed using Statistical Package for the Social Sciences version 22 (IBM, Armonk, NY). We computed means, proportions, and measures of dispersion for all study variables. To test differences in sleep diary and actigraphy outcomes between the intervention and control arms, we used linear mixed models with full maximum likelihood estimation to model each of the 7 continuous sleep outcomes. These statistical methods are appropriate to account for the nonindependence of continuous repeated observations within hospital patients.[19] For all outcomes, the unit of analysis was nightly observations nested within patient‐ level characteristics. The use of full maximum likelihood estimation is a robust and preferred method for handling values missing at random in longitudinal datasets.[20]

To model repeated observations, mixed models included a term representing time in days. For each outcome, we specified unconditional growth models to examine the variability between and within patients by computing intraclass correlations and inspecting variance components. We used model fit indices (‐2LL deviance, Akaike's information criterion, and Schwartz's Bayesian criterion) as appropriate to determine best fitting model specifications in terms of random effects and covariance structure.[21, 22]

We tested the main effect of the intervention on sleep outcomes and the interactive effect of group (intervention vs control) by hospital day, to test whether there were group differences in slopes representing average change in sleep outcomes over hospital days. All models adjusted for age, body mass index, depression, and baseline sleep quality (PSQI) as time‐invariant covariates, and whether participants had taken a sleep medication the day before, as a time‐varying covariate. Adjustment for prehospitalization sleep quality was a matter of particular importance. We used the PSQI to control for sleep quality because it is both a well‐validated, multidimensional measure, and it includes prehospital use of sleep medications. In a series of sensitivity analyses, we also explored whether the dichotomous self‐reported measure of whether or not participants regularly took sleep medications prior to hospitalization, rather than the PSQI, would change our substantive findings. All covariates were centered at the grand‐mean following guidelines for appropriate interpretation of regression coefficients.[23]

RESULTS

Of the 112 study patients, 48 were in the intervention unit and 64 in the control unit. Eighty‐five percent of study participants endorsed poor sleep prior to hospital admission on the PSQI sleep quality measure, which was similar in both groups (Table 1).

Participants completed 1 to 8 sleep diary entries (mean = 2.5, standard deviation = 1.1). Because only 6 participants completed 5 or more diaries, we constrained the number of diaries included in the inferential analysis to 4 to avoid influential outliers identified by scatterplots. Fifty‐seven percent of participants had 1 night of valid actigraphy data (n = 64); 29%, 2 nights (n = 32), 8% had 3 or 4 nights, and 9 participants did not have any usable actigraphy data. The extent to which the intervention was accepted by patients in the intervention group was highly variable. Unit‐wide patient adherence with the 10 pm lights off, telephone off, and TV off policy was 87%, 67%, and 64% of intervention patients, respectively. Uptake of sleep menu items was also highly variable, and not a single element was used by more than half of patients (acceptance rates ranged from 11% to 44%). Eye masks (44%) and ear plugs (32%) were the most commonly utilized items.

A greater proportion of patients in the control arm (33%) had been taking sleep medications prior to hospitalization compared to the intervention arm (15%; 2 = 4.6, P < 0.05). However, hypnotic medication use in the hospital was similar across the both groups (intervention unit patients: 25% and controls: 21%, P = 0.49).

Intraclass correlations for the 7 sleep outcomes ranged from 0.59 to 0.76 on sleep diary outcomes, and from 0.61 to 0.85 on actigraphy. Dependency of sleep measures within patients accounted for 59% to 85% of variance in sleep outcomes. The best‐fit mixed models included random intercepts only. The results of mixed models testing the main effect of intervention versus comparison arm on sleep outcome measures, adjusted for covariates, are presented in Table 2. Total sleep time was the only outcome that was significantly different between groups; the average total sleep time, calculated from sleep diary data, was longer in the intervention group by 49 minutes.

Differences in Subjective and Objective Sleep Outcome Measures From Linear Mixed Models
Intervention, n = 48 Control, n = 64 P Value
  • NOTE: All differences in sleep outcomes adjusted for age, BMI, baseline sleep quality (PSQI), depression (CES‐D), and whether a sleep medication was taken the previous night. Abbreviations: BMI, body mass index; CESD‐10, Center for Epidemiologic Studies‐Depression 10‐point scale; PSQI, Pittsburgh Sleep Quality Index; SE, standard error.

Sleep diary outcomes
Sleep quality, mean (SE) 3.14 (0.16) 3.08 (0.13) 0.79
Refreshed sleep, mean (SE) 2.94 (0.17) 2.74 (0.14) 0.38
Negative impact of sleep disruptions, mean (SE) 4.39 (0.58) 4.81 (0.48) 0.58
Total sleep time, min, mean (SE) 422 (16.2) 373 (13.2) 0.02
Sleep efficiency, %, mean (SE) 83.5 (2.3) 82.1 (1.9) 0.65
Actigraphy outcomes
Total sleep time, min, mean (SE) 377 (16.8) 356 (13.2) 0.32
Sleep efficiency, %, mean (SE) 72.7 (2.2) 74.8 (1.8) 0.45

Table 3 lists slopes representing average change in sleep measures over hospital days in both groups. The P values represent z tests of interaction terms in mixed models, after adjustment for covariates, testing whether slopes significantly differed between groups. Of the 7 outcomes, 3 sleep diary measures had significant interaction terms. For ratings of sleep quality, refreshing sleep, and sleep disruptions, slopes in the control group were flat, whereas slopes in the intervention group demonstrated improvements in ratings of sleep quality and refreshed sleep, and a decrease in the impact of sleep disruptions over the course of subsequent nights in the hospital. Figure 1 illustrates a plot of the adjusted average slopes for the refreshed sleep score across hospital days in intervention and control groups.

Table 3. Average Change in Sleep Outcomes Across Hospital Days for Patients in Intervention and Comparison Groups

Outcome Intervention, Slope (SE), n = 48 Control, Slope (SE), n = 64 P Value
Refreshed sleep rating 0.55 (0.18) 0.03 (0.13) 0.006
Sleep quality rating 0.52 (0.16) 0.02 (0.11) 0.012
Negative impact of sleep interruptions 1.65 (0.48) 0.05 (0.32) 0.006
Total sleep time, diary 11.2 (18.1) 6.3 (13.0) 0.44
Total sleep time, actigraphy 7.3 (25.5) 1.0 (15.3) 0.83
Sleep efficiency, diary 1.1 (2.3) 1.5 (1.6) 0.89
Sleep efficiency, actigraphy 0.9 (4.0) 0.7 (2.4) 0.74

NOTE: Mixed models were adjusted for age, BMI, baseline sleep quality (PSQI), baseline depression (CES‐D), and whether or not a sleep medication was taken the previous night. Each slope represents the average change in sleep outcome from night to night in each condition. P values represent the Wald test of the interaction term. Abbreviations: BMI, body mass index; CESD‐10, Center for Epidemiologic Studies‐Depression 10‐point scale; PSQI, Pittsburgh Sleep Quality Index; SE, standard error.
Figure 1
Plot of average changes in refreshed sleep over hospital days for intervention and control participants. *Slopes from linear mixed models are adjusted for age, BMI, depression score, prehospital sleep quality, and whether a sleep medication was taken the previous night during hospitalization.

DISCUSSION

Poor sleep is common among hospitalized adults, both at home prior to admission and especially in the hospital. This pilot study demonstrated the feasibility of rolling out a sleep‐promoting intervention on a hospital's general medicine unit. Although participants on the intervention unit reported improved sleep quality and feeling more refreshed, this was not corroborated by actigraphy data (such as total sleep time or sleep efficiency). Although care team engagement and implementation of unit‐wide interventions were high, patient use of individual components was imperfect. Of particular interest, however, the intervention group began to report improved sleep quality and fewer disruptions over subsequent nights in the hospital.

Our findings of the high prevalence of poor sleep among hospitalized patients are congruent with prior studies and support the great need to screen for and address poor sleep within the hospital setting.[24, 25, 26] Attempts to promote sleep among hospitalized patients may be effective. Prior literature on sleep‐promoting interventions demonstrated that relaxation techniques improved sleep quality by almost 38%,[27] and that ear plugs and eye masks showed some benefit in promoting sleep within the hospital.[28] Our study's multicomponent intervention, which attempted to minimize disruptions, led to improvement in sleep quality, more restorative sleep, and decreased reports of sleep disruptions, especially among patients with a longer length of stay. As suggested by Thomas et al.[29] and seen in our data, the improvement across subsequent nights suggests that patients may adapt to the new environment and that a sleep intervention may take time to work.

Hospitalized patients often fail to reclaim much‐needed restorative sleep at the time when they are most vulnerable. Patients cite routine care as the primary cause of sleep disruption and often recognize the ways in which the hospital environment interferes with their ability to sleep.[30, 31, 32] The sleep‐promoting interventions used in our study would be characterized by most as low effort[33] with potential for high yield, even though our patients appreciated only modest improvements in sleep outcomes.

Several limitations of this study should be considered. First, although we had hoped to collect substantial amounts of objective data, the average duration of actigraphy observation was less than 48 hours. This may have constrained the group‐by‐time interaction analysis with actigraphy data, as studies have shown increased accuracy of actigraphy measures with longer wear.[34] By contrast, the sleep diary surveys collected throughout hospitalization revealed significant improvements across consecutive daily measurements. Second, the proximity of the study units raised concern for study contamination, which could have attenuated the differences in outcome measures that might otherwise have been observed. Although the physicians work on both units, the nursing and support care teams are distinct and unit dependent. Finally, this was not a randomized trial. Patient assignment to the treatment arms was not systematic and occurred within the hospital's routine admitting process: allocation to either the intervention or the control group was based on bed availability at the time of admission. Although both groups were similar in most characteristics, more control participants than intervention participants reported taking sleep medications prior to admission. Fortunately, hypnotic use did not differ between groups during the admission, the period when sleep data were being captured.

Overall, this pilot study suggests that patients admitted to a general medical ward fail to obtain sufficient restorative sleep while in the hospital, and sleep disruption is frequent. The study demonstrates the opportunity for, and feasibility of, sleep‐promoting interventions in settings where facilitating sleep is considered a top priority and a vital component of healthcare delivery. When trying to improve patients' sleep in the hospital, it may take several consecutive nights to realize a return on investment.

Acknowledgements

The authors acknowledge the Department of Nursing, Johns Hopkins Bayview Medical Center, and care teams of the Zieve Medicine Units, and the Center for Child and Community Health Research Biostatistics, Epidemiology and Data Management (BEAD) Core group.

Disclosures: Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. Dr. Howell is the chief of the Division of Hospital Medicine at Johns Hopkins Bayview Medical Center and associate professor at Johns Hopkins School of Medicine. He served as the president of the Society of Hospital Medicine (SHM) in 2013 and currently serves as a board member. He is also a senior physician advisor for SHM. He is a coinvestigator grant recipient on an Agency for Healthcare Research and Quality grant on medication reconciliation funded through Baylor University. He was previously a coinvestigator grant recipient of Center for Medicare and Medicaid Innovations grant that ended in June 2015.

References
  1. Institute of Medicine (US) Committee on Sleep Medicine and Research. Sleep disorders and sleep deprivation: an unmet public health problem. Washington, DC: National Academies Press; 2006. Available at: http://www.ncbi.nlm.nih.gov/books/NBK19960. Accessed September 16, 2014.
  2. Schoenborn CA, Adams PE. Health behaviors of adults: United States, 2005–2007. Vital Health Stat 10. 2010;245:1–132.
  3. Mallon L, Broman JE, Hetta J. High incidence of diabetes in men with sleep complaints or short sleep duration: a 12‐year follow‐up study of a middle‐aged population. Diabetes Care. 2005;28:2762–2767.
  4. Donat M, Brown C, Williams N, et al. Linking sleep duration and obesity among black and white US adults. Clin Pract (Lond). 2013;10(5):661–667.
  5. Cappuccio FP, Stranges S, Kandala NB, et al. Gender‐specific associations of short sleep duration with prevalent and incident hypertension: the Whitehall II Study. Hypertension. 2007;50:693–700.
  6. Rod NH, Kumari M, Lange T, Kivimäki M, Shipley M, Ferrie J. The joint effect of sleep duration and disturbed sleep on cause‐specific mortality: results from the Whitehall II cohort study. PLoS One. 2014;9(4):e91965.
  7. Martin JL, Fiorentino L, Jouldjian S, Mitchell M, Josephson KR, Alessi CA. Poor self‐reported sleep quality predicts mortality within one year of inpatient post‐acute rehabilitation among older adults. Sleep. 2011;34(12):1715–1721.
  8. Kahn‐Greene ET, Killgore DB, Kamimori GH, Balkin TJ, Killgore WD. The effects of sleep deprivation on symptoms of psychopathology in healthy adults. Sleep Med. 2007;8(3):215–221.
  9. Irwin MR, Wang M, Campomayor CO, Collado‐Hidalgo A, Cole S. Sleep deprivation and activation of morning levels of cellular and genomic markers of inflammation. Arch Intern Med. 2006;166:1756–1762.
  10. Knutson KL, Spiegel K, Penev P, Van Cauter E. The metabolic consequences of sleep deprivation. Sleep Med Rev. 2007;11(3):163–178.
  11. Isaia G, Corsinovi L, Bo M, et al. Insomnia among hospitalized elderly patients: prevalence, clinical characteristics and risk factors. Arch Gerontol Geriatr. 2011;52:133–137.
  12. Rocha FL, Hara C, Rodrigues CV, et al. Is insomnia a marker for psychiatric disorders in general hospitals? Sleep Med. 2005;6:549–553.
  13. Adachi M, Staisiunas PG, Knutson KL, Beveridge C, Meltzer DO, Arora VM. Perceived control and sleep in hospitalized older adults: a sound hypothesis? J Hosp Med. 2013;8:184–190.
  14. Buxton OM, Ellenbogen JM, Wang W, et al. Sleep disruption due to hospital noises: a prospective evaluation. Ann Intern Med. 2012;157:170–179.
  15. Redeker NS. Sleep in acute care settings: an integrative review. J Nurs Scholarsh. 2000;32(1):31–38.
  16. Buysse D. Physical health as it relates to insomnia. Talk presented at: Center for Behavior and Health, Lecture Series in Johns Hopkins Bayview Medical Center; July 17, 2012; Baltimore, MD.
  17. Buysse DJ, Reynolds CF, Monk TH, Berman SR, Kupfer DJ. The Pittsburgh Sleep Quality Index: a new instrument for psychiatric practice and research. Psychiatry Res. 1989;28:193–213.
  18. Smith MT, Wegener ST. Measures of sleep: The Insomnia Severity Index, Medical Outcomes Study (MOS) Sleep Scale, Pittsburgh Sleep Diary (PSD), and Pittsburgh Sleep Quality Index (PSQI). Arthritis Rheumatol. 2003;49:S184–S196.
  19. Brown H, Prescott R. Applied Mixed Models in Medicine. 3rd ed. Somerset, NJ: Wiley; 2014:539.
  20. Blackwell E, Leon CF, Miller GE. Applying mixed regression models to the analysis of repeated‐measures data in psychosomatic medicine. Psychosom Med. 2006;68(6):870–878.
  21. Peugh JL, Enders CK. Using the SPSS mixed procedure to fit cross‐sectional and longitudinal multilevel models. Educ Psychol Meas. 2005;65(5):717–741.
  22. McCoach DB, Black AC. Introduction to estimation issues in multilevel modeling. New Dir Inst Res. 2012;2012(154):23–39.
  23. Enders CK, Tofighi D. Centering predictor variables in cross‐sectional multilevel models: a new look at an old issue. Psychol Methods. 2007;12(2):121–138.
  24. Manian F, Manian C. Sleep quality in adult hospitalized patients with infection: an observational study. Am J Med Sci. 2015;349(1):56–60.
  25. Shear TC, Balachandran JS, Mokhlesi B, et al. Risk of sleep apnea in hospitalized older patients. J Clin Sleep Med. 2014;10:1061–1066.
  26. Edinger JD, Lipper S, Wheeler B. Hospital ward policy and patients' sleep patterns: a multiple baseline study. Rehabil Psychol. 1989;34(1):43–50.
  27. Tamrat R, Huynh‐Le MP, Goyal M. Non‐pharmacologic interventions to improve the sleep of hospitalized patients: a systematic review. J Gen Intern Med. 2014;29:788–795.
  28. Le Guen M, Nicolas‐Robin A, Lebard C, Arnulf I, Langeron O. Earplugs and eye masks vs routine care prevent sleep impairment in post‐anaesthesia care unit: a randomized study. Br J Anaesth. 2014;112(1):89–95.
  29. Thomas KP, Salas RE, Gamaldo C, et al. Sleep rounds: a multidisciplinary approach to optimize sleep quality and satisfaction in hospitalized patients. J Hosp Med. 2012;7:508–512.
  30. Bihari S, McEvoy RD, Kim S, Woodman RJ, Bersten AD. Factors affecting sleep quality of patients in intensive care unit. J Clin Sleep Med. 2012;8(3):301–307.
  31. Flaherty JH. Insomnia among hospitalized older persons. Clin Geriatr Med. 2008;24(1):51–67.
  32. McDowell JA, Mion LC, Lydon TJ, Inouye SK. A nonpharmacological sleep protocol for hospitalized older patients. J Am Geriatr Soc. 1998;46(6):700–705.
  33. The Action Priority Matrix: making the most of your opportunities. TimeAnalyzer website. Available at: http://www.timeanalyzer.com/lib/priority.htm. Published 2006. Accessed July 10, 2015.
  34. Marino M, Li Y, Rueschman MN, et al. Measuring sleep: accuracy, sensitivity, and specificity of wrist actigraphy compared to polysomnography. Sleep. 2013;36(11):1747–1755.
Issue
Journal of Hospital Medicine - 11(7)
Page Number
467-472
Display Headline
Pilot study aiming to support sleep quality and duration during hospitalizations
Article Source
© 2016 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Evelyn Gathecha, MD, Johns Hopkins University School of Medicine, Johns Hopkins Bayview Medical Center, 5200 Eastern Avenue, MFL Building West Tower, 6th Floor CIMS Suite, Baltimore, MD 21224; Telephone: 410‐550‐5018; Fax: 410‐550‐2972; E‐mail: egathec1@jhmi.edu

Development and Validation of TAISCH

Display Headline
Development and validation of the tool to assess inpatient satisfaction with care from hospitalists

Patient satisfaction scores are being reported publicly and will affect hospital reimbursement rates under Hospital Value-Based Purchasing.[1] Patient satisfaction scores are currently obtained through metrics such as Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS)[2] and Press Ganey (PG)[3] surveys. Such surveys are mailed to a variable proportion of patients following their discharge from the hospital and ask patients about the quality of care they received during their admission. The domains assessed regarding patients' inpatient experiences range from room cleanliness to the amount of time the physician spent with them.

The Society of Hospital Medicine (SHM), the largest professional medical society representing hospitalists, encourages the use of patient satisfaction surveys to measure hospitalist providers' quality of patient care.[4] Ideally, accurate information would be delivered as feedback to individual providers in a timely manner in hopes of improving performance; however, the current methodology has shortcomings that limit its usefulness. First, several hospitalists and consultants may be involved in the care of 1 patient during the hospital stay, but the score can be tied to only a single physician. Current survey methods attribute all responses to that particular doctor, usually the attending of record, although patients may very well be thinking of other physicians when responding to questions. Second, only a few questions on the surveys ask about doctors' performance. The aforementioned surveys include 3 to 8 questions about doctors' care, which limits the ability to assess physician performance comprehensively. Finally, the surveys are mailed approximately 1 week after the patient's discharge, usually without a name or photograph of the physician to facilitate patient/caregiver recall. This time lag and lack of information to prompt patient recall likely lead to imprecision in assessment. In addition, the response rates to these surveys are typically low, around 25% (personal oral communication with our division's service excellence stakeholder Dr. L.P. in September 2013). These deficiencies limit the usefulness of such data in coaching individual providers about their performance because the data cannot be delivered in a timely fashion, and the reliability of the attribution is suspect.

With these considerations in mind, we developed and validated a new survey metric, the Tool to Assess Inpatient Satisfaction with Care from Hospitalists (TAISCH). We hypothesized that the results would be different from those collected using conventional methodologies.

PATIENTS AND METHODS

Study Design and Subjects

Our cross‐sectional study surveyed inpatients under the care of hospitalist physicians working without the support of trainees or allied health professionals (such as nurse practitioners or physician assistants). The subjects were hospitalized at a 560‐bed academic medical center on a general medical floor between September 2012 and December 2012. All participating hospitalist physicians were members of a division of hospital medicine.

TAISCH Development

Several steps were taken to establish content validity evidence.[5] We developed TAISCH by building upon the theoretical underpinnings of the quality of care measures that are endorsed by the SHM Membership Committee Guidelines for Hospitalists Patient Satisfaction.[4] This directive recommends that patient satisfaction with hospitalist care should be assessed across 6 domains: physician availability, physician concern for patients, physician communication skills, physician courteousness, physician clinical skills, and physician involvement of patients' families. Other existing validated measures tied to the quality of patient care were reviewed, and items related to the physician's care were considered for inclusion to further substantiate content validity.[6, 7, 8, 9, 10, 11, 12] Input from colleagues with expertise in clinical excellence and service excellence was also solicited. This included the director of Hopkins' Miller Coulson Academy of Clinical Excellence and the grant review committee members of the Johns Hopkins Osler Center for Clinical Excellence (who funded this study).[13, 14]

The preliminary instrument contained 17 items, including 2 conditional questions, and was first pilot tested on 5 hospitalized patients. We assessed the time it took to administer the surveys as well as patients' comments and questions about each survey item. This resulted in minor wording changes for clarification and changes in the order of the questions. We then pursued a second phase of piloting using the revised survey, which was administered to >20 patients. There were no further adjustments as patients reported that TAISCH was clear and concise.

From interviews with patients after pilot testing, it became clear that respondents were carefully reflecting on the quality of care and performance of their treating physician, thereby generating response process validity evidence.[5]

Data Collection

To ensure that patients had perspective upon which to base their assessment, they were only asked to appraise physicians after being cared for by the same hospitalist provider for at least 2 consecutive days. Patients who were on isolation, those who were non‐English speaking, and those with impaired decision‐making capacity (such as mental status change or dementia) were excluded. Patients were enrolled only if they could correctly name their doctor or at least identify a photograph of their hospitalist provider on a page that included pictures of all division members. Those patients who were able to name the provider or correctly select the provider from the page of photographs were considered to have correctly identified their provider. In order to ensure the confidentiality of the patients and their responses, all data collections were performed by a trained research assistant who had no patient‐care responsibilities. The survey was confidential, did not include any patient identifiers, and patients were assured that providers would never see their individual responses. The patients were given options to complete TAISCH either by verbally responding to the research assistant's questions, filling out the paper survey, or completing the survey online using an iPad at the bedside. TAISCH specifically asked the patients to rate their hospitalist provider's performance along several domains: communication skills, clinical skills, availability, empathy, courteousness, and discharge planning; 5‐point Likert scales were used exclusively.

In addition to the TAISCH questions, we asked patients (1) an overall satisfaction question, "I would recommend Dr. X to my loved ones should he or she need hospitalization in the future" (response options: strongly disagree, disagree, neutral, agree, strongly agree), (2) their pain level using the Wong‐Baker pain scale,[15] and (3) the Jefferson Scale of Patient's Perceptions of Physician Empathy (JSPPPE).[16, 17] Associations between TAISCH and these variables (as well as PG data) would be examined to ascertain "relations to other variables" validity evidence.[5] Specifically, we sought to ascertain discriminant and convergent validity, whereby TAISCH is associated positively with constructs where we expect positive associations (convergent) and negatively with those where we expect negative associations (discriminant).[18] The Wong‐Baker pain scale is a pain‐assessment tool recommended by the Joint Commission on Accreditation of Healthcare Organizations and is widely used in hospitals and various healthcare settings.[19] The scale has a range from 0 to 10 (0 for no pain and 10 indicating the worst pain). The hypothesis was that patients' pain levels would adversely affect their perception of the physician's performance (discriminant validity). The JSPPPE is a 5‐item validated scale developed to measure patients' perceptions of their physicians' empathic engagement. It has significant correlations with the American Board of Internal Medicine's patient rating surveys, and it is used in standardized patient examinations for medical students.[20] The hypothesis was that patient perceptions about the quality of physician care would correlate positively with their assessment of the physician's empathy (convergent validity).

Although all of the hospitalist providers in the division consented to participate in this study, only hospitalist providers for whom at least 4 patient surveys were collected were included in the analysis. The study was approved by our institutional review board.

Data Analysis

All data were analyzed using Stata 11 (StataCorp, College Station, TX). Data were analyzed to determine the potential for a single comprehensive assessment of physician performance with confirmatory factor analysis (CFA) using maximum likelihood extraction. Additional factor analyses examined the potential for a multiple‐factor solution using exploratory factor analysis (EFA) with principal component factor analysis and varimax rotation. Examination of scree plots, factor loadings for individual items greater than 0.40, eigenvalues greater than 1.0, and the substantive meaning of the factors were all taken into consideration when determining the number of factors to retain from factor analytic models.[21] Cronbach's α coefficients were calculated for each factor to assess reliability. These data provided internal structure validity evidence (demonstrated by acceptable reliability and factor structure) for TAISCH.[5]
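As a brief illustration of the reliability statistic used here, Cronbach's α can be computed directly from an item-response matrix, and the eigenvalue > 1.0 retention rule can be checked on the item correlation matrix. The data below are simulated under an assumed single-factor structure; the sample size, item count, and noise level are assumptions, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of Likert-type responses."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Simulated responses: one shared satisfaction factor plus item-level noise.
rng = np.random.default_rng(2)
trait = rng.normal(0, 1.0, size=(200, 1))
responses = trait + rng.normal(0, 1.5, size=(200, 15))  # 200 patients x 15 items

print(round(cronbach_alpha(responses), 2))

# Eigenvalue > 1.0 rule applied to the item correlation matrix.
eigvals = np.linalg.eigvalsh(np.corrcoef(responses, rowvar=False))
print((eigvals > 1.0).sum(), "factor(s) retained under the eigenvalue rule")
```

With this noise level the simulated α lands in the high 0.8s, comparable to the 0.88 reported for the 15-item TAISCH.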

After arriving at the final TAISCH scale, composite TAISCH scores were computed. Associations between composite TAISCH scores and the Wong‐Baker pain scale, the JSPPPE, and the overall satisfaction question were assessed using linear regression with the svy command in Stata to account for the nested design of having each patient report on a single hospitalist provider. Correlation between composite TAISCH scores and PG physician care scores (comprising 5 questions: time physician spent with you, physician concern with questions/worries, physician kept you informed, friendliness/courtesy of physician, and skill of physician) was assessed at the provider level when both data were available.

RESULTS

A total of 330 patients were considered to be eligible through medical record screening. Of those patients, 73 (22%) were already discharged by the time the research assistant attempted to enroll them after 2 days of care by a single physician. Of 257 inpatients approached, 30 patients (12%) refused to participate. Among the 227 consented patients, 24 (9%) were excluded as they were unable to correctly identify their hospitalist provider. A total of 203 patients were enrolled, and each patient rated a single hospitalist; a total of 29 unique hospitalists were assessed by these patients. The patients' mean age was 60 years, 114 (56%) were female, and 61 (30%) were of nonwhite race (Table 1). The hospitalist physicians' demographic information is also shown in Table 1. Two hospitalists with fewer than 4 surveys collected were excluded from the analysis. Thus, final analysis included 200 unique patients assessing 1 of the 27 hospitalists (mean=7.4 surveys per hospitalist).

Table 1. Characteristics of the 203 Patients and 29 Hospitalist Physicians Studied

Characteristic Value
Patients, N = 203
  Age, y, mean (SD): 60.0 (17.2)
  Female, n (%): 114 (56.1)
  Nonwhite race, n (%): 61 (30.5)
  Observation stay, n (%): 45 (22.1)
  How are you feeling today? n (%)
    Very poor: 11 (5.5)
    Poor: 14 (7.0)
    Fair: 67 (33.5)
    Good: 71 (35.5)
    Very good: 33 (16.5)
    Excellent: 4 (2.0)
Hospitalists, N = 29
  Age, n (%)
    26–30 years: 7 (24.1)
    31–35 years: 8 (27.6)
    36–40 years: 12 (41.4)
    41–45 years: 2 (6.9)
  Female, n (%): 11 (37.9)
  International medical graduate, n (%): 18 (62.1)
  Years in current practice, n (%)
    <1: 9 (31.0)
    1–2: 7 (24.1)
    3–4: 6 (20.7)
    5–6: 5 (17.2)
    7 or more: 2 (6.9)
  Race, n (%)
    Caucasian: 4 (13.8)
    Asian: 19 (65.5)
    African/African American: 5 (17.2)
    Other: 1 (3.4)
  Academic rank, n (%)
    Assistant professor: 9 (31.0)
    Clinical instructor: 10 (34.5)
    Clinical associate/nonfaculty: 10 (34.5)
  Percentage of clinical effort, n (%)
    >70%: 6 (20.7)
    50%–70%: 19 (65.5)
    <50%: 4 (13.8)

NOTE: Abbreviations: SD, standard deviation.

Validation of TAISCH

On the 17‐item TAISCH administered, the 2 conditional questions ("When I asked to see Dr. X, s/he came within a reasonable amount of time." and "If Dr. X interacted with your family, how well did s/he deal with them?") were applicable to fewer than 40% of patients. As such, they were not included in the analysis.

Internal Structure Validity Evidence

Results from factor analyses are shown in Table 2. The CFA modeling of a single‐factor solution with 15 items explained 42% of the total variance. The 27 hospitalists' average 15‐item TAISCH scores ranged from 3.25 to 4.28 (mean [standard deviation] = 3.82 [0.24]; possible score range: 1–5). Reliability of the 15‐item TAISCH was appropriate (Cronbach's α = 0.88).

Table 2. Factor Loadings for 15-Item TAISCH Measure Based on Confirmatory Factor Analysis
NOTE: Abbreviations: TAISCH, Tool to Assess Inpatient Satisfaction with Care from Hospitalists. *Response category: below average, average, above average, top 10% of all doctors, the very best of any doctor I have come across. The remaining items used one of the following response categories: none, a little, some, a lot, tremendously; strongly disagree, disagree, neutral, agree, strongly agree; poor, fair, good, very good, excellent; never, rarely, sometimes, most of the time, every single time.

TAISCH item (Cronbach's α=0.88) | Factor loading
Compared to all other physicians that you know, how do you rate Dr. X's compassion, empathy, and concern for you?* | 0.91
Compared to all other physicians that you know, how do you rate Dr. X's ability to communicate with you?* | 0.88
Compared to all other physicians that you know, how do you rate Dr. X's skill in diagnosing and treating your medical conditions?* | 0.88
Compared to all other physicians that you know, how do you rate Dr. X's fund of knowledge?* | 0.80
How much confidence do you have in Dr. X's plan for your care? | 0.71
Dr. X kept me informed of the plans for my care. | 0.69
Effectively preparing patients for discharge is an important part of what doctors in the hospital do. How well has Dr. X done in getting you ready to be discharged from the hospital? | 0.67
Dr. X let me talk without interrupting. | 0.60
Dr. X encouraged me to ask questions. | 0.59
Dr. X checks to be sure I understood everything. | 0.55
I sensed Dr. X was in a rush when s/he was with me. (reverse coded) | 0.55
Dr. X showed interest in my views and opinions about my health. | 0.54
Dr. X discusses options with me and involves me in decision making. | 0.47
Dr. X asked permission to enter the room and waited for an answer. | 0.25
Dr. X sat down when s/he visited my bedside. | 0.14

As shown in Table 2, 2 variables had factor loadings below the minimum threshold of 0.40 in the CFA for the 15-item TAISCH when modeling a single factor solution. Both items were related to physician etiquette: "Dr. X asked permission to enter the room and waited for an answer" and "Dr. X sat down when s/he visited my bedside."

When the CFA was executed again as a single factor solution omitting the 2 items with lower factor loadings, the resulting 13-item solution explained 47% of the total variance, and Cronbach's α was 0.92.
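The item-screening step described above (dropping items whose single-factor loadings fall below 0.40, leaving 13 of 15 items) can be sketched with the Table 2 loadings. This is an illustrative recomputation with abbreviated item labels, not the study's actual Stata code:

```python
# Single-factor CFA loadings from Table 2 (item labels abbreviated)
loadings = {
    "compassion/empathy/concern": 0.91,
    "ability to communicate": 0.88,
    "diagnostic/treatment skill": 0.88,
    "fund of knowledge": 0.80,
    "confidence in care plan": 0.71,
    "kept me informed": 0.69,
    "discharge preparation": 0.67,
    "let me talk without interrupting": 0.60,
    "encouraged questions": 0.59,
    "checked understanding": 0.55,
    "not in a rush (reverse coded)": 0.55,
    "interest in my views": 0.54,
    "involved me in decisions": 0.47,
    "asked permission to enter": 0.25,
    "sat down at bedside": 0.14,
}

THRESHOLD = 0.40  # minimum acceptable loading

retained = [item for item, loading in loadings.items() if loading >= THRESHOLD]
dropped = [item for item, loading in loadings.items() if loading < THRESHOLD]

print(len(retained))  # 13 items survive the screen
print(dropped)        # the 2 etiquette-related items
```

Applying this screen reproduces the 13-item solution: only the two etiquette items ("asked permission to enter" and "sat down at bedside") fall below the 0.40 threshold.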

EFA models were also explored for potential alternate solutions. These analyses resulted in lower reliability (lower Cronbach's α), weak construct operationalization, and poor face validity (as judged by the research team).

Both the 13‐ and 15‐item single factor solutions were examined further to determine whether associations with criterion variables (pain, empathy) differed substantively. Given that results were similar across both solutions, subsequent analyses were completed with the 15‐item single factor solution, which included the etiquette‐related variables.

Relationship to Other Variables Validity Evidence

The association between the 15-item TAISCH and JSPPPE was significantly positive (β=12.2, P<0.001). Additionally, there was a positive and significant association between TAISCH and the overall satisfaction question, "I would recommend Dr. X to my loved ones should they need hospitalization in the future" (β=11.2, P<0.001). This overall satisfaction question was also positively associated with JSPPPE (β=13.2, P<0.001). There was a statistically significant negative association between TAISCH and the Wong-Baker pain scale (β=−2.42, P<0.05).

The PG data from the same period were available for 24 out of 27 hospitalists. The number of PG surveys collected per provider ranged from 5 to 30 (mean=14). At the provider level, there was not a statistically significant correlation between PG and the 15‐item TAISCH (P=0.51). Of note, PG was also not significantly correlated with the overall satisfaction question, JSPPPE, or the Wong‐Baker pain scale (all P>0.10).

DISCUSSION

Our new metric, TAISCH, was found to be a reliable and valid measurement tool to assess patient satisfaction with the hospitalist physician's care. Because we only surveyed patients who could correctly identify their hospitalist physicians after interacting for at least 2 consecutive days, the attribution of the data to the individual hospitalist is almost certainly correct. The high participation rate indicates that the patients were not hesitant about rating their hospitalist provider's quality of care, even when asked while they were still in the hospital.

The majority of the patients approached were able to correctly identify their hospitalist provider. This rate (91%) was much higher than the rate previously reported in the literature, where a picture card was used to improve provider recognition.[22] It is also likely that having 1 physician, rather than a team of physicians, taking care of patients made it easier for patients to recall the name and recognize the face of their inpatient provider.

The CFA of TAISCH showed good fit but suggests that 2 variables, both from Kahn's etiquette-based medicine (EtBM) checklist,[9] may not load in the same way as the other items. Tackett and colleagues reported that hospitalists who performed more EtBM behaviors scored higher on PG evaluations.[23] Such results, along with the comparable explanation of variance and reliability, convinced us to retain these 2 items in the final 15-item TAISCH despite the CFA findings. Although the literature supports the fact that physician etiquette is related to the perception of high-quality care, it is possible that these 2 questions were answered differently (and thereby failed to load the same way) because environmental limitations may prevent physicians from performing these behaviors consistently. We prefer the 15-item version of TAISCH, and future studies may provide additional information about its performance as compared to the 13-item adaptation.

The significantly negative association between the Wong-Baker pain scale and TAISCH stresses the importance of adequately addressing and treating the patient's pain. Hanna et al. showed that patients' perceptions of pain control were associated with their overall satisfaction scores as measured by HCAHPS.[24] The association seen in our study was not unexpected, because TAISCH is administered while patients are acutely ill in the hospital, when pain is likely more prevalent and severe than it is in the postdischarge setting (when the HCAHPS or PG surveys are administered). Interestingly, Hanna et al. discovered that the team's attention to controlling pain was more strongly correlated with overall satisfaction than was the actual pain control.[24] These data, now confirmed by our study, should serve to remind us that a hospitalist's concern and effort to relieve pain may augment patient satisfaction with the quality of care, even when eliminating the pain may be difficult or impossible.

TAISCH was found not to be correlated with PG scores. Several explanations for this deserve consideration. First, the postdischarge PG survey used at our institution does not list the names of the specific hospitalist providers for the patients to evaluate. Because patients encounter multiple physicians during their hospital stay (eg, emergency department physicians, hospitalist providers, consultants), it is possible that patients are not reflecting on the attending of record when responding to the PG mailed questionnaire. Second, the patients who responded to TAISCH and to PG were different groups; almost all patients completed TAISCH, as opposed to the small minority who decided to respond to the PG survey. Third, TAISCH measures the physicians' performance more comprehensively, with a larger number of variables. Last, it is possible that we were underpowered to detect a significant correlation, because only 24 providers had data from both TAISCH and PG. However, our results endorse using caution in interpreting PG scores for an individual hospitalist's performance, particularly for high-stakes consequences (including the provision of incentives to high performers and the insistence on remediation for low performers).

Several limitations of this study should be considered. First, only hospitalist providers from a single division were assessed. This may limit the generalizability of our findings. Second, although patients were assured about the confidentiality of their responses, they might have provided more favorable answers because they felt uncomfortable rating their physician poorly. One review article on the measurement of healthcare satisfaction indicated that impersonal (mailed) methods result in more criticism and lower satisfaction than assessments made in person using interviews. As the trade-off, mailed surveys yield lower response rates that may introduce other forms of bias.[25] Even on the HCAHPS survey report for the same period from our institution, 78% of patients gave top box ratings for our doctors' communication skills, which is at the state average.[26] Similarly, a study that used postdischarge telephone interviews to collect patients' satisfaction with hospitalists' care quality reported an average score of 4.20 out of 5.[27] These findings confirm that highly skewed ratings are common for these types of surveys, irrespective of how or when the data are collected.

Despite the aforementioned limitations, TAISCH use need not be limited to hospitalist physicians. It may also be used to assess allied health professionals' or trainees' performance, which cannot be assessed by HCAHPS or PG. Applying TAISCH in different hospital settings (eg, emergency departments or critical care units), assessing hospitalists' reactions to TAISCH, learning whether TAISCH leads to changes in hospitalists' behavior, and appraising whether performance can improve in response to coaching interventions for those performing poorly are all research questions that merit additional consideration.

CONCLUSION

TAISCH allows for obtaining patient satisfaction data that are highly attributable to specific hospitalist providers. The data collection method also permits high response rates so that input comes from almost all patients. The timeliness of the TAISCH assessments also makes it possible for real‐time service recovery, which is impossible with other commonly used metrics assessing patient satisfaction. Our next step will include testing the most effective way to provide feedback to providers and to coach these individuals so as to improve performance.

Acknowledgements

The authors would like to thank Po‐Han Chen at the BEAD Core for his statistical analysis support.

Disclosures: This study was supported by the Johns Hopkins Osler Center for Clinical Excellence. Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. The authors report no conflicts of interest.

References
  1. Blumenthal D, Jena AB. Hospital value-based purchasing. J Hosp Med. 2013;8:271-277.
  2. HCAHPS survey. Hospital Consumer Assessment of Healthcare Providers and Systems website. Available at: http://www.hcahpsonline.org/home.aspx. Accessed August 27, 2011.
  3. Press Ganey survey. Press Ganey website. Available at: http://www.pressganey.com/index.aspx. Accessed February 12, 2013.
  4. Society of Hospital Medicine. Membership Committee Guidelines for Hospitalists Patient Satisfaction Surveys. Available at: http://www.hospitalmedicine.org/AM/Template.cfm?Section=Practice_Resources119:166.e7e16.
  5. Makoul G, Krupat E, Chang CH. Measuring patient views of physician communication skills: development and testing of the Communication Assessment Tool. Patient Educ Couns. 2007;67:333-342.
  6. Jenkinson C, Coulter A, Bruster S. The Picker Patient Experience Questionnaire: development and validation using data from in-patient surveys in five countries. Int J Qual Health Care. 2002;14:353-358.
  7. The Patient Satisfaction Questionnaire from RAND Health. RAND Health website. Available at: http://www.rand.org/health/surveys_tools/psq.html. Accessed December 30, 2011.
  8. Kahn MW. Etiquette-based medicine. N Engl J Med. 2008;358:1988-1989.
  9. Christmas C, Kravet S, Durso C, Wright SM. Defining clinical excellence in academic medicine: a qualitative study of the master clinicians. Mayo Clin Proc. 2008;83:989-994.
  10. Wright SM, Christmas C, Burkhart K, Kravet S, Durso C. Creating an academy of clinical excellence at Johns Hopkins Bayview Medical Center: a 3-year experience. Acad Med. 2010;85:1833-1839.
  11. Bendapudi NM, Berry LL, Keith FA, Turner Parish J, Rayburn WL. Patients' perspectives on ideal physician behaviors. Mayo Clin Proc. 2006;81(3):338-344.
  12. The Miller-Coulson Academy of Clinical Excellence at Johns Hopkins. Available at: http://www.hopkinsmedicine.org/innovative/signature_programs/academy_of_clinical_excellence/. Accessed April 25, 2014.
  13. Osler Center for Clinical Excellence at Johns Hopkins. Available at: http://www.hopkinsmedicine.org/johns_hopkins_bayview/education_training/continuing_education/osler_center_for_clinical_excellence. Accessed April 25, 2014.
  14. Wong-Baker FACES Foundation. Available at: http://www.wongbakerfaces.org. Accessed July 8, 2013.
  15. Kane GC, Gotto JL, Mangione S, West S, Hojat M. Jefferson Scale of Patient's Perceptions of Physician Empathy: preliminary psychometric data. Croat Med J. 2007;48:81-86.
  16. Glaser KM, Markham FW, Adler HM, McManus PR, Hojat M. Relationships between scores on the Jefferson Scale of Physician Empathy, patient perceptions of physician empathy, and humanistic approaches to patient care: a validity study. Med Sci Monit. 2007;13(7):CR291-CR294.
  17. Campbell DT, Fiske DW. Convergent and discriminant validation by the multitrait-multimethod matrix. Psychol Bull. 1959;56(2):81-105.
  18. The Joint Commission. Facts about pain management. Available at: http://www.jointcommission.org/pain_management. Accessed April 25, 2014.
  19. Berg K, Majdan JF, Berg D, et al. Medical students' self-reported empathy and simulated patients' assessments of student empathy: an analysis by gender and ethnicity. Acad Med. 2011;86(8):984-988.
  20. Gorsuch RL. Factor Analysis. Hillsdale, NJ: Lawrence Erlbaum Associates; 1983.
  21. Arora VM, Schaninger C, D'Arcy M, et al. Improving inpatients' identification of their doctors: use of FACE cards. Jt Comm J Qual Patient Saf. 2009;35(12):613-619.
  22. Tackett S, Tad-y D, Rios R, et al. Appraising the practice of etiquette-based medicine in the inpatient setting. J Gen Intern Med. 2013;28(7):908-913.
  23. Hanna MN, Gonzalez-Fernandez M, Barrett AD, et al. Does patient perception of pain control affect patient satisfaction across surgical units in a tertiary teaching hospital? Am J Med Qual. 2012;27:411-416.
  24. Crow R, Gage H, Hampson S, et al. The measurement of satisfaction with health care: implications for practice from a systematic review of the literature. Health Technol Assess. 2002;6(32):1-244.
  25. Centers for Medicare 7(2):131-136.
Issue
Journal of Hospital Medicine - 9(9)
Page Number
553-558

Patient satisfaction scores are being reported publicly and will affect hospital reimbursement rates under Hospital Value Based Purchasing.[1] Patient satisfaction scores are currently obtained through metrics such as Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS)[2] and Press Ganey (PG)[3] surveys. Such surveys are mailed to a variable proportion of patients following their discharge from the hospital, and ask patients about the quality of care they received during their admission. Domains assessed regarding the patients' inpatient experiences range from room cleanliness to the amount of time the physician spent with them.

The Society of Hospital Medicine (SHM), the largest professional medical society representing hospitalists, encourages the use of patient satisfaction surveys to measure hospitalist providers' quality of patient care.[4] Ideally, accurate information would be delivered as feedback to individual providers in a timely manner in hopes of improving performance; however, the current methodology has shortcomings that limit its usefulness. First, several hospitalists and consultants may be involved in the care of 1 patient during the hospital stay, but the score can only be tied to a single physician. Current survey methods attribute all responses to that particular doctor, usually the attending of record, although patients may very well be thinking of other physicians when responding to questions. Second, only a few questions on the surveys ask about doctors' performance. The aforementioned surveys have 3 to 8 questions about doctors' care, which limits the ability to assess physician performance comprehensively. Finally, the surveys are mailed approximately 1 week after the patient's discharge, usually without a name or photograph of the physician to facilitate patient/caregiver recall. This time lag and lack of information to prompt patient recall likely lead to imprecision in assessment. In addition, the response rates to these surveys are typically low, around 25% (personal oral communication with our division's service excellence stakeholder Dr. L.P. in September 2013). These deficiencies limit the usefulness of such data in coaching individual providers about their performance because the data cannot be delivered in a timely fashion, and the reliability of the attribution is suspect.

With these considerations in mind, we developed and validated a new survey metric, the Tool to Assess Inpatient Satisfaction with Care from Hospitalists (TAISCH). We hypothesized that the results would be different from those collected using conventional methodologies.

PATIENTS AND METHODS

Study Design and Subjects

Our cross‐sectional study surveyed inpatients under the care of hospitalist physicians working without the support of trainees or allied health professionals (such as nurse practitioners or physician assistants). The subjects were hospitalized at a 560‐bed academic medical center on a general medical floor between September 2012 and December 2012. All participating hospitalist physicians were members of a division of hospital medicine.

TAISCH Development

Several steps were taken to establish content validity evidence.[5] We developed TAISCH by building upon the theoretical underpinnings of the quality of care measures that are endorsed by the SHM Membership Committee Guidelines for Hospitalists Patient Satisfaction.[4] This directive recommends that patient satisfaction with hospitalist care should be assessed across 6 domains: physician availability, physician concern for patients, physician communication skills, physician courteousness, physician clinical skills, and physician involvement of patients' families. Other existing validated measures tied to the quality of patient care were reviewed, and items related to the physician's care were considered for inclusion to further substantiate content validity.[6, 7, 8, 9, 10, 11, 12] Input from colleagues with expertise in clinical excellence and service excellence was also solicited. This included the director of Hopkins' Miller Coulson Academy of Clinical Excellence and the grant review committee members of the Johns Hopkins Osler Center for Clinical Excellence (who funded this study).[13, 14]

The preliminary instrument contained 17 items, including 2 conditional questions, and was first pilot tested on 5 hospitalized patients. We assessed the time it took to administer the surveys as well as patients' comments and questions about each survey item. This resulted in minor wording changes for clarification and changes in the order of the questions. We then pursued a second phase of piloting using the revised survey, which was administered to >20 patients. There were no further adjustments as patients reported that TAISCH was clear and concise.

From interviews with patients after pilot testing, it became clear that respondents were carefully reflecting on the quality of care and performance of their treating physician, thereby generating response process validity evidence.[5]

Data Collection

To ensure that patients had perspective upon which to base their assessment, they were only asked to appraise physicians after being cared for by the same hospitalist provider for at least 2 consecutive days. Patients who were on isolation, those who were non‐English speaking, and those with impaired decision‐making capacity (such as mental status change or dementia) were excluded. Patients were enrolled only if they could correctly name their doctor or at least identify a photograph of their hospitalist provider on a page that included pictures of all division members. Those patients who were able to name the provider or correctly select the provider from the page of photographs were considered to have correctly identified their provider. In order to ensure the confidentiality of the patients and their responses, all data collections were performed by a trained research assistant who had no patient‐care responsibilities. The survey was confidential, did not include any patient identifiers, and patients were assured that providers would never see their individual responses. The patients were given options to complete TAISCH either by verbally responding to the research assistant's questions, filling out the paper survey, or completing the survey online using an iPad at the bedside. TAISCH specifically asked the patients to rate their hospitalist provider's performance along several domains: communication skills, clinical skills, availability, empathy, courteousness, and discharge planning; 5‐point Likert scales were used exclusively.

In addition to the TAISCH questions, we asked patients (1) an overall satisfaction question, "I would recommend Dr. X to my loved ones should he or she need hospitalization in the future" (response options: strongly disagree, disagree, neutral, agree, strongly agree), (2) their pain level using the Wong-Baker pain scale,[15] and (3) the Jefferson Scale of Patient's Perceptions of Physician Empathy (JSPPPE).[16, 17] Associations between TAISCH and these variables (as well as PG data) would be examined to ascertain relations to other variables validity evidence.[5] Specifically, we sought to ascertain discriminant and convergent validity, where TAISCH is associated positively with constructs for which we expect positive associations (convergent) and negatively with those for which we expect negative associations (discriminant).[18] The Wong-Baker pain scale is a pain-assessment tool recommended by the Joint Commission on Accreditation of Healthcare Organizations and is widely used in hospitals and various healthcare settings.[19] The scale has a range from 0 to 10 (0 for no pain and 10 indicating the worst pain). The hypothesis was that the patients' pain levels would adversely affect their perception of the physician's performance (discriminant validity). JSPPPE is a 5-item validated scale developed to measure patients' perceptions of their physicians' empathic engagement. It has significant correlations with the American Board of Internal Medicine's patient rating surveys, and it is used in standardized patient examinations for medical students.[20] The hypothesis was that patient perception about the quality of physician care would correlate positively with their assessment of the physician's empathy (convergent validity).

Although all of the hospitalist providers in the division consented to participate in this study, only hospitalist providers for whom at least 4 patient surveys were collected were included in the analysis. The study was approved by our institutional review board.

Data Analysis

All data were analyzed using Stata 11 (StataCorp, College Station, TX). Data were analyzed to determine the potential for a single comprehensive assessment of physician performance with confirmatory factor analysis (CFA) using maximum likelihood extraction. Additional factor analyses examined the potential for a multiple factor solution using exploratory factor analysis (EFA) with principal component factor analysis and varimax rotation. Examination of scree plots, factor loadings for individual items greater than 0.40, eigenvalues greater than 1.0, and the substantive meaning of the factors were all taken into consideration when determining the number of factors to retain from the factor analytic models.[21] Cronbach's αs were calculated for each factor to assess reliability. These data provided internal structure validity evidence (demonstrated by acceptable reliability and factor structure) for TAISCH.[5]
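As a concrete illustration of the reliability calculation above, Cronbach's α can be computed directly from an item-response matrix as α = k/(k-1) x (1 - Σ item variances / variance of summed scores). The sketch below uses hypothetical 5-point Likert data, not the study's dataset (the actual analyses were run in Stata 11):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # per-item sample variance
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses: 6 patients x 3 items
data = np.array([
    [4, 4, 5],
    [3, 3, 3],
    [5, 4, 4],
    [2, 2, 3],
    [4, 5, 4],
    [3, 3, 2],
], dtype=float)

print(round(cronbach_alpha(data), 2))  # 0.87 for this toy matrix
```

Items that rise and fall together across respondents inflate the variance of the summed score relative to the sum of item variances, which is what pushes α toward 1 for internally consistent scales such as the 15-item TAISCH (α=0.88).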

After arriving at the final TAISCH scale, composite TAISCH scores were computed. Associations between composite TAISCH scores and the Wong-Baker pain scale, the JSPPPE, and the overall satisfaction question were assessed using linear regression with the svy command in Stata to account for the nested design of having each patient report on a single hospitalist provider. Correlation between composite TAISCH scores and PG physician care scores (comprising 5 questions: time physician spent with you, physician concern with questions/worries, physician kept you informed, friendliness/courtesy of physician, and skill of physician) was assessed at the provider level when both data were available.
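The composite scoring and provider-level aggregation described above can be sketched as follows. The patient responses and provider assignments here are hypothetical, and the nested-design regression itself (Stata's svy command) is not reproduced:

```python
import numpy as np

# Hypothetical: each row is one patient's 15 TAISCH item ratings (1-5 Likert)
responses = np.array([
    [4] * 15,            # patient 1
    [5] * 10 + [4] * 5,  # patient 2
    [3] * 15,            # patient 3
    [4] * 7 + [5] * 8,   # patient 4
], dtype=float)
hospitalist = np.array([0, 0, 1, 1])  # which provider each patient rated

# Per-patient composite TAISCH score (possible range: 1-5)
composite = responses.mean(axis=1)

# Provider-level mean composite, as used for the PG correlation analysis
provider_means = np.array(
    [composite[hospitalist == h].mean() for h in np.unique(hospitalist)]
)

print(composite.round(2))
print(provider_means.round(2))
```

Averaging within patient first and then within provider mirrors how each of the 27 hospitalists' scores (range 3.25 to 4.28 in the Results) was derived from roughly 7 patient surveys apiece.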

RESULTS


TAISCH was found not to be correlated with PG scores. Several explanations for this deserve consideration. First, the postdischarge PG survey that is used for our institution does not list the name of the specific hospitalist providers for the patients to evaluate. Because patients encounter multiple physicians during their hospital stay (eg, emergency department physicians, hospitalist providers, consultants), it is possible that patients are not reflecting on the named doctor when assessing the the attending of record on the PG mailed questionnaire. Second, the representation of patients who responded to TAISCH and PG were different; almost all patients completed TAISCH as opposed to a small minority who decide to respond to the PG survey. Third, TAISCH measures the physicians' performance more comprehensively with a larger number of variables. Last, it is possible that we were underpowered to detect significant correlation, because there were only 24 providers who had data from both TAISCH and PG. However, our results endorse using caution in interpreting PG scores for individual hospitalist's performance, particularly for high‐stakes consequences (including the provision of incentives to high performer and the insistence on remediation for low performers).

Several limitations of this study should be considered. First, only hospitalist providers from a single division were assessed. This may limit the generalizability of our findings. Second, although patients were assured about confidentiality of their responses, they might have provided more favorable answers, because they may have felt uncomfortable rating their physician poorly. One review article of the measurement of healthcare satisfaction indicated that impersonal (mailed) methods result in more criticism and lower satisfaction than assessments made in person using interviews. As the trade‐off, the mailed surveys yield lower response rates that may introduce other forms of bias.[25] Even on the HCHAPS survey report for the same period from our institution, 78% of patients gave top box ratings for our doctors' communication skills, which is at the state average.[26] Similarly, a study that used postdischarge telephone interviews to collect patients' satisfaction with hospitalists' care quality reported an average score of 4.20 out of 5.[27] These findings confirm that highly skewed ratings are common for these types of surveys, irrespective of how or when the data are collected.

Despite the aforementioned limitations, TAISCH use need not be limited to hospitalist physicians. It may also be used to assess allied health professionals or trainees performance, which cannot be assessed by HCHAPS or PG. Applying TAISCH in different hospital settings (eg, emergency department or critical care units), assessing hospitalists' reactions to TAISCH, learning whether TAISCH leads to hospitalists' behavior changes or appraising whether performance can improve in response to coaching interventions for those performing poorly are all research questions that merit additional consideration.

CONCLUSION

TAISCH allows for obtaining patient satisfaction data that are highly attributable to specific hospitalist providers. The data collection method also permits high response rates so that input comes from almost all patients. The timeliness of the TAISCH assessments also makes it possible for real‐time service recovery, which is impossible with other commonly used metrics assessing patient satisfaction. Our next step will include testing the most effective way to provide feedback to providers and to coach these individuals so as to improve performance.

Acknowledgements

The authors would like to thank Po‐Han Chen at the BEAD Core for his statistical analysis support.

Disclosures: This study was supported by the Johns Hopkins Osler Center for Clinical Excellence. Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. The authors report no conflicts of interest.

Patient satisfaction scores are being reported publicly and will affect hospital reimbursement rates under Hospital Value-Based Purchasing.[1] Patient satisfaction scores are currently obtained through metrics such as Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS)[2] and Press Ganey (PG)[3] surveys. Such surveys are mailed to a variable proportion of patients following their discharge from the hospital and ask patients about the quality of care they received during their admission. Domains assessed regarding the patients' inpatient experiences range from room cleanliness to the amount of time the physician spent with them.

The Society of Hospital Medicine (SHM), the largest professional medical society representing hospitalists, encourages the use of patient satisfaction surveys to measure hospitalist providers' quality of patient care.[4] Ideally, accurate information would be delivered as feedback to individual providers in a timely manner in hopes of improving performance; however, the current methodology has shortcomings that limit its usefulness. First, several hospitalists and consultants may be involved in the care of 1 patient during the hospital stay, but the score can only be tied to a single physician. Current survey methods attribute all responses to that particular doctor, usually the attending of record, although patients may very well be thinking of other physicians when responding to questions. Second, only a few questions on the surveys ask about doctors' performance. The aforementioned surveys include 3 to 8 questions about doctors' care, which limits the ability to assess physician performance comprehensively. Finally, the surveys are mailed approximately 1 week after the patient's discharge, usually without a name or photograph of the physician to facilitate patient/caregiver recall. This time lag and lack of information to prompt patient recall likely lead to imprecision in assessment. In addition, the response rates to these surveys are typically low, around 25% (personal oral communication with our division's service excellence stakeholder Dr. L.P. in September 2013). These deficiencies limit the usefulness of such data in coaching individual providers about their performance because the data cannot be delivered in a timely fashion and the reliability of the attribution is suspect.

With these considerations in mind, we developed and validated a new survey metric, the Tool to Assess Inpatient Satisfaction with Care from Hospitalists (TAISCH). We hypothesized that the results would be different from those collected using conventional methodologies.

PATIENTS AND METHODS

Study Design and Subjects

Our cross‐sectional study surveyed inpatients under the care of hospitalist physicians working without the support of trainees or allied health professionals (such as nurse practitioners or physician assistants). The subjects were hospitalized at a 560‐bed academic medical center on a general medical floor between September 2012 and December 2012. All participating hospitalist physicians were members of a division of hospital medicine.

TAISCH Development

Several steps were taken to establish content validity evidence.[5] We developed TAISCH by building upon the theoretical underpinnings of the quality of care measures that are endorsed by the SHM Membership Committee Guidelines for Hospitalists Patient Satisfaction.[4] This directive recommends that patient satisfaction with hospitalist care should be assessed across 6 domains: physician availability, physician concern for patients, physician communication skills, physician courteousness, physician clinical skills, and physician involvement of patients' families. Other existing validated measures tied to the quality of patient care were reviewed, and items related to the physician's care were considered for inclusion to further substantiate content validity.[6, 7, 8, 9, 10, 11, 12] Input from colleagues with expertise in clinical excellence and service excellence was also solicited. This included the director of Hopkins' Miller Coulson Academy of Clinical Excellence and the grant review committee members of the Johns Hopkins Osler Center for Clinical Excellence (who funded this study).[13, 14]

The preliminary instrument contained 17 items, including 2 conditional questions, and was first pilot tested on 5 hospitalized patients. We assessed the time it took to administer the surveys as well as patients' comments and questions about each survey item. This resulted in minor wording changes for clarification and changes in the order of the questions. We then pursued a second phase of piloting using the revised survey, which was administered to >20 patients. There were no further adjustments as patients reported that TAISCH was clear and concise.

From interviews with patients after pilot testing, it became clear that respondents were carefully reflecting on the quality of care and performance of their treating physician, thereby generating response process validity evidence.[5]

Data Collection

To ensure that patients had perspective upon which to base their assessment, they were only asked to appraise physicians after being cared for by the same hospitalist provider for at least 2 consecutive days. Patients who were on isolation, those who were non‐English speaking, and those with impaired decision‐making capacity (such as mental status change or dementia) were excluded. Patients were enrolled only if they could correctly name their doctor or at least identify a photograph of their hospitalist provider on a page that included pictures of all division members. Those patients who were able to name the provider or correctly select the provider from the page of photographs were considered to have correctly identified their provider. In order to ensure the confidentiality of the patients and their responses, all data collections were performed by a trained research assistant who had no patient‐care responsibilities. The survey was confidential, did not include any patient identifiers, and patients were assured that providers would never see their individual responses. The patients were given options to complete TAISCH either by verbally responding to the research assistant's questions, filling out the paper survey, or completing the survey online using an iPad at the bedside. TAISCH specifically asked the patients to rate their hospitalist provider's performance along several domains: communication skills, clinical skills, availability, empathy, courteousness, and discharge planning; 5‐point Likert scales were used exclusively.

In addition to the TAISCH questions, we asked patients (1) an overall satisfaction question, "I would recommend Dr. X to my loved ones should he or she need hospitalization in the future" (response options: strongly disagree, disagree, neutral, agree, strongly agree), (2) their pain level using the Wong‐Baker pain scale,[15] and (3) the Jefferson Scale of Patient's Perceptions of Physician Empathy (JSPPPE).[16, 17] Associations between TAISCH and these variables (as well as PG data) would be examined to ascertain relations-to-other-variables validity evidence.[5] Specifically, we sought to ascertain discriminant and convergent validity, whereby TAISCH should be associated positively with constructs where we expect positive associations (convergent) and negatively with those where we expect negative associations (discriminant).[18] The Wong‐Baker pain scale is a pain‐assessment tool recommended by the Joint Commission on Accreditation of Healthcare Organizations and is widely used in hospitals and various healthcare settings.[19] The scale ranges from 0 to 10 (0 for no pain and 10 indicating the worst pain). The hypothesis was that the patients' pain levels would adversely affect their perception of the physician's performance (discriminant validity). JSPPPE is a 5‐item validated scale developed to measure patients' perceptions of their physicians' empathic engagement. It has significant correlations with the American Board of Internal Medicine's patient rating surveys, and it is used in standardized patient examinations for medical students.[20] The hypothesis was that patient perception about the quality of physician care would correlate positively with their assessment of the physician's empathy (convergent validity).

Although all of the hospitalist providers in the division consented to participate in this study, only hospitalist providers for whom at least 4 patient surveys were collected were included in the analysis. The study was approved by our institutional review board.

Data Analysis

All data were analyzed using Stata 11 (StataCorp, College Station, TX). Data were analyzed to determine the potential for a single comprehensive assessment of physician performance with confirmatory factor analysis (CFA) using maximum likelihood extraction. Additional factor analyses examined the potential for a multiple factor solution using exploratory factor analysis (EFA) with principal component factor analysis and varimax rotation. Examination of scree plots, factor loadings for individual items greater than 0.40, eigenvalues greater than 1.0, and the substantive meaning of the factors were all taken into consideration when determining the number of factors to retain from factor analytic models.[21] Cronbach's αs were calculated for each factor to assess reliability. These data provided internal structure validity evidence (demonstrated by acceptable reliability and factor structure) for TAISCH.[5]
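As a rough illustration of these reliability and factor-structure checks, the sketch below computes Cronbach's α and first-factor loadings on simulated Likert responses. The study itself used Stata's maximum-likelihood CFA; everything here, including the data, is illustrative only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Simulated 5-point Likert responses: one latent factor plus noise
latent = rng.normal(size=(200, 1))
raw = latent + 0.8 * rng.normal(size=(200, 15))
items = np.clip(np.round(3 + raw), 1, 5)

alpha = cronbach_alpha(items)

# Factor structure: eigendecomposition of the item correlation matrix;
# retain factors with eigenvalue > 1.0 and inspect loadings > 0.40
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)          # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
loadings = eigvecs[:, 0] * np.sqrt(eigvals[0])   # first-factor loadings
print(f"alpha = {alpha:.2f}, first eigenvalue = {eigvals[0]:.2f}")
```

With one strong latent factor, α is high and only the first eigenvalue exceeds 1.0, mirroring the single-factor pattern the study reports.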

After arriving at the final TAISCH scale, composite TAISCH scores were computed. Associations between composite TAISCH scores with the Wong‐Baker pain scale, the JSPPPE, and the overall satisfaction question were assessed using linear regression with the svy command in Stata to account for the nested design of having each patient report on a single hospitalist provider. Correlation between composite TAISCH score and PG physician care scores (comprised of 5 questions: time physician spent with you, physician concern with questions/worries, physician kept you informed, friendliness/courtesy of physician, and skill of physician) were assessed at the provider level when both data were available.
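The composite-score and association step can be sketched as follows, using plain least squares on simulated data (the study used Stata's svy estimator to account for the nested design, which this simplified sketch ignores; all values are made up).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Simulated patient-level data (hypothetical values, not study data)
taisch_items = rng.integers(1, 6, size=(n, 15))         # 15 Likert items, 1-5
composite = taisch_items.mean(axis=1)                    # composite TAISCH score
jspppe = composite * 2 + rng.normal(scale=1.0, size=n)   # convergent criterion

# Simple OLS of the criterion on composite TAISCH: y = b0 + b1 * x
X = np.column_stack([np.ones(n), composite])
beta, *_ = np.linalg.lstsq(X, jspppe, rcond=None)
print(f"slope = {beta[1]:.2f}")
```

A positive slope here corresponds to the convergent-validity pattern (TAISCH rising with perceived empathy) that the Results section reports.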

RESULTS

A total of 330 patients were considered to be eligible through medical record screening. Of those patients, 73 (22%) were already discharged by the time the research assistant attempted to enroll them after 2 days of care by a single physician. Of 257 inpatients approached, 30 patients (12%) refused to participate. Among the 227 consented patients, 24 (9%) were excluded as they were unable to correctly identify their hospitalist provider. A total of 203 patients were enrolled, and each patient rated a single hospitalist; a total of 29 unique hospitalists were assessed by these patients. The patients' mean age was 60 years, 114 (56%) were female, and 61 (30%) were of nonwhite race (Table 1). The hospitalist physicians' demographic information is also shown in Table 1. Two hospitalists with fewer than 4 surveys collected were excluded from the analysis. Thus, final analysis included 200 unique patients assessing 1 of the 27 hospitalists (mean=7.4 surveys per hospitalist).
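The enrollment flow above can be tallied as a quick arithmetic check (counts taken from the text; the 3 patients removed with the 2 low-volume hospitalists is derived from 203 minus the final 200):

```python
eligible = 330
discharged_early = 73                       # gone before enrollment was attempted
approached = eligible - discharged_early    # 257 inpatients approached
refused = 30
consented = approached - refused            # 227 consented
could_not_identify = 24                     # unable to identify their hospitalist
enrolled = consented - could_not_identify   # 203 enrolled
final_patients = enrolled - 3               # 2 hospitalists with <4 surveys dropped
mean_surveys = final_patients / 27          # 27 hospitalists analyzed
print(approached, consented, enrolled, round(mean_surveys, 1))
```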

Table 1. Characteristics of the 203 Patients and 29 Hospitalist Physicians Studied

NOTE: Abbreviations: SD, standard deviation.

Patients, N = 203
  Age, y, mean (SD): 60.0 (17.2)
  Female, n (%): 114 (56.1)
  Nonwhite race, n (%): 61 (30.5)
  Observation stay, n (%): 45 (22.1)
  "How are you feeling today?" n (%)
    Very poor: 11 (5.5)
    Poor: 14 (7.0)
    Fair: 67 (33.5)
    Good: 71 (35.5)
    Very good: 33 (16.5)
    Excellent: 4 (2.0)
Hospitalists, N = 29
  Age, n (%)
    26–30 years: 7 (24.1)
    31–35 years: 8 (27.6)
    36–40 years: 12 (41.4)
    41–45 years: 2 (6.9)
  Female, n (%): 11 (37.9)
  International medical graduate, n (%): 18 (62.1)
  Years in current practice, n (%)
    <1: 9 (31.0)
    1–2: 7 (24.1)
    3–4: 6 (20.7)
    5–6: 5 (17.2)
    7 or more: 2 (6.9)
  Race, n (%)
    Caucasian: 4 (13.8)
    Asian: 19 (65.5)
    African/African American: 5 (17.2)
    Other: 1 (3.4)
  Academic rank, n (%)
    Assistant professor: 9 (31.0)
    Clinical instructor: 10 (34.5)
    Clinical associate/nonfaculty: 10 (34.5)
  Percentage of clinical effort, n (%)
    >70%: 6 (20.7)
    50%–70%: 19 (65.5)
    <50%: 4 (13.8)

Validation of TAISCH

On the 17‐item TAISCH administered, the 2 conditional questions ("When I asked to see Dr. X, s/he came within a reasonable amount of time." and "If Dr. X interacted with your family, how well did s/he deal with them?") were applicable to fewer than 40% of patients. As such, they were not included in the analysis.

Internal Structure Validity Evidence

Results from factor analyses are shown in Table 2. The CFA modeling of a single factor solution with 15 items explained 42% of the total variance. The 27 hospitalists' average 15‐item TAISCH scores ranged from 3.25 to 4.28 (mean [standard deviation] = 3.82 [0.24]; possible score range: 1–5). Reliability of the 15‐item TAISCH was appropriate (Cronbach's α = 0.88).

Table 2. Factor Loadings for 15‐Item TAISCH Measure Based on Confirmatory Factor Analysis

TAISCH (Cronbach's α = 0.88); the factor loading for each item is shown in parentheses.

NOTE: Abbreviations: TAISCH, Tool to Assess Inpatient Satisfaction with Care from Hospitalists. *Response category: below average, average, above average, top 10% of all doctors, the very best of any doctor I have come across. Other response categories: none, a little, some, a lot, tremendously; strongly disagree, disagree, neutral, agree, strongly agree; poor, fair, good, very good, excellent; never, rarely, sometimes, most of the time, every single time.

Compared to all other physicians that you know, how do you rate Dr. X's compassion, empathy, and concern for you?* (0.91)
Compared to all other physicians that you know, how do you rate Dr. X's ability to communicate with you?* (0.88)
Compared to all other physicians that you know, how do you rate Dr. X's skill in diagnosing and treating your medical conditions?* (0.88)
Compared to all other physicians that you know, how do you rate Dr. X's fund of knowledge?* (0.80)
How much confidence do you have in Dr. X's plan for your care? (0.71)
Dr. X kept me informed of the plans for my care. (0.69)
Effectively preparing patients for discharge is an important part of what doctors in the hospital do. How well has Dr. X done in getting you ready to be discharged from the hospital? (0.67)
Dr. X let me talk without interrupting. (0.60)
Dr. X encouraged me to ask questions. (0.59)
Dr. X checks to be sure I understood everything. (0.55)
I sensed Dr. X was in a rush when s/he was with me. (reverse coded) (0.55)
Dr. X showed interest in my views and opinions about my health. (0.54)
Dr. X discusses options with me and involves me in decision making. (0.47)
Dr. X asked permission to enter the room and waited for an answer. (0.25)
Dr. X sat down when s/he visited my bedside. (0.14)
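The reported 42% of variance explained can be recovered directly from the loadings in Table 2, since the variance explained by a single factor is the sum of squared loadings divided by the number of items:

```python
# Factor loadings from Table 2 (15-item TAISCH, single-factor CFA)
loadings = [0.91, 0.88, 0.88, 0.80, 0.71, 0.69, 0.67, 0.60,
            0.59, 0.55, 0.55, 0.54, 0.47, 0.25, 0.14]

# Variance explained by the single factor = sum of squared loadings / n items
explained = sum(l * l for l in loadings) / len(loadings)
print(f"variance explained = {explained:.0%}")  # ~42%, matching the text
```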

As shown in Table 2, 2 variables had factor loadings below the minimum threshold of 0.40 in the CFA for the 15‐item TAISCH when modeling a single factor solution. Both items were related to physician etiquette: "Dr. X asked permission to enter the room and waited for an answer." and "Dr. X sat down when s/he visited my bedside."

When the CFA was executed again as a single factor, omitting the 2 items that demonstrated lower factor loadings, the 13‐item single factor solution explained 47% of the total variance, and Cronbach's α was 0.92.

EFA models were also explored for potential alternate solutions. These analyses resulted in lower reliability (low Cronbach's α), weak construct operationalization, and poor face validity (as judged by the research team).

Both the 13‐ and 15‐item single factor solutions were examined further to determine whether associations with criterion variables (pain, empathy) differed substantively. Given that results were similar across both solutions, subsequent analyses were completed with the 15‐item single factor solution, which included the etiquette‐related variables.

Relationship to Other Variables Validity Evidence

The association between the 15‐item TAISCH and JSPPPE was significantly positive (β = 12.2, P < 0.001). Additionally, there was a positive and significant association between TAISCH and the overall satisfaction question, "I would recommend Dr. X to my loved ones should they need hospitalization in the future" (β = 11.2, P < 0.001). This overall satisfaction question was also associated positively with JSPPPE (β = 13.2, P < 0.001). There was a statistically significant negative association between TAISCH and the Wong‐Baker pain scale (β = −2.42, P < 0.05).

The PG data from the same period were available for 24 out of 27 hospitalists. The number of PG surveys collected per provider ranged from 5 to 30 (mean=14). At the provider level, there was not a statistically significant correlation between PG and the 15‐item TAISCH (P=0.51). Of note, PG was also not significantly correlated with the overall satisfaction question, JSPPPE, or the Wong‐Baker pain scale (all P>0.10).
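The provider-level comparison described above amounts to correlating each hospitalist's mean TAISCH score with his or her PG score. A minimal sketch with simulated, deliberately unrelated data (not the study's values) is:

```python
import numpy as np

rng = np.random.default_rng(2)
n_providers = 24
providers = np.repeat(np.arange(n_providers), 7)   # ~7 TAISCH surveys each
patient_scores = rng.normal(3.8, 0.5, size=providers.size)

# Aggregate patient-level TAISCH ratings to a provider-level mean
taisch_by_provider = np.array(
    [patient_scores[providers == p].mean() for p in range(n_providers)]
)
# Simulated PG scores, generated independently of TAISCH
pg_by_provider = rng.normal(85, 5, size=n_providers)

r = np.corrcoef(taisch_by_provider, pg_by_provider)[0, 1]
print(f"provider-level r = {r:.2f}")
```

With only 24 provider-level pairs, even a moderate true correlation can fail to reach significance, which is the power concern raised in the Discussion.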

DISCUSSION

Our new metric, TAISCH, was found to be a reliable and valid measurement tool to assess patient satisfaction with the hospitalist physician's care. Because we only surveyed patients who could correctly identify their hospitalist physicians after interacting for at least 2 consecutive days, the attribution of the data to the individual hospitalist is almost certainly correct. The high participation rate indicates that the patients were not hesitant about rating their hospitalist provider's quality of care, even when asked while they were still in the hospital.

The majority of the patients approached were able to correctly identify their hospitalist provider. This rate (91%) was much higher than the rate previously reported in the literature, where a picture card was used to improve provider recognition.[22] It is also likely that having 1 physician, rather than a team of physicians, take care of patients made it easier for patients to recall the name and recognize the face of their inpatient provider.

The CFA of TAISCH showed good fit but suggests that 2 variables, both from Kahn's etiquette‐based medicine (EtBM) checklist,[9] may not load in the same way as the other items. Tackett and colleagues reported that hospitalists who performed more EtBM behaviors scored higher on PG evaluations.[23] Such results, along with the comparable explanation of variance and reliability, convinced us to retain these 2 items in the final 15‐item TAISCH rather than omitting them as the CFA alone might dictate. Although the literature supports the idea that physician etiquette is related to the perception of high‐quality care, it is possible that these 2 questions were answered differently (and thereby failed to load the same way) because environmental limitations may prevent physicians from performing these behaviors consistently. We prefer the 15‐item version of TAISCH, and future studies may provide additional information about its performance as compared with the 13‐item adaptation.

The significantly negative association between the Wong‐Baker pain scale and TAISCH stresses the importance of adequately addressing and treating the patient's pain. Hanna et al. showed that patients' perceptions of pain control were associated with their overall satisfaction scores as measured by HCAHPS.[24] The association seen in our study was not unexpected, because TAISCH is administered while patients are acutely ill in the hospital, when pain is likely more prevalent and severe than it is in the postdischarge setting (when the HCAHPS or PG surveys are administered). Interestingly, Hanna et al. discovered that the team's attention to controlling pain was more strongly correlated with overall satisfaction than was the actual pain control.[24] These data, now supported by our study, should serve to remind us that a hospitalist's concern and effort to relieve pain may augment patient satisfaction with the quality of care, even when eliminating the pain may be difficult or impossible.

TAISCH was found not to be correlated with PG scores. Several explanations for this deserve consideration. First, the postdischarge PG survey used at our institution does not list the name of the specific hospitalist provider for patients to evaluate. Because patients encounter multiple physicians during their hospital stay (eg, emergency department physicians, hospitalist providers, consultants), it is possible that patients are not reflecting on the attending of record when completing the mailed PG questionnaire. Second, the patients who responded to TAISCH and to PG were different groups; almost all eligible patients completed TAISCH, as opposed to the small minority who decided to respond to the PG survey. Third, TAISCH measures physicians' performance more comprehensively, with a larger number of variables. Last, it is possible that we were underpowered to detect a significant correlation, because only 24 providers had data from both TAISCH and PG. However, our results endorse using caution in interpreting PG scores for individual hospitalists' performance, particularly for high‐stakes consequences (including the provision of incentives to high performers and the insistence on remediation for low performers).

Several limitations of this study should be considered. First, only hospitalist providers from a single division were assessed. This may limit the generalizability of our findings. Second, although patients were assured about the confidentiality of their responses, they might have provided more favorable answers because they felt uncomfortable rating their physician poorly. One review article on the measurement of healthcare satisfaction indicated that impersonal (mailed) methods result in more criticism and lower satisfaction than assessments made in person using interviews. As the trade‐off, mailed surveys yield lower response rates that may introduce other forms of bias.[25] Even on the HCAHPS survey report for the same period from our institution, 78% of patients gave top box ratings for our doctors' communication skills, which is at the state average.[26] Similarly, a study that used postdischarge telephone interviews to collect patients' satisfaction with hospitalists' care quality reported an average score of 4.20 out of 5.[27] These findings confirm that highly skewed ratings are common for these types of surveys, irrespective of how or when the data are collected.

Despite the aforementioned limitations, TAISCH use need not be limited to hospitalist physicians. It may also be used to assess allied health professionals' or trainees' performance, which cannot be assessed by HCAHPS or PG. Applying TAISCH in different hospital settings (eg, emergency departments or critical care units), assessing hospitalists' reactions to TAISCH, learning whether TAISCH leads to changes in hospitalists' behavior, and appraising whether performance can improve in response to coaching interventions for those performing poorly are all research questions that merit additional consideration.

CONCLUSION

TAISCH allows for obtaining patient satisfaction data that are highly attributable to specific hospitalist providers. The data collection method also permits high response rates so that input comes from almost all patients. The timeliness of the TAISCH assessments also makes it possible for real‐time service recovery, which is impossible with other commonly used metrics assessing patient satisfaction. Our next step will include testing the most effective way to provide feedback to providers and to coach these individuals so as to improve performance.

Acknowledgements

The authors would like to thank Po‐Han Chen at the BEAD Core for his statistical analysis support.

Disclosures: This study was supported by the Johns Hopkins Osler Center for Clinical Excellence. Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. The authors report no conflicts of interest.

References
  1. Blumenthal D, Jena AB. Hospital value-based purchasing. J Hosp Med. 2013;8:271-277.
  2. HCAHPS survey. Hospital Consumer Assessment of Healthcare Providers and Systems website. Available at: http://www.hcahpsonline.org/home.aspx. Accessed August 27, 2011.
  3. Press Ganey survey. Press Ganey website. Available at: http://www.pressganey.com/index.aspx. Accessed February 12, 2013.
  4. Society of Hospital Medicine. Membership Committee Guidelines for Hospitalists Patient Satisfaction Surveys. Available at: http://www.hospitalmedicine.org/AM/Template.cfm?Section=Practice_Resources119:166.e7e16.
  5. Makoul G, Krupat E, Chang CH. Measuring patient views of physician communication skills: development and testing of the Communication Assessment Tool. Patient Educ Couns. 2007;67:333-342.
  6. Jenkinson C, Coulter A, Bruster S. The Picker Patient Experience Questionnaire: development and validation using data from in-patient surveys in five countries. Int J Qual Health Care. 2002;14:353-358.
  7. The Patient Satisfaction Questionnaire from RAND Health. RAND Health website. Available at: http://www.rand.org/health/surveys_tools/psq.html. Accessed December 30, 2011.
  8. Kahn MW. Etiquette-based medicine. N Engl J Med. 2008;358:1988-1989.
  9. Christmas C, Kravet S, Durso C, Wright SM. Defining clinical excellence in academic medicine: a qualitative study of the master clinicians. Mayo Clin Proc. 2008;83:989-994.
  10. Wright SM, Christmas C, Burkhart K, Kravet S, Durso C. Creating an academy of clinical excellence at Johns Hopkins Bayview Medical Center: a 3-year experience. Acad Med. 2010;85:1833-1839.
  11. Bendapudi NM, Berry LL, Keith FA, Turner Parish J, Rayburn WL. Patients' perspectives on ideal physician behaviors. Mayo Clin Proc. 2006;81(3):338-344.
  12. The Miller-Coulson Academy of Clinical Excellence at Johns Hopkins. Available at: http://www.hopkinsmedicine.org/innovative/signature_programs/academy_of_clinical_excellence/. Accessed April 25, 2014.
  13. Osler Center for Clinical Excellence at Johns Hopkins. Available at: http://www.hopkinsmedicine.org/johns_hopkins_bayview/education_training/continuing_education/osler_center_for_clinical_excellence. Accessed April 25, 2014.
  14. Wong-Baker FACES Foundation. Available at: http://www.wongbakerfaces.org. Accessed July 8, 2013.
  15. Kane GC, Gotto JL, Mangione S, West S, Hojat M. Jefferson Scale of Patient's Perceptions of Physician Empathy: preliminary psychometric data. Croat Med J. 2007;48:81-86.
  16. Glaser KM, Markham FW, Adler HM, McManus PR, Hojat M. Relationships between scores on the Jefferson Scale of Physician Empathy, patient perceptions of physician empathy, and humanistic approaches to patient care: a validity study. Med Sci Monit. 2007;13(7):CR291-CR294.
  17. Campbell DT, Fiske DW. Convergent and discriminant validation by the multitrait-multimethod matrix. Psychol Bull. 1959;56(2):81-105.
  18. The Joint Commission. Facts about pain management. Available at: http://www.jointcommission.org/pain_management. Accessed April 25, 2014.
  19. Berg K, Majdan JF, Berg D, et al. Medical students' self-reported empathy and simulated patients' assessments of student empathy: an analysis by gender and ethnicity. Acad Med. 2011;86(8):984-988.
  20. Gorsuch RL. Factor Analysis. Hillsdale, NJ: Lawrence Erlbaum Associates; 1983.
  21. Arora VM, Schaninger C, D'Arcy M, et al. Improving inpatients' identification of their doctors: use of FACE cards. Jt Comm J Qual Patient Saf. 2009;35(12):613-619.
  22. Tackett S, Tad-y D, Rios R, et al. Appraising the practice of etiquette-based medicine in the inpatient setting. J Gen Intern Med. 2013;28(7):908-913.
  23. Hanna MN, Gonzalez-Fernandez M, Barrett AD, et al. Does patient perception of pain control affect patient satisfaction across surgical units in a tertiary teaching hospital? Am J Med Qual. 2012;27:411-416.
  24. Crow R, Gage H, Hampson S, et al. The measurement of satisfaction with health care: implications for practice from a systematic review of the literature. Health Technol Assess. 2002;6(32):1-244.
  25. Centers for Medicare 7(2):131-136.
Issue
Journal of Hospital Medicine - 9(9)
Page Number
553-558
Display Headline
Development and validation of the tool to assess inpatient satisfaction with care from hospitalists
Article Source

© 2014 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Haruka Torok, MD, Johns Hopkins University School of Medicine, Johns Hopkins Bayview Medical Center, 5200 Eastern Ave., MFL Bldg, West Tower 6th Floor CIMS Suite, Baltimore, MD 21224; Telephone: 410‐550‐5018; Fax: 410‐550‐2972; E‐mail: htorok1@jhmi.edu

Sepsis Outcomes Across Settings

Article Type
Changed
Mon, 05/22/2017 - 18:30
Display Headline
Does sepsis treatment differ between primary and overflow intensive care units?

Sepsis is a major cause of death in hospitalized patients.1-3 It is recommended that patients with sepsis be treated with early appropriate antibiotics, as well as early goal-directed therapy including fluid and vasopressor support, according to evidence-based guidelines.4-6 Following such evidence-based protocols and process-of-care interventions has been shown to be associated with better patient outcomes, including decreased mortality.7, 8

Most patients with severe sepsis are cared for in intensive care units (ICUs). At times, there are no beds available in the primary ICU and patients presenting to the hospital with sepsis are cared for in other units. Patients admitted to a non‐preferred clinical inpatient setting are sometimes referred to as overflow.9 ICUs can differ significantly in staffing patterns, equipment, and training.10 It is not known if overflow sepsis patients receive similar care when admitted to non‐primary ICUs.

At our hospital, we have an active bed management system led by the hospitalist division.11 This system includes protocols to place sepsis patients in the overflow ICU if the primary ICU is full. We hypothesized that process‐of‐care interventions would be more strictly adhered to when sepsis patients were in the primary ICU rather than in the overflow unit at our institution.

METHODS

Design

This was a retrospective cohort study of all patients with sepsis admitted to either the primary medical intensive care unit (MICU) or the overflow cardiac intensive care unit (CICU) at our hospital between July 2009 and February 2010. We reviewed the admission database starting with the month of February 2010 and proceeded backwards, month by month, until we reached the target number of patients.

Setting

The study was conducted at our 320‐bed, university‐affiliated academic medical center in Baltimore, MD. The MICU and the CICU are closed units that are located adjacent to each other and have 12 beds each. They are staffed by separate pools of attending physicians trained in pulmonary/critical care medicine and cardiovascular diseases, respectively, and no attending physician attends in both units. During the study period, there were 10 unique MICU and 14 unique CICU attending physicians; while most attending physicians covered the unit for 14 days, none of the physicians were on service more than 2 of the 2‐week blocks (28 days). Each unit is additionally staffed by fellows of the respective specialties, and internal medicine residents and interns belonging to the same residency program (who rotate through both ICUs). Residents and fellows are generally assigned to these ICUs for 4 continuous weeks. The assignment of specific attendings, fellows, and residents to either ICU is performed by individual division administrators on a rotational basis based on residency, fellowship, and faculty service requirements. The teams in each ICU function independently of each other. Clinical care of patients requiring the assistance of the other specialty (pulmonary medicine or cardiology) is guided via an official consultation. Orders on patients in both ICUs are written by the residents using the same computerized order entry system (CPOE) under the supervision of their attending physicians. The nursing staff is exclusive to each ICU. The respiratory therapists spend time in both units. The nursing and respiratory therapy staff in both ICUs are similarly trained and certified, and have the same patient‐to‐nursing ratios.

Subjects

All patients admitted with a possible diagnosis of sepsis to either the MICU or CICU were identified by querying the hospital electronic triage database called etriage. This Web‐based application is used to admit patients to all the Medicine services at our hospital. We employed a wide case‐finding net using keywords that included pneumonia, sepsis, hypotension, high lactate, hypoxia, UTI (urinary tract infection)/urosepsis, SIRS (systemic inflammatory response syndrome), hypothermia, and respiratory failure. A total of 197 adult patients were identified. The charts and the electronic medical record (EMR) of these patients were then reviewed to determine the presence of a sepsis diagnosis using standard consensus criteria.12 Severe sepsis was defined by sepsis associated with organ dysfunction, hypoperfusion, or hypotension using criteria described by Bone et al.12
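As a rough illustration, the wide case-finding net described above amounts to a case-insensitive keyword match over free-text triage entries. The function below is a hypothetical sketch, not the actual etriage query; flagged records would still require manual chart review against consensus criteria, since a keyword hit alone does not establish a sepsis diagnosis.

```python
# Hypothetical sketch of the keyword-based case-finding screen; the
# function name and sample notes are invented for illustration only.
KEYWORDS = [
    "pneumonia", "sepsis", "hypotension", "high lactate", "hypoxia",
    "uti", "urosepsis", "sirs", "hypothermia", "respiratory failure",
]

def matches_screen(triage_text: str) -> bool:
    """Return True if a free-text triage entry contains any screening keyword."""
    text = triage_text.lower()
    return any(keyword in text for keyword in KEYWORDS)
```

A screen this broad is deliberately over-inclusive (e.g., plain substring matching will flag words that merely contain "uti"), which is why the 197 flagged records were then reviewed against the Bone et al. consensus criteria.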

Fifty-six patients did not meet the criteria for sepsis and were excluded from the analysis, leaving a total of 141 patients in the study. Because this was a pilot study, we did not have preliminary data on adherence to sepsis guidelines in overflow ICUs with which to calculate an appropriate sample size. However, in 2 recent studies of dedicated ICUs (Ferrer et al13 and Castellanos-Ortega et al14), the average adherence to a single measure, such as checking of lactate level, was 27% pre-intervention and 62% post-intervention. With an alpha level of 0.05 and 80% power, one would need 31 patients in each unit to detect a difference of this size. Although these data do not necessarily apply to overflow ICUs or to combinations of processes, we set a goal of having at least 31 patients in each ICU.
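The stated requirement of 31 patients per unit can be reproduced with the standard normal-approximation sample-size formula for comparing two independent proportions. The following stdlib-only sketch is illustrative; it is not the authors' actual calculation.

```python
import math
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per group to detect p1 vs p2 with a two-sided z test
    for two independent proportions (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # approx. 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # approx. 0.84 for 80% power
    p_bar = (p1 + p2) / 2                       # pooled proportion under H0
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# 27% pre-intervention vs 62% post-intervention adherence, as cited above
print(n_per_group(0.27, 0.62))  # → 31
```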

The study was approved by the Johns Hopkins Institutional Review Board. The need for informed consent was waived given the retrospective nature of the study.

Data Extraction Process and Procedures

The clinical data were extracted from the EMR and patient charts using a standardized data extraction instrument, modified from a case report form (CRF) used and validated in previous studies.15, 16 The following procedures were used for data extraction:

  • The data extractors included 4 physicians and 1 research assistant and were trained and tested by a single expert in data review and extraction.

  • Lab data were transcribed directly from the EMR. Acute physiology and chronic health evaluation (APACHE II) scores were calculated using the website http://www.sfar.org/subores2/apache22.html (Société Française d'Anesthésie et de Réanimation). Sepsis-related organ failure assessment (SOFA) scores were calculated using the usual criteria.17

  • Delivery of specific treatments and interventions, including their timing, was extracted from the EMR.

  • The attending physicians' notes were used as the final source to assign diagnoses, such as the presence of acute lung injury and the site of infection, and to record interventions.

 

Data Analysis

Analyses focused primarily on assessing whether patients were treated differently between the MICU and CICU. The primary exposure variables were the process-of-care measures. We specifically used measurement of central venous saturation, checking of lactate level, and administration of antibiotics within 60 minutes in patients with severe sepsis as our primary process-of-care measures.13 Continuous variables were reported as mean ± standard deviation, and Student's t tests were used to compare the 2 groups. Categorical data were expressed as frequency distributions, and chi-square tests were used to identify differences between the 2 groups. All tests were 2-tailed with statistical significance set at 0.05. Statistical analysis was performed using SPSS version 19.0 (IBM, Armonk, NY).
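For example, the pooled two-sample t statistic can be computed directly from the summary statistics later reported for the APACHE II scores. This is an illustrative stdlib-only reimplementation of the test, not the SPSS output itself.

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Student's t statistic for two independent groups, pooled variance."""
    # Pooled variance across the two groups (n1 + n2 - 2 degrees of freedom)
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))  # standard error of the difference
    return (mean1 - mean2) / se

# APACHE II scores: MICU 25.53 ± 9.11 (n = 100) vs CICU 24.37 ± 9.53 (n = 41)
t = pooled_t(25.53, 9.11, 100, 24.37, 9.53, 41)
print(round(t, 2))  # → 0.68, consistent with the reported nonsignificant P = 0.50
```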

To overcome data constraints, we created a dichotomous variable for each of the 3 primary processes‐of‐care (indicating receipt of process or not) and then combined them into 1 dichotomous variable indicating whether or not the patients with severe sepsis received all 3 primary processes‐of‐care. The combined variable was the key independent variable in the model.

We performed logistic regression analysis on patients with severe sepsis. The equation Logit[P(ICU type = CICU)] = α + β1(Combined) + β2(Age) describes the framework of the model, with ICU type being the dependent variable and the combined variable of patients receiving all primary measures being the independent variable, controlled for age. Logistic regression was performed using JMP (SAS Institute, Inc, Cary, NC).
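As a schematic of this model form (the actual analysis was run in JMP), the sketch below fits logit(P) = α + β1·x1 + β2·x2 by gradient ascent on the log-likelihood, in pure Python. The data rows are invented toy values, not study data; age is scaled to decades for numerical stability.

```python
import math

def fit_logistic(X, y, lr=0.1, n_iter=20000):
    """Fit logit(P) = a + b1*x1 + ... + bk*xk by batch gradient ascent
    on the log-likelihood. X is a list of feature rows; y is 0/1."""
    w = [0.0] * (len(X[0]) + 1)              # intercept followed by coefficients
    for _ in range(n_iter):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of y = 1
            err = yi - p
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / len(y) for wj, g in zip(w, grad)]
    return w

# Hypothetical rows: [received all 3 processes (0/1), age in decades]
X = [[1, 5.5], [1, 6.0], [1, 5.8], [0, 7.5], [0, 8.0], [0, 7.8]]
y = [0, 0, 0, 1, 1, 1]                       # 1 = admitted to CICU (toy labels)
a, b1, b2 = fit_logistic(X, y)
```

In practice one would report exponentiated coefficients as odds ratios with confidence intervals, as the study does in its secondary mortality model.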

We additionally performed a secondary analysis to explore possible predictors of mortality using a logistic regression model, with the event of death as the dependent variable, and age, APACHE II scores, combined processes‐of‐care, and ICU type included as independent variables.

RESULTS

There were 100 patients admitted to the MICU and 41 patients admitted to the CICU during the study period (Table 1). The majority of the patients were admitted to the ICUs directly from the emergency department (ED) (n = 129), with a small number of patients who were transferred from the Medicine floors (n = 12).

Baseline Patient Characteristics for the 141 Patients Admitted to Intensive Care Units With Sepsis During the Study Period

Characteristic                                            MICU (N = 100)   CICU (N = 41)   P Value
Age in years, mean ± SD                                   67 ± 14.8        72 ± 15.1       0.11
Female, n (%)                                             57 (57)          27 (66)         0.33
Patients with chronic organ insufficiency, n (%)          59 (59)          22 (54)         0.56
Patients with severe sepsis, n (%)                        88 (88)          21 (51)         <0.001
Patients needing mechanical ventilation, n (%)            43 (43)          14 (34)         0.33
APACHE II score, mean ± SD                                25.53 ± 9.11     24.37 ± 9.53    0.50
SOFA score on day 1, mean ± SD                            7.09 ± 3.55      6.71 ± 4.57     0.60
Patients with acute lung injury on presentation, n (%)    8 (8)            2 (5)           0.50

Abbreviations: CICU, cardiac intensive care unit; MICU, medical intensive care unit; APACHE II, acute physiology and chronic health evaluation; SOFA, sepsis-related organ failure assessment.

There were no significant differences between the 2 study groups in terms of age, sex, primary site of infection, mean APACHE II score, SOFA scores on day 1, chronic organ insufficiency, immune suppression, or need for mechanical ventilation (Table 1). The most common site of infection was lung. There were significantly more patients with severe sepsis in the MICU (88% vs 51%, P <0.001).
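The severe-sepsis comparison (88% of 100 vs 21 of 41) can be checked with the standard chi-square statistic for a 2×2 table. This stdlib-only sketch is for illustration; the study's tests were performed in SPSS.

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic (no continuity correction) for the 2x2 table
    [[a, b], [c, d]], via N*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Severe sepsis: MICU 88 of 100 vs CICU 21 of 41
chi2 = chi_square_2x2(88, 12, 21, 20)
print(round(chi2, 1))  # → 22.4, far above the 10.83 cutoff for P < 0.001 at 1 df
```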

Sepsis Process‐of‐Care Measures

There were no significant differences in the proportion of severe sepsis patients who had central venous saturation checked (MICU: 46% vs CICU: 41%, P = 0.67), lactate level checked (95% vs 100%, P = 0.37), or received antibiotics within 60 minutes of presentation (75% vs 69%, P = 0.59) (Table 2). Multiple other processes and treatments were delivered similarly, as shown in Table 2.

ICU Treatments and Processes-of-Care for Patients With Sepsis During the Study Period

Primary Process-of-Care Measures (Severe Sepsis Patients)                MICU (N = 88)   CICU (N = 21)   P Value
Patients with central venous oxygen saturation checked, n (%)*           31 (46)         7 (41)          0.67
Patients with lactate level checked, n (%)*                              58 (95)         16 (100)        0.37
Received antibiotics within 60 min, n (%)*                               46 (75)         11 (69)         0.59
Patients who had all 3 above processes and treatments, n (%)             19 (22)         4 (19)          0.79
Received vasopressor, n (%)                                              25 (28)         8 (38)          0.55

ICU Treatments and Processes (All Sepsis Patients)                       (N = 100)       (N = 41)
Fluid balance 24 h after admission in liters, mean ± SD                  1.96 ± 2.42     1.42 ± 2.63     0.24
Patients who received stress dose steroids, n (%)                        11 (11)         4 (10)          0.83
Patients who received Drotrecogin alfa, n (%)                            0 (0)           0 (0)
Morning glucose 24 h after admission in mg/dL, mean ± SD                 161 ± 111       144 ± 80        0.38
Received DVT prophylaxis within 24 h of admission, n (%)                 74 (74)         20 (49)         0.004
Received GI prophylaxis within 24 h of admission, n (%)                  68 (68)         18 (44)         0.012
Received RBC transfusion within 24 h of admission, n (%)                 8 (8)           7 (17)          0.11
Received renal replacement therapy, n (%)                                13 (13)         3 (7)           0.33
Received a spontaneous breathing trial within 24 h of admission, n (%)*  4 (11)          4 (33)          0.07

Abbreviations: CICU, cardiac intensive care unit; DVT, deep vein thrombosis; GI, gastrointestinal; ICU, intensive care unit; MICU, medical intensive care unit; RBC, red blood cell; SD, standard deviation. *Missing data cause percentages to differ from what would be expected if data were available for all patients.

Logistic regression analysis examining the receipt of all 3 primary processes-of-care while controlling for age revealed that the odds of being in one of the ICUs were not significantly different (P = 0.85). The secondary regression models revealed that only the APACHE II score (odds ratio [OR] = 1.21; confidence interval [CI], 1.12-1.31) was significantly associated with higher odds of mortality. ICU type [MICU vs CICU] (OR = 1.85; CI, 0.42-8.20), age (OR = 1.01; CI, 0.97-1.06), and combined processes-of-care (OR = 0.26; CI, 0.07-1.01) were not significantly associated with odds of mortality.

A review of microbiologic sensitivities revealed a trend toward the cultured microorganism(s) being more likely to be resistant to the initial antibiotics administered in the MICU than in the CICU (15% vs 5%, respectively; P = 0.09).

Mechanical Ventilation Parameters

The majority of the ventilated patients were admitted to each ICU in assist control (AC) mode. There were no significant differences in categories of mean tidal volume (TV) (P = 0.3), mean plateau pressures (P = 0.12), mean fraction of inspired oxygen (FiO2) (P = 0.95), and mean positive end‐expiratory pressures (PEEP) (P = 0.98) noted across the 2 units at the time of ICU admission, and also 24 hours after ICU admission. Further comparison of measurements of tidal volumes and plateau pressures over 7 days of ICU stay revealed no significant differences in the 2 ICUs (P = 0.40 and 0.57, respectively, on day 7 of ICU admission). There was a trend towards significance in fewer patients in the MICU receiving spontaneous breathing trial within 24 hours of ICU admission (11% vs 33%, P = 0.07) (Table 2).

Patient Outcomes

There were no significant differences in ICU mortality (MICU 19% vs CICU 10%, P = 0.18), or hospital mortality (21% vs 15%, P = 0.38) across the units (Table 3). Mean ICU and hospital length of stay (LOS) and proportion of patients discharged home with unassisted breathing were similar (Table 3).

Patient Outcomes for the 141 Patients Admitted to the Intensive Care Units With Sepsis During the Study Period

Patient Outcome                                     MICU (N = 100)   CICU (N = 41)   P Value
ICU mortality, n (%)                                19 (19)          4 (10)          0.18
Hospital mortality, n (%)                           21 (21)          6 (15)          0.38
Discharged home with unassisted breathing, n (%)    33 (33)          19 (46)         0.14
ICU length of stay in days, mean ± SD               4.78 ± 6.24      4.92 ± 6.32     0.97
Hospital length of stay in days, mean ± SD          9.68 ± 9.22      9.73 ± 9.33     0.98

Abbreviations: CICU, cardiac intensive care unit; ICU, intensive care unit; MICU, medical intensive care unit; SD, standard deviation.

DISCUSSION

Since sepsis is more commonly treated in the medical ICU and some data suggest that specialty ICUs may be better at providing desired care,18, 19 we believed that patients treated in the MICU would be more likely to receive guideline-concordant care. The study refutes our a priori hypothesis and reveals that evidence-based processes-of-care associated with improved outcomes for sepsis are similarly implemented at our institution in the primary and overflow ICUs. These findings are important, as ICU bed availability is a frequent problem and many hospitals overflow patients to non-primary ICUs.9, 20

The observed equivalence in the care delivered may be a function of the relatively high number of patients with sepsis treated in the overflow unit, thereby giving the delivery teams enough experience to provide the desired care. An alternative explanation could be that the residents in the CICU brought with them the experience from having previously trained in the MICU. Although some of the care processes for sepsis patients are influenced by the CPOE (with embedded order sets and protocols), it is unlikely that CPOE can fully account for the similarity in care, because many processes and therapies (like use of steroids, amount of fluid delivered in the first 24 hours, packed red blood cell [PRBC] transfusion, and spontaneous breathing trials) are not embedded within order sets.

The significant difference noted in the areas of deep vein thrombosis (DVT) and gastrointestinal (GI) prophylaxis within 24 hours of ICU admission was unexpected. These preventive therapies are included in initial order sets in the CPOE, which prompt physicians to order them as standard‐of‐care. With respect to DVT prophylaxis, we suspect that some of the difference might be attributable to specific contraindications to its use, which could have been more common in one of the units. There were more patients in MICU on mechanical ventilation (although not statistically significant) and with severe sepsis (statistically significant) at time of admission, which might have contributed to the difference noted in use of GI prophylaxis. It is also plausible that these differences might have disappeared if they were reassessed beyond 24 hours into the ICU admission. We cannot rule out the presence of unit‐ and physician‐level differences that contributed to this. Likewise, there was an unexpected trend towards significance, wherein more patients in CICU had spontaneous breathing trials within 24 hours of admission. This might also be explained by the higher number of patients with severe sepsis in the MICU (preempting any weaning attempts). These caveats aside, it is reassuring that, at our institution, admitting septic patients to the first available ICU bed does not adversely affect important processes‐of‐care.

One might ask whether this study's data should reassure other sites that are boarding septic patients in non-primary ICUs. Irrespective of the number of patients studied or the degree of statistical significance of the associations, an observational study design cannot prove that boarding septic patients in non-primary ICUs is either safe or unsafe. However, we hope that readers reflect on, and take inventory of, systems issues that may differ between units, with an eye toward eliminating variation so that all units managing septic patients are primed to deliver guideline-concordant care. Other hospitals that use CPOE with sepsis order sets, have protocols for sepsis care, and train nursing and respiratory therapy staff to meet high standards might be pleased to see that the patients in our study received comparable, high-quality care across the 2 units. While our data suggest that boarding patients in overflow units may be safe, these findings would need to be replicated at other sites using prospective designs to prove safety.

Length of emergency room stay prior to admission is associated with higher mortality rates.21-23 At many hospitals, critical care beds are a scarce resource, such that most hospitals have a policy for the triage of patients to critical care beds.24, 25 Lundberg and colleagues demonstrated that patients who developed septic shock on the medical wards experienced delays in the receipt of intravenous fluids and inotropic agents and in transfer to a critical care setting.26 Thus, rather than waiting in the ED or on the medical service for an MICU bed to become available, it may be wisest to admit a critically ill septic patient to the first available ICU bed, even an overflow ICU. In a recent study by Sidlow and Aggarwal, 1104 patients discharged from the coronary care unit (CCU) with a non-cardiac primary diagnosis were compared to patients admitted to the MICU in the same hospital.27 The study found no differences between ICUs in patient mortality, 30-day readmission rate, hospital LOS, ICU LOS, or the safety outcomes of ventilator-associated pneumonia and catheter-associated bloodstream infections. However, their study did not examine the processes-of-care delivered in the primary ICU versus the overflow unit, and did not validate the primary diagnoses of patients admitted to the ICU.

Several limitations of this study should be considered. First, this study was conducted at a single center. Second, we used a retrospective study design; however, a prospective study randomizing patients to 1 of the 2 units would likely never be possible. Third, the relatively small number of patients limited the power of the study to detect mortality differences between the units. However, this was a pilot study focused on processes of care as opposed to clinical outcomes. Fourth, it is possible that we did not capture every single patient with sepsis with our keyword search. Our use of a previously validated screening process should have limited the number of missed cases.15, 16 Fifth, although the 2 ICUs have exclusive nursing staff and attending physicians, the housestaff and respiratory therapists do rotate between the 2 ICUs and place orders in the common CPOE. The rotating housestaff may certainly represent a source for confounding, but the large numbers (>30) of evenly spread housestaff over the study period minimizes the potential for any trainee to be responsible for a large proportion of observed practice. Sixth, ICU attendings are the physicians of record and could influence the results. Because no attending physician was on service for more than 4 weeks during the study period, and patients were equally spread over this same time, concerns about clustering and biases this may have created should be minimal but cannot be ruled out. Seventh, some interventions and processes, such as antibiotic administration and measurement of lactate, may have been initiated in the ED, thereby decreasing the potential for differences between the groups. Additionally, we cannot rule out the possibility that factors other than bed availability drove the admission process (we found that the relative proportion of patients admitted to overflow ICU during hours of ambulance diversion was similar to the overflow ICU admissions during non‐ambulance diversion hours). 
It is possible that some selection bias by the hospitalist assigning patients to specific ICUs influenced triage decisions, although all triaging doctors go through the same training in active bed management.11 While more patients admitted to the MICU had severe sepsis, there were no differences between groups in APACHE II or SOFA scores. However, we cannot rule out other residual confounders. Finally, in a small number of cases (4/41, 10%), the CICU team consulted the MICU attending for assistance. This input had the potential to reduce disparities in care between the units.

Overflowing patients to non‐primary ICUs occurs in many hospitals. Our study demonstrates that sepsis treatment for overflow patients may be similar to that received in the primary ICU. While a large multicentered and randomized trial could determine whether significant management and outcome differences exist between primary and overflow ICUs, feasibility concerns make it unlikely that such a study will ever be conducted.

Acknowledgements

Disclosure: Dr Wright is a Miller‐Coulson Family Scholar and this work is supported by the Miller‐Coulson family through the Johns Hopkins Center for Innovative Medicine. Dr Sevransky was supported with a grant from National Institute of General Medical Sciences, NIGMS K‐23‐1399. All other authors disclose no relevant or financial conflicts of interest.

References
  1. Angus DC, Linde-Zwirble WT, Lidicker J, Clermont G, Carcillo J, Pinsky MR. Epidemiology of severe sepsis in the United States: analysis of incidence, outcome, and associated costs of care. Crit Care Med. 2001;29(7):1303-1310.
  2. Kumar G, Kumar N, Taneja A, et al; for the Milwaukee Initiative in Critical Care Outcomes Research (MICCOR) Group of Investigators. Nationwide trends of severe sepsis in the twenty first century (2000-2007). Chest. 2011;140(5):1223-1231.
  3. Dombrovskiy VY, Martin AA, Sunderram J, Paz HL. Rapid increase in hospitalization and mortality rates for severe sepsis in the United States: a trend analysis from 1993 to 2003. Crit Care Med. 2007;35(5):1244-1250.
  4. Dellinger RP, Levy MM, Carlet JM, et al. Surviving Sepsis Campaign: international guidelines for management of severe sepsis and septic shock: 2008. Crit Care Med. 2008;36(1):296-327.
  5. Jones AE, Shapiro NI, Trzeciak S, et al. Lactate clearance vs central venous oxygen saturation as goals of early sepsis therapy: a randomized clinical trial. JAMA. 2010;303(8):739-746.
  6. Rivers E, Nguyen B, Havstad S, et al. Early goal-directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med. 2001;345(19):1368-1377.
  7. Nguyen HB, Corbett SW, Steele R, et al. Implementation of a bundle of quality indicators for the early management of severe sepsis and septic shock is associated with decreased mortality. Crit Care Med. 2007;35(4):1105-1112.
  8. Kumar A, Zarychanski R, Light B, et al. Early combination antibiotic therapy yields improved survival compared with monotherapy in septic shock: a propensity-matched analysis. Crit Care Med. 2010;38(9):1773-1785.
  9. Johannes MS. A new dimension of the PACU: the dilemma of the ICU overflow patient. J Post Anesth Nurs. 1994;9(5):297-300.
  10. Groeger JS, Strosberg MA, Halpern NA, et al. Descriptive analysis of critical care units in the United States. Crit Care Med. 1992;20(6):846-863.
  11. Howell E, Bessman E, Kravet S, Kolodner K, Marshall R, Wright S. Active bed management by hospitalists and emergency department throughput. Ann Intern Med. 2008;149(11):804-811.
  12. Bone RC, Balk RA, Cerra FB, et al. Definitions for sepsis and organ failure and guidelines for the use of innovative therapies in sepsis. The ACCP/SCCM Consensus Conference Committee, American College of Chest Physicians/Society of Critical Care Medicine. Chest. 1992;101(6):1644-1655.
  13. Ferrer R, Artigas A, Levy MM, et al. Improvement in process of care and outcome after a multicenter severe sepsis educational program in Spain. JAMA. 2008;299(19):2294-2303.
  14. Castellanos-Ortega A, Suberviola B, Garcia-Astudillo LA, et al. Impact of the Surviving Sepsis Campaign protocols on hospital length of stay and mortality in septic shock patients: results of a three-year follow-up quasi-experimental study. Crit Care Med. 2010;38(4):1036-1043.
  15. Needham DM, Dennison CR, Dowdy DW, et al. Study protocol: the improving care of acute lung injury patients (ICAP) study. Crit Care. 2006;10(1):R9.
  16. Ali N, Gutteridge D, Shahul S, Checkley W, Sevransky J, Martin G. Critical illness outcome study: an observational study of protocols and mortality in intensive care units. Open Access J Clin Trials. 2011;3(September):55-65.
  17. Vincent JL, Moreno R, Takala J, et al. The SOFA (sepsis-related organ failure assessment) score to describe organ dysfunction/failure: on behalf of the Working Group on Sepsis-Related Problems of the European Society of Intensive Care Medicine. Intensive Care Med. 1996;22(7):707-710.
  18. Pronovost PJ, Angus DC, Dorman T, Robinson KA, Dremsizov TT, Young TL. Physician staffing patterns and clinical outcomes in critically ill patients: a systematic review. JAMA. 2002;288(17):2151-2162.
  19. Fuchs RJ, Berenholtz SM, Dorman T. Do intensivists in ICU improve outcome? Best Pract Res Clin Anaesthesiol. 2005;19(1):125-135.
  20. Lindsay M. Is the postanesthesia care unit becoming an intensive care unit? J Perianesth Nurs. 1999;14(2):73-77.
  21. Chalfin DB, Trzeciak S, Likourezos A, Baumann BM, Dellinger RP; for the DELAY-ED Study Group. Impact of delayed transfer of critically ill patients from the emergency department to the intensive care unit. Crit Care Med. 2007;35(6):1477-1483.
  22. Renaud B, Santin A, Coma E, et al. Association between timing of intensive care unit admission and outcomes for emergency department patients with community-acquired pneumonia. Crit Care Med. 2009;37(11):2867-2874.
  23. Shen YC, Hsia RY. Association between ambulance diversion and survival among patients with acute myocardial infarction. JAMA. 2011;305(23):2440-2447.
  24. Teres D. Civilian triage in the intensive care unit: the ritual of the last bed. Crit Care Med. 1993;21(4):598-606.
  25. Sinuff T, Kahnamoui K, Cook DJ, Luce JM, Levy MM; for the Values Ethics and Rationing in Critical Care Task Force. Rationing critical care beds: a systematic review. Crit Care Med. 2004;32(7):1588-1597.
  26. Lundberg JS, Perl TM, Wiblin T, et al. Septic shock: an analysis of outcomes for patients with onset on hospital wards versus intensive care units. Crit Care Med. 1998;26(6):1020-1024.
  27. Sidlow R, Aggarwal V. "The MICU is full": one hospital's experience with an overflow triage policy. Jt Comm J Qual Patient Saf. 2011;37(10):456-460.
Journal of Hospital Medicine. 7(8):600-605.

Sepsis is a major cause of death in hospitalized patients.1-3 It is recommended that patients with sepsis be treated with early appropriate antibiotics, as well as early goal-directed therapy including fluid and vasopressor support, according to evidence-based guidelines.4-6 Following such evidence-based protocols and process-of-care interventions has been shown to be associated with better patient outcomes, including decreased mortality.7,8

Most patients with severe sepsis are cared for in intensive care units (ICUs). At times, there are no beds available in the primary ICU, and patients presenting to the hospital with sepsis are cared for in other units. Patients admitted to a non-preferred clinical inpatient setting are sometimes referred to as "overflow."9 ICUs can differ significantly in staffing patterns, equipment, and training.10 It is not known whether overflow sepsis patients receive similar care when admitted to non-primary ICUs.

At our hospital, we have an active bed management system led by the hospitalist division.11 This system includes protocols to place sepsis patients in the overflow ICU if the primary ICU is full. We hypothesized that process‐of‐care interventions would be more strictly adhered to when sepsis patients were in the primary ICU rather than in the overflow unit at our institution.

METHODS

Design

This was a retrospective cohort study of all patients with sepsis admitted to either the primary medical intensive care unit (MICU) or the overflow cardiac intensive care unit (CICU) at our hospital between July 2009 and February 2010. We reviewed the admission database starting with the month of February 2010 and proceeded backwards, month by month, until we reached the target number of patients.

Setting

The study was conducted at our 320-bed, university-affiliated academic medical center in Baltimore, MD. The MICU and the CICU are closed units that are located adjacent to each other and have 12 beds each. They are staffed by separate pools of attending physicians trained in pulmonary/critical care medicine and cardiovascular diseases, respectively, and no attending physician attends in both units. During the study period, there were 10 unique MICU and 14 unique CICU attending physicians; while most attending physicians covered the unit for 14 days, none was on service for more than 2 of the 2-week blocks (28 days). Each unit is additionally staffed by fellows of the respective specialties, and by internal medicine residents and interns belonging to the same residency program (who rotate through both ICUs). Residents and fellows are generally assigned to these ICUs for 4 continuous weeks. The assignment of specific attendings, fellows, and residents to either ICU is performed by individual division administrators on a rotational basis according to residency, fellowship, and faculty service requirements. The teams in each ICU function independently of each other. Patients requiring the assistance of the other specialty (pulmonary medicine or cardiology) receive guidance via an official consultation. Orders on patients in both ICUs are written by the residents using the same computerized provider order entry (CPOE) system under the supervision of their attending physicians. The nursing staff is exclusive to each ICU. The respiratory therapists spend time in both units. The nursing and respiratory therapy staff in both ICUs are similarly trained and certified, and the units have the same nurse-to-patient ratios.

Subjects

All patients admitted with a possible diagnosis of sepsis to either the MICU or CICU were identified by querying the hospital electronic triage database called etriage. This Web‐based application is used to admit patients to all the Medicine services at our hospital. We employed a wide case‐finding net using keywords that included pneumonia, sepsis, hypotension, high lactate, hypoxia, UTI (urinary tract infection)/urosepsis, SIRS (systemic inflammatory response syndrome), hypothermia, and respiratory failure. A total of 197 adult patients were identified. The charts and the electronic medical record (EMR) of these patients were then reviewed to determine the presence of a sepsis diagnosis using standard consensus criteria.12 Severe sepsis was defined by sepsis associated with organ dysfunction, hypoperfusion, or hypotension using criteria described by Bone et al.12
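The wide-net keyword screen described above can be sketched as simple substring matching over triage notes. This is a hypothetical illustration, not the actual etriage schema or query; the record structure and function name are assumptions, and (as in the study) flagged charts would still require hand review to confirm a sepsis diagnosis.

```python
# Illustrative sketch of the keyword-based case-finding step; the records
# and field names are made up, not the actual etriage database schema.
KEYWORDS = [
    "pneumonia", "sepsis", "hypotension", "high lactate", "hypoxia",
    "uti", "urosepsis", "sirs", "hypothermia", "respiratory failure",
]

def flag_possible_sepsis(triage_note: str) -> bool:
    """Return True if any screening keyword appears in the triage note."""
    text = triage_note.lower()
    return any(keyword in text for keyword in KEYWORDS)

admissions = [
    {"id": 1, "note": "Fever and hypotension, concern for sepsis"},
    {"id": 2, "note": "Elective knee replacement"},
    {"id": 3, "note": "Community-acquired pneumonia with hypoxia"},
]
# Deliberately over-inclusive: every flagged chart is then manually reviewed
# against consensus criteria, as described in the text.
candidates = [a for a in admissions if flag_possible_sepsis(a["note"])]
print([a["id"] for a in candidates])  # [1, 3]
```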

Fifty-six patients did not meet the criteria for sepsis and were excluded from the analysis, leaving 141 patients in the study. Because this was a pilot study, we did not have preliminary data on adherence to sepsis guidelines in overflow ICUs from which to calculate an appropriate sample size. However, in 2 recent studies of dedicated ICUs (Ferrer et al13 and Castellanos-Ortega et al14), the average adherence to a single measure, such as checking of lactate level, was 27% pre-intervention and 62% post-intervention. With an alpha level of 0.05 and 80% power, one would need 31 patients in each unit to detect a difference of this size. Although these data do not necessarily apply to overflow ICUs or to a combination of processes, we set a goal of at least 31 patients in each ICU.
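The sample-size figure above follows from the standard two-proportion formula. A minimal sketch, assuming the usual normal-approximation calculation with hardcoded standard-normal quantiles (1.96 for two-sided alpha = 0.05, 0.8416 for 80% power):

```python
import math

# Two-proportion sample-size sketch for the calculation described above:
# adherence of 27% vs 62%, two-sided alpha = 0.05, power = 80%.
def n_per_group(p1: float, p2: float,
                z_alpha: float = 1.96, z_beta: float = 0.8416) -> int:
    p_bar = (p1 + p2) / 2
    term1 = z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))       # under H0
    term2 = z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))  # under H1
    return math.ceil(((term1 + term2) / abs(p2 - p1)) ** 2)

print(n_per_group(0.27, 0.62))  # 31, matching the study's target per unit
```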

The study was approved by the Johns Hopkins Institutional Review Board. The need for informed consent was waived given the retrospective nature of the study.

Data Extraction Process and Procedures

The clinical data was extracted from the EMR and patient charts using a standardized data extraction instrument, modified from a case report form (CRF) used and validated in previous studies.15, 16 The following procedures were used for the data extraction:

  • The data extractors included 4 physicians and 1 research assistant and were trained and tested by a single expert in data review and extraction.

  • Lab data were transcribed directly from the EMR. Calculation of acute physiology and chronic health evaluation (APACHE II) scores was done using the website http://www.sfar.org/subores2/apache22.html (Société Française d'Anesthésie et de Réanimation). Sepsis-related organ failure assessment (SOFA) scores were calculated using the usual criteria.17

  • Delivery of specific treatments and interventions, including their timing, was extracted from the EMR.

  • The attending physicians' notes were used as the final source to assign diagnoses, such as the presence of acute lung injury and the site of infection, and to record interventions.

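To make the SOFA scoring step above concrete, here is a sketch of one of the six organ-system components (coagulation), using the platelet-count thresholds from the published score; the full SOFA score sums six such sub-scores (range 0-24), and this fragment is illustrative rather than a complete implementation.

```python
# One of the six SOFA sub-scores (coagulation), based on platelet count
# in 10^3/uL, per the published SOFA thresholds. A full score would add
# respiratory, liver, cardiovascular, CNS, and renal sub-scores.
def sofa_coagulation(platelets_k_per_uL: float) -> int:
    if platelets_k_per_uL < 20:
        return 4
    if platelets_k_per_uL < 50:
        return 3
    if platelets_k_per_uL < 100:
        return 2
    if platelets_k_per_uL < 150:
        return 1
    return 0

print(sofa_coagulation(85))  # 2
```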

Data Analysis

Analyses focused primarily on assessing whether patients were treated differently between the MICU and CICU. The primary exposure variables were the process-of-care measures. We specifically used measurement of central venous saturation, checking of lactate level, and administration of antibiotics within 60 minutes in patients with severe sepsis as our primary process-of-care measures.13 Continuous variables were reported as mean ± standard deviation, and Student's t tests were used to compare the 2 groups. Categorical data were expressed as frequency distributions, and chi-square tests were used to identify differences between the 2 groups. All tests were 2-tailed, with statistical significance set at 0.05. Statistical analysis was performed using SPSS version 19.0 (IBM, Armonk, NY).
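The categorical comparisons described above can be sketched with a standard chi-square test on a 2x2 contingency table. The counts below are taken from the DVT prophylaxis row reported later in Table 2 (MICU: 74/100, CICU: 20/41); the uncorrected test reproduces the reported P value, though whether SPSS applied a continuity correction here is an assumption.

```python
from scipy.stats import chi2_contingency

# 2x2 table: rows = ICU, columns = received vs did not receive
# DVT prophylaxis within 24 h (counts from Table 2).
table = [[74, 100 - 74],   # MICU
         [20, 41 - 20]]    # CICU
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(p, 3))  # ~0.004, in line with the value reported in Table 2
```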

To overcome data constraints, we created a dichotomous variable for each of the 3 primary processes‐of‐care (indicating receipt of process or not) and then combined them into 1 dichotomous variable indicating whether or not the patients with severe sepsis received all 3 primary processes‐of‐care. The combined variable was the key independent variable in the model.
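The combined indicator described above is simply the conjunction of the three process indicators. A minimal sketch with illustrative (not study) data:

```python
import numpy as np

# Three dichotomous process indicators (1 = received), illustrative values only.
cvo2_checked = np.array([1, 1, 0, 1])
lactate_checked = np.array([1, 1, 1, 1])
abx_within_60min = np.array([1, 0, 1, 1])

# 1 only when the patient received all 3 primary processes-of-care.
received_all_three = cvo2_checked & lactate_checked & abx_within_60min
print(received_all_three.tolist())  # [1, 0, 0, 1]
```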

We performed logistic regression analysis on patients with severe sepsis. The equation Logit[P(ICU type = CICU)] = α + β1(Combined) + β2(Age) describes the framework of the model, with ICU type as the dependent variable, the combined variable indicating receipt of all primary measures as the independent variable, and age as a covariate. Logistic regression was performed using JMP (SAS Institute, Inc, Cary, NC).
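The model form above can be sketched in the forward direction: given coefficients, the logit maps the combined-process indicator and age to a probability of CICU admission. The alpha, b1, and b2 values below are made-up placeholders, not the study's fitted coefficients (which were performed in JMP and not reported).

```python
import math

# Sketch of the logit model form only; alpha, b1, b2 are illustrative
# placeholders, not fitted values from the study.
def p_cicu(combined: int, age: float, alpha: float = -1.0,
           b1: float = -0.2, b2: float = 0.01) -> float:
    """P(ICU type = CICU) = 1 / (1 + exp(-(alpha + b1*Combined + b2*Age)))."""
    logit = alpha + b1 * combined + b2 * age
    return 1 / (1 + math.exp(-logit))

print(round(p_cicu(combined=1, age=70), 3))  # 0.378 with these placeholders
```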

We additionally performed a secondary analysis to explore possible predictors of mortality using a logistic regression model, with the event of death as the dependent variable, and age, APACHE II scores, combined processes‐of‐care, and ICU type included as independent variables.

RESULTS

There were 100 patients admitted to the MICU and 41 patients admitted to the CICU during the study period (Table 1). The majority of the patients were admitted to the ICUs directly from the emergency department (ED) (n = 129), with a small number of patients who were transferred from the Medicine floors (n = 12).

Table 1. Baseline Patient Characteristics for the 141 Patients Admitted to Intensive Care Units With Sepsis During the Study Period

Characteristic | MICU (N = 100) | CICU (N = 41) | P Value
Age in years, mean ± SD | 67 ± 14.8 | 72 ± 15.1 | 0.11
Female, n (%) | 57 (57) | 27 (66) | 0.33
Patients with chronic organ insufficiency, n (%) | 59 (59) | 22 (54) | 0.56
Patients with severe sepsis, n (%) | 88 (88) | 21 (51) | <0.001
Patients needing mechanical ventilation, n (%) | 43 (43) | 14 (34) | 0.33
APACHE II score, mean ± SD | 25.53 ± 9.11 | 24.37 ± 9.53 | 0.50
SOFA score on day 1, mean ± SD | 7.09 ± 3.55 | 6.71 ± 4.57 | 0.60
Patients with acute lung injury on presentation, n (%) | 8 (8) | 2 (5) | 0.50

Abbreviations: APACHE II, acute physiology and chronic health evaluation; CICU, cardiac intensive care unit; MICU, medical intensive care unit; SOFA, sepsis-related organ failure assessment.

There were no significant differences between the 2 study groups in terms of age, sex, primary site of infection, mean APACHE II score, SOFA scores on day 1, chronic organ insufficiency, immune suppression, or need for mechanical ventilation (Table 1). The most common site of infection was lung. There were significantly more patients with severe sepsis in the MICU (88% vs 51%, P <0.001).

Sepsis Process‐of‐Care Measures

There were no significant differences in the proportion of severe sepsis patients who had central venous saturation checked (MICU: 46% vs CICU: 41%, P = 0.67), lactate level checked (95% vs 100%, P = 0.37), or received antibiotics within 60 minutes of presentation (75% vs 69%, P = 0.59) (Table 2). Multiple other processes and treatments were delivered similarly, as shown in Table 2.

Table 2. ICU Treatments and Processes-of-Care for Patients With Sepsis During the Study Period

Primary Process-of-Care Measures (Severe Sepsis Patients) | MICU (N = 88) | CICU (N = 21) | P Value
Patients with central venous oxygen saturation checked, n (%)* | 31 (46) | 7 (41) | 0.67
Patients with lactate level checked, n (%)* | 58 (95) | 16 (100) | 0.37
Received antibiotics within 60 min, n (%)* | 46 (75) | 11 (69) | 0.59
Patients who had all 3 above processes and treatments, n (%) | 19 (22) | 4 (19) | 0.79
Received vasopressor, n (%) | 25 (28) | 8 (38) | 0.55

ICU Treatments and Processes (All Sepsis Patients) | MICU (N = 100) | CICU (N = 41) | P Value
Fluid balance 24 h after admission in liters, mean ± SD | 1.96 ± 2.42 | 1.42 ± 2.63 | 0.24
Patients who received stress dose steroids, n (%) | 11 (11) | 4 (10) | 0.83
Patients who received drotrecogin alfa, n (%) | 0 (0) | 0 (0) |
Morning glucose 24 h after admission in mg/dL, mean ± SD | 161 ± 111 | 144 ± 80 | 0.38
Received DVT prophylaxis within 24 h of admission, n (%) | 74 (74) | 20 (49) | 0.004
Received GI prophylaxis within 24 h of admission, n (%) | 68 (68) | 18 (44) | 0.012
Received RBC transfusion within 24 h of admission, n (%) | 8 (8) | 7 (17) | 0.11
Received renal replacement therapy, n (%) | 13 (13) | 3 (7) | 0.33
Received a spontaneous breathing trial within 24 h of admission, n (%)* | 4 (11) | 4 (33) | 0.07

Abbreviations: CICU, cardiac intensive care unit; DVT, deep vein thrombosis; GI, gastrointestinal; ICU, intensive care unit; MICU, medical intensive care unit; RBC, red blood cell; SD, standard deviation.
*Missing data cause percentages to differ from what might be expected if data were available for all patients.

Logistic regression analysis examining the receipt of all 3 primary processes-of-care while controlling for age revealed that the odds of being in one of the ICUs were not significantly different (P = 0.85). The secondary regression model revealed that only the APACHE II score (odds ratio [OR] = 1.21; confidence interval [CI], 1.12-1.31) was significantly associated with higher odds of mortality. ICU type [MICU vs CICU] (OR = 1.85; CI, 0.42-8.20), age (OR = 1.01; CI, 0.97-1.06), and combined processes-of-care (OR = 0.26; CI, 0.07-1.01) were not significantly associated with odds of mortality.
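The odds ratios and confidence intervals above follow from the fitted log-odds coefficients in the usual way: OR = exp(beta), 95% CI = exp(beta ± 1.96·SE). A minimal sketch, where the beta and standard error below are chosen to approximate the reported APACHE II result rather than taken from the study's actual model output:

```python
import math

# OR and 95% CI from a log-odds coefficient. beta and se are illustrative
# values back-solved to approximate the reported APACHE II OR (1.21).
beta, se = math.log(1.21), 0.04
odds_ratio = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)
ci_high = math.exp(beta + 1.96 * se)
print(round(odds_ratio, 2), round(ci_low, 2), round(ci_high, 2))  # 1.21 1.12 1.31
```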

A review of microbiologic sensitivities revealed a trend toward the cultured microorganism(s) being more likely to be resistant to the initial antibiotics administered in the MICU than in the CICU (15% vs 5%, respectively, P = 0.09).

Mechanical Ventilation Parameters

The majority of the ventilated patients were admitted to each ICU in assist control (AC) mode. There were no significant differences in mean tidal volume (TV) (P = 0.3), mean plateau pressure (P = 0.12), mean fraction of inspired oxygen (FiO2) (P = 0.95), or mean positive end-expiratory pressure (PEEP) (P = 0.98) across the 2 units, either at the time of ICU admission or 24 hours after ICU admission. Further comparison of tidal volumes and plateau pressures over 7 days of ICU stay revealed no significant differences between the 2 ICUs (P = 0.40 and 0.57, respectively, on day 7 of ICU admission). There was a trend toward fewer patients in the MICU receiving a spontaneous breathing trial within 24 hours of ICU admission (11% vs 33%, P = 0.07) (Table 2).

Patient Outcomes

There were no significant differences in ICU mortality (MICU 19% vs CICU 10%, P = 0.18), or hospital mortality (21% vs 15%, P = 0.38) across the units (Table 3). Mean ICU and hospital length of stay (LOS) and proportion of patients discharged home with unassisted breathing were similar (Table 3).

Table 3. Patient Outcomes for the 141 Patients Admitted to the Intensive Care Units With Sepsis During the Study Period

Patient Outcomes | MICU (N = 100) | CICU (N = 41) | P Value
ICU mortality, n (%) | 19 (19) | 4 (10) | 0.18
Hospital mortality, n (%) | 21 (21) | 6 (15) | 0.38
Discharged home with unassisted breathing, n (%) | 33 (33) | 19 (46) | 0.14
ICU length of stay in days, mean ± SD | 4.78 ± 6.24 | 4.92 ± 6.32 | 0.97
Hospital length of stay in days, mean ± SD | 9.68 ± 9.22 | 9.73 ± 9.33 | 0.98

Abbreviations: CICU, cardiac intensive care unit; ICU, intensive care unit; MICU, medical intensive care unit; SD, standard deviation.

DISCUSSION

Since sepsis is more commonly treated in the medical ICU and some data suggests that specialty ICUs may be better at providing desired care,18, 19 we believed that patients treated in the MICU would be more likely to receive guideline‐concordant care. The study refutes our a priori hypothesis and reveals that evidence‐based processes‐of‐care associated with improved outcomes for sepsis are similarly implemented at our institution in the primary and overflow ICU. These findings are important, as ICU bed availability is a frequent problem and many hospitals overflow patients to non‐primary ICUs.9, 20

The observed equivalence in the care delivered may be a function of the relatively high number of patients with sepsis treated in the overflow unit, thereby giving the delivery teams enough experience to provide the desired care. An alternative explanation could be that the residents in the CICU brought with them the experience gained from having previously trained in the MICU. Although some of the care processes for sepsis patients are influenced by the CPOE (with embedded order sets and protocols), it is unlikely that CPOE can fully account for the similarity in care, because many processes and therapies (such as use of steroids, amount of fluid delivered in the first 24 hours, packed red blood cell [PRBC] transfusion, and spontaneous breathing trials) are not embedded within order sets.

The significant difference noted in the areas of deep vein thrombosis (DVT) and gastrointestinal (GI) prophylaxis within 24 hours of ICU admission was unexpected. These preventive therapies are included in initial order sets in the CPOE, which prompt physicians to order them as standard-of-care. With respect to DVT prophylaxis, we suspect that some of the difference might be attributable to specific contraindications to its use, which could have been more common in one of the units. There were more patients in the MICU on mechanical ventilation (although not statistically significantly so) and with severe sepsis (statistically significant) at the time of admission, which might have contributed to the difference noted in use of GI prophylaxis. It is also plausible that these differences might have disappeared if they were reassessed beyond 24 hours into the ICU admission. We cannot rule out the presence of unit- and physician-level differences that contributed to this. Likewise, there was an unexpected trend toward significance wherein more patients in the CICU had spontaneous breathing trials within 24 hours of admission. This might also be explained by the higher number of patients with severe sepsis in the MICU (preempting any weaning attempts). These caveats aside, it is reassuring that, at our institution, admitting septic patients to the first available ICU bed does not adversely affect important processes-of-care.

One might ask whether this study's data should reassure other sites that are boarding septic patients in non-primary ICUs. Irrespective of the number of patients studied or the degree of statistical significance of the associations, an observational study design cannot prove that boarding septic patients in non-primary ICUs is either safe or unsafe. However, we hope that readers reflect on, and take inventory of, systems issues that may differ between units, with an eye toward eliminating variation so that all units managing septic patients are primed to deliver guideline-concordant care. Other hospitals that use CPOE with sepsis order sets, have protocols for sepsis care, and train nursing and respiratory therapy staff to meet high standards might be pleased to see that the patients in our study received comparable, high-quality care across the 2 units. While our data suggest that boarding patients in overflow units may be safe, these findings would need to be replicated at other sites using prospective designs to prove safety.

Length of emergency room stay prior to admission is associated with higher mortality rates.21-23 At many hospitals, critical care beds are a scarce resource, such that most hospitals have a policy for the triage of patients to critical care beds.24,25 Lundberg and colleagues' study demonstrated that patients who developed septic shock on the medical wards experienced delays in the receipt of intravenous fluids and inotropic agents and in transfer to a critical care setting.26 Thus, rather than waiting in the ED or on the medical service for an MICU bed to become available, it may be wisest to admit a critically sick septic patient to the first available ICU bed, even an overflow ICU. In a recent study by Sidlow and Aggarwal, 1104 patients discharged from the coronary care unit (CCU) with a non-cardiac primary diagnosis were compared to patients admitted to the MICU in the same hospital.27 The study found no differences in patient mortality, 30-day readmission rate, hospital LOS, ICU LOS, or the safety outcomes of ventilator-associated pneumonia and catheter-associated bloodstream infections between ICUs. However, their study did not examine the processes-of-care delivered in the primary ICU versus the overflow unit, and did not validate the primary diagnoses of patients admitted to the ICU.

Several limitations of this study should be considered. First, this study was conducted at a single center. Second, we used a retrospective study design; however, a prospective study randomizing patients to 1 of the 2 units would likely never be possible. Third, the relatively small number of patients limited the power of the study to detect mortality differences between the units. However, this was a pilot study focused on processes of care as opposed to clinical outcomes. Fourth, it is possible that we did not capture every patient with sepsis with our keyword search. Our use of a previously validated screening process should have limited the number of missed cases.15,16 Fifth, although the 2 ICUs have exclusive nursing staff and attending physicians, the housestaff and respiratory therapists do rotate between the 2 ICUs and place orders in the common CPOE. The rotating housestaff may certainly represent a source of confounding, but the large number (>30) of housestaff spread evenly over the study period minimizes the potential for any trainee to be responsible for a large proportion of observed practice. Sixth, ICU attendings are the physicians of record and could influence the results. Because no attending physician was on service for more than 4 weeks during the study period, and patients were spread equally over this same time, concerns about clustering and the biases this may have created should be minimal but cannot be ruled out. Seventh, some interventions and processes, such as antibiotic administration and measurement of lactate, may have been initiated in the ED, thereby decreasing the potential for differences between the groups. Additionally, we cannot rule out the possibility that factors other than bed availability drove the admission process (we found that the relative proportion of patients admitted to the overflow ICU during hours of ambulance diversion was similar to overflow ICU admissions during non-ambulance-diversion hours).
It is possible that selection bias by the hospitalist assigning patients to specific ICUs influenced triage decisions, although all triaging doctors go through the same training in active bed management.11 While more patients admitted to the MICU had severe sepsis, there were no differences between the groups in APACHE II or SOFA scores. However, we cannot rule out other residual confounders. Finally, in a small number of cases (4/41, 10%), the CICU team consulted the MICU attending for assistance. This input had the potential to reduce disparities in care between the units.

Overflowing patients to non‐primary ICUs occurs in many hospitals. Our study demonstrates that sepsis treatment for overflow patients may be similar to that received in the primary ICU. While a large multicentered and randomized trial could determine whether significant management and outcome differences exist between primary and overflow ICUs, feasibility concerns make it unlikely that such a study will ever be conducted.

Acknowledgements

Disclosure: Dr Wright is a Miller‐Coulson Family Scholar and this work is supported by the Miller‐Coulson family through the Johns Hopkins Center for Innovative Medicine. Dr Sevransky was supported with a grant from National Institute of General Medical Sciences, NIGMS K‐23‐1399. All other authors disclose no relevant or financial conflicts of interest.

Sepsis is a major cause of death in hospitalized patients.13 It is recommended that patients with sepsis be treated with early appropriate antibiotics, as well as early goal‐directed therapy including fluid and vasopressor support according to evidence‐based guidelines.46 Following such evidence‐based protocols and process‐of‐care interventions has been shown to be associated with better patient outcomes, including decreased mortality.7, 8

Most patients with severe sepsis are cared for in intensive care units (ICUs). At times, there are no beds available in the primary ICU and patients presenting to the hospital with sepsis are cared for in other units. Patients admitted to a non‐preferred clinical inpatient setting are sometimes referred to as overflow.9 ICUs can differ significantly in staffing patterns, equipment, and training.10 It is not known if overflow sepsis patients receive similar care when admitted to non‐primary ICUs.

At our hospital, we have an active bed management system led by the hospitalist division.11 This system includes protocols to place sepsis patients in the overflow ICU if the primary ICU is full. We hypothesized that process‐of‐care interventions would be more strictly adhered to when sepsis patients were in the primary ICU rather than in the overflow unit at our institution.

METHODS

Design

This was a retrospective cohort study of all patients with sepsis admitted to either the primary medical intensive care unit (MICU) or the overflow cardiac intensive care unit (CICU) at our hospital between July 2009 and February 2010. We reviewed the admission database starting with the month of February 2010 and proceeded backwards, month by month, until we reached the target number of patients.

Setting

The study was conducted at our 320‐bed, university‐affiliated academic medical center in Baltimore, MD. The MICU and the CICU are closed units that are located adjacent to each other and have 12 beds each. They are staffed by separate pools of attending physicians trained in pulmonary/critical care medicine and cardiovascular diseases, respectively, and no attending physician attends in both units. During the study period, there were 10 unique MICU and 14 unique CICU attending physicians; while most attending physicians covered the unit for 14 days, none of the physicians were on service more than 2 of the 2‐week blocks (28 days). Each unit is additionally staffed by fellows of the respective specialties, and internal medicine residents and interns belonging to the same residency program (who rotate through both ICUs). Residents and fellows are generally assigned to these ICUs for 4 continuous weeks. The assignment of specific attendings, fellows, and residents to either ICU is performed by individual division administrators on a rotational basis based on residency, fellowship, and faculty service requirements. The teams in each ICU function independently of each other. Clinical care of patients requiring the assistance of the other specialty (pulmonary medicine or cardiology) have guidance conferred via an official consultation. Orders on patients in both ICUs are written by the residents using the same computerized order entry system (CPOE) under the supervision of their attending physicians. The nursing staff is exclusive to each ICU. The respiratory therapists spend time in both units. The nursing and respiratory therapy staff in both ICUs are similarly trained and certified, and have the same patient‐to‐nursing ratios.

Subjects

All patients admitted with a possible diagnosis of sepsis to either the MICU or CICU were identified by querying the hospital electronic triage database called etriage. This Web‐based application is used to admit patients to all the Medicine services at our hospital. We employed a wide case‐finding net using keywords that included pneumonia, sepsis, hypotension, high lactate, hypoxia, UTI (urinary tract infection)/urosepsis, SIRS (systemic inflammatory response syndrome), hypothermia, and respiratory failure. A total of 197 adult patients were identified. The charts and the electronic medical record (EMR) of these patients were then reviewed to determine the presence of a sepsis diagnosis using standard consensus criteria.12 Severe sepsis was defined by sepsis associated with organ dysfunction, hypoperfusion, or hypotension using criteria described by Bone et al.12

Fifty‐six did not meet the criteria for sepsis and were excluded from the analysis. A total of 141 patients were included in the study. This being a pilot study, we did not have any preliminary data regarding adherence to sepsis guidelines in overflow ICUs to calculate appropriate sample size. However, in 2 recent studies of dedicated ICUs (Ferrer et al13 and Castellanos‐Ortega et al14), the averaged adherence to a single measure like checking of lactate level was 27% pre‐intervention and 62% post‐intervention. With alpha level 0.05 and 80% power, one would need 31 patients in each unit to detect such differences with respect to this intervention. Although this data does not necessarily apply to overflow ICUs or for combination of processes, we used a goal of having at least 31 patients in each ICU.

The study was approved by the Johns Hopkins Institutional Review Board. The need for informed consent was waived given the retrospective nature of the study.

Data Extraction Process and Procedures

Clinical data were extracted from the EMR and patient charts using a standardized data extraction instrument, modified from a case report form (CRF) used and validated in previous studies.15, 16 The following procedures were used for the data extraction:

  • The data extractors (4 physicians and 1 research assistant) were trained and tested by a single expert in data review and extraction.

  • Laboratory data were transcribed directly from the EMR. Acute physiology and chronic health evaluation (APACHE II) scores were calculated using the website http://www.sfar.org/subores2/apache22.html (Société Française d'Anesthésie et de Réanimation). Sepsis‐related organ failure assessment (SOFA) scores were calculated using the usual criteria.17

  • Delivery of specific treatments and interventions, including their timing, was extracted from the EMR.

  • The attending physicians' notes were used as the final source to assign diagnoses (such as presence of acute lung injury and site of infection) and to record interventions.

 

Data Analysis

Analyses focused primarily on assessing whether patients were treated differently between the MICU and CICU. The primary exposure variables were the process‐of‐care measures. We specifically used measurement of central venous saturation, checking of lactate level, and administration of antibiotics within 60 minutes in patients with severe sepsis as our primary process‐of‐care measures.13 Continuous variables were reported as mean ± standard deviation, and Student's t tests were used to compare the 2 groups. Categorical data were expressed as frequency distributions, and chi‐square tests were used to identify differences between the 2 groups. All tests were 2‐tailed with statistical significance set at 0.05. Statistical analysis was performed using SPSS version 19.0 (IBM, Armonk, NY).
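As a concrete illustration of the chi-square comparisons above, the Pearson statistic for a 2x2 table can be computed directly; a minimal sketch (the function name and table layout are ours, not from the study's analysis code), checked against the severe-sepsis counts reported in Table 1:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df) for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Severe sepsis: MICU 88 of 100, CICU 21 of 41 (Table 1)
stat = chi2_2x2(88, 12, 21, 20)
print(round(stat, 2))  # prints 22.42, well above 10.83 (the 1-df cutoff for P = 0.001)
```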

To overcome data constraints, we created a dichotomous variable for each of the 3 primary processes‐of‐care (indicating receipt of process or not) and then combined them into 1 dichotomous variable indicating whether or not the patients with severe sepsis received all 3 primary processes‐of‐care. The combined variable was the key independent variable in the model.
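The construction of the combined indicator described above can be sketched as follows (the dictionary field names are hypothetical, chosen here for illustration):

```python
# The 3 primary processes-of-care (names are our own, not the study's variable names)
PRIMARY_PROCESSES = ("scvo2_checked", "lactate_checked", "abx_within_60min")

def received_all_three(patient):
    """1 if the severe-sepsis patient received all 3 primary processes-of-care, else 0."""
    return int(all(patient[k] for k in PRIMARY_PROCESSES))

pt = {"scvo2_checked": True, "lactate_checked": True, "abx_within_60min": False}
print(received_all_three(pt))  # prints 0
```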

We performed logistic regression analysis on patients with severe sepsis. The equation Logit[P(ICU type = CICU)] = β0 + β1·Combined + β2·Age describes the framework of the model, with ICU type as the dependent variable and the combined variable (receipt of all primary measures) as the independent variable, controlling for age. Logistic regression was performed using JMP (SAS Institute, Inc, Cary, NC).
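The logistic model can be read off as a predicted probability via the inverse logit; a minimal sketch with placeholder coefficients (the study's fitted estimates are not reported, so the defaults below are illustrative only):

```python
from math import exp

def p_cicu(combined, age, b0=0.0, b1=0.0, b2=0.0):
    """Probability that ICU type = CICU under logit P = b0 + b1*combined + b2*age.
    Coefficients default to 0 as placeholders; they are NOT the study's estimates."""
    z = b0 + b1 * combined + b2 * age
    return 1 / (1 + exp(-z))

# With all coefficients zero the model is uninformative: P = 0.5 for any patient.
print(p_cicu(combined=1, age=70))  # prints 0.5
```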

We additionally performed a secondary analysis to explore possible predictors of mortality using a logistic regression model, with the event of death as the dependent variable, and age, APACHE II scores, combined processes‐of‐care, and ICU type included as independent variables.

RESULTS

There were 100 patients admitted to the MICU and 41 patients admitted to the CICU during the study period (Table 1). The majority of the patients were admitted to the ICUs directly from the emergency department (ED) (n = 129), with a small number of patients who were transferred from the Medicine floors (n = 12).

Table 1. Baseline Patient Characteristics for the 141 Patients Admitted to Intensive Care Units With Sepsis During the Study Period

| Characteristic | MICU (N = 100) | CICU (N = 41) | P Value |
| Age in years, mean ± SD | 67 ± 14.8 | 72 ± 15.1 | 0.11 |
| Female, n (%) | 57 (57) | 27 (66) | 0.33 |
| Patients with chronic organ insufficiency, n (%) | 59 (59) | 22 (54) | 0.56 |
| Patients with severe sepsis, n (%) | 88 (88) | 21 (51) | <0.001 |
| Patients needing mechanical ventilation, n (%) | 43 (43) | 14 (34) | 0.33 |
| APACHE II score, mean ± SD | 25.53 ± 9.11 | 24.37 ± 9.53 | 0.50 |
| SOFA score on day 1, mean ± SD | 7.09 ± 3.55 | 6.71 ± 4.57 | 0.60 |
| Patients with acute lung injury on presentation, n (%) | 8 (8) | 2 (5) | 0.50 |

Abbreviations: APACHE II, acute physiology and chronic health evaluation; CICU, cardiac intensive care unit; MICU, medical intensive care unit; SD, standard deviation; SOFA, sepsis‐related organ failure assessment.

There were no significant differences between the 2 study groups in age, sex, primary site of infection, mean APACHE II score, SOFA scores on day 1, chronic organ insufficiency, immune suppression, or need for mechanical ventilation (Table 1). The most common site of infection was the lung. There were significantly more patients with severe sepsis in the MICU (88% vs 51%, P < 0.001).

Sepsis Process‐of‐Care Measures

There were no significant differences in the proportion of severe sepsis patients who had central venous saturation checked (MICU: 46% vs CICU: 41%, P = 0.67), lactate level checked (95% vs 100%, P = 0.37), or received antibiotics within 60 minutes of presentation (75% vs 69%, P = 0.59) (Table 2). Multiple other processes and treatments were delivered similarly, as shown in Table 2.

Table 2. ICU Treatments and Processes‐of‐Care for Patients With Sepsis During the Study Period

| Primary Process‐of‐Care Measures (Severe Sepsis Patients) | MICU (N = 88) | CICU (N = 21) | P Value |
| Patients with central venous oxygen saturation checked, n (%)* | 31 (46) | 7 (41) | 0.67 |
| Patients with lactate level checked, n (%)* | 58 (95) | 16 (100) | 0.37 |
| Received antibiotics within 60 min, n (%)* | 46 (75) | 11 (69) | 0.59 |
| Patients who had all 3 above processes and treatments, n (%) | 19 (22) | 4 (19) | 0.79 |
| Received vasopressor, n (%) | 25 (28) | 8 (38) | 0.55 |
| ICU Treatments and Processes (All Sepsis Patients) | (N = 100) | (N = 41) | |
| Fluid balance 24 h after admission in liters, mean ± SD | 1.96 ± 2.42 | 1.42 ± 2.63 | 0.24 |
| Patients who received stress dose steroids, n (%) | 11 (11) | 4 (10) | 0.83 |
| Patients who received drotrecogin alfa, n (%) | 0 (0) | 0 (0) | |
| Morning glucose 24 h after admission in mg/dL, mean ± SD | 161 ± 111 | 144 ± 80 | 0.38 |
| Received DVT prophylaxis within 24 h of admission, n (%) | 74 (74) | 20 (49) | 0.004 |
| Received GI prophylaxis within 24 h of admission, n (%) | 68 (68) | 18 (44) | 0.012 |
| Received RBC transfusion within 24 h of admission, n (%) | 8 (8) | 7 (17) | 0.11 |
| Received renal replacement therapy, n (%) | 13 (13) | 3 (7) | 0.33 |
| Received a spontaneous breathing trial within 24 h of admission, n (%)* | 4 (11) | 4 (33) | 0.07 |

Abbreviations: CICU, cardiac intensive care unit; DVT, deep vein thrombosis; GI, gastrointestinal; ICU, intensive care unit; MICU, medical intensive care unit; RBC, red blood cell; SD, standard deviation. * Missing data cause percentages to differ from what might be expected if data were available for all patients.

Logistic regression analysis examining the receipt of all 3 primary processes‐of‐care while controlling for age revealed that the odds of being in either ICU were not significantly different (P = 0.85). The secondary analysis regression models revealed that only the APACHE II score (odds ratio [OR] = 1.21; confidence interval [CI], 1.12-1.31) was significantly associated with higher odds of mortality. ICU type (MICU vs CICU) (OR = 1.85; CI, 0.42-8.20), age (OR = 1.01; CI, 0.97-1.06), and combined processes‐of‐care (OR = 0.26; CI, 0.07-1.01) did not have significant associations with odds of mortality.

A review of microbiologic sensitivities revealed a trend toward the cultured microorganism(s) being more likely to be resistant to the initial antibiotics administered in the MICU than in the CICU (15% vs 5%, respectively, P = 0.09).

Mechanical Ventilation Parameters

The majority of the ventilated patients were admitted to each ICU in assist control (AC) mode. There were no significant differences in mean tidal volume (TV) (P = 0.3), mean plateau pressure (P = 0.12), mean fraction of inspired oxygen (FiO2) (P = 0.95), or mean positive end‐expiratory pressure (PEEP) (P = 0.98) between the 2 units, either at the time of ICU admission or 24 hours after ICU admission. Further comparison of tidal volumes and plateau pressures over 7 days of ICU stay revealed no significant differences between the 2 ICUs (P = 0.40 and 0.57, respectively, on day 7 of ICU admission). There was a trend toward fewer patients in the MICU receiving a spontaneous breathing trial within 24 hours of ICU admission (11% vs 33%, P = 0.07) (Table 2).

Patient Outcomes

There were no significant differences in ICU mortality (MICU 19% vs CICU 10%, P = 0.18), or hospital mortality (21% vs 15%, P = 0.38) across the units (Table 3). Mean ICU and hospital length of stay (LOS) and proportion of patients discharged home with unassisted breathing were similar (Table 3).

Table 3. Patient Outcomes for the 141 Patients Admitted to the Intensive Care Units With Sepsis During the Study Period

| Patient Outcomes | MICU (N = 100) | CICU (N = 41) | P Value |
| ICU mortality, n (%) | 19 (19) | 4 (10) | 0.18 |
| Hospital mortality, n (%) | 21 (21) | 6 (15) | 0.38 |
| Discharged home with unassisted breathing, n (%) | 33 (33) | 19 (46) | 0.14 |
| ICU length of stay in days, mean ± SD | 4.78 ± 6.24 | 4.92 ± 6.32 | 0.97 |
| Hospital length of stay in days, mean ± SD | 9.68 ± 9.22 | 9.73 ± 9.33 | 0.98 |

Abbreviations: CICU, cardiac intensive care unit; ICU, intensive care unit; MICU, medical intensive care unit; SD, standard deviation.

DISCUSSION

Since sepsis is more commonly treated in the medical ICU and some data suggest that specialty ICUs may be better at providing desired care,18, 19 we believed that patients treated in the MICU would be more likely to receive guideline‐concordant care. The study refutes our a priori hypothesis and reveals that evidence‐based processes‐of‐care associated with improved outcomes for sepsis are implemented similarly at our institution in the primary and overflow ICUs. These findings are important, as ICU bed availability is a frequent problem and many hospitals overflow patients to non‐primary ICUs.9, 20

The observed equivalence in the care delivered may be a function of the relatively high number of patients with sepsis treated in the overflow unit, giving the treating teams enough experience to provide the desired care. An alternative explanation is that the residents in the CICU brought with them experience from having previously trained in the MICU. Although some of the care processes for sepsis patients are influenced by the CPOE (with embedded order sets and protocols), it is unlikely that the CPOE can fully account for the similarity in care, because many processes and therapies (such as use of steroids, amount of fluid delivered in the first 24 hours, packed red blood cell [PRBC] transfusion, and spontaneous breathing trials) are not embedded within order sets.

The significant difference noted in the areas of deep vein thrombosis (DVT) and gastrointestinal (GI) prophylaxis within 24 hours of ICU admission was unexpected. These preventive therapies are included in initial order sets in the CPOE, which prompt physicians to order them as standard‐of‐care. With respect to DVT prophylaxis, we suspect that some of the difference might be attributable to specific contraindications to its use, which could have been more common in one of the units. There were more patients in MICU on mechanical ventilation (although not statistically significant) and with severe sepsis (statistically significant) at time of admission, which might have contributed to the difference noted in use of GI prophylaxis. It is also plausible that these differences might have disappeared if they were reassessed beyond 24 hours into the ICU admission. We cannot rule out the presence of unit‐ and physician‐level differences that contributed to this. Likewise, there was an unexpected trend towards significance, wherein more patients in CICU had spontaneous breathing trials within 24 hours of admission. This might also be explained by the higher number of patients with severe sepsis in the MICU (preempting any weaning attempts). These caveats aside, it is reassuring that, at our institution, admitting septic patients to the first available ICU bed does not adversely affect important processes‐of‐care.

One might ask whether this study's data should reassure other sites that board septic patients in non‐primary ICUs. Irrespective of the number of patients studied or the degree of statistical significance of the associations, an observational study design cannot prove that boarding septic patients in non‐primary ICUs is either safe or unsafe. However, we hope that readers reflect on, and take inventory of, systems issues that may differ between units, with an eye toward eliminating variation such that all units managing septic patients are primed to deliver guideline‐concordant care. Other hospitals that use CPOE with sepsis order sets, have protocols for sepsis care, and train nursing and respiratory therapy staff to meet high standards might be pleased to see that the patients in our study received comparable, high‐quality care across the 2 units. While our data suggest that boarding patients in overflow units may be safe, these findings would need to be replicated at other sites using prospective designs to prove safety.

Length of emergency room stay prior to admission is associated with higher mortality rates.21-23 At many hospitals, critical care beds are a scarce resource, and most hospitals have a policy for the triage of patients to critical care beds.24, 25 Lundberg and colleagues demonstrated that patients who developed septic shock on the medical wards experienced delays in the receipt of intravenous fluids and inotropic agents and in transfer to a critical care setting.26 Thus, rather than waiting in the ED or on the medical service for an MICU bed to become available, it may be wisest to admit a critically ill septic patient to the first available ICU bed, even an overflow ICU. In a recent study by Sidlow and Aggarwal, 1104 patients discharged from the coronary care unit (CCU) with a non‐cardiac primary diagnosis were compared to patients admitted to the MICU in the same hospital.27 The study found no differences between ICUs in patient mortality, 30‐day readmission rate, hospital LOS, ICU LOS, or the safety outcomes of ventilator‐associated pneumonia and catheter‐associated bloodstream infections. However, their study did not examine the processes‐of‐care delivered in the primary ICU vs the overflow unit, and did not validate the primary diagnoses of patients admitted to the ICU.

Several limitations of this study should be considered. First, this study was conducted at a single center. Second, we used a retrospective study design; however, a prospective study randomizing patients to 1 of the 2 units would likely never be possible. Third, the relatively small number of patients limited the power of the study to detect mortality differences between the units. However, this was a pilot study focused on processes of care as opposed to clinical outcomes. Fourth, it is possible that we did not capture every patient with sepsis with our keyword search. Our use of a previously validated screening process should have limited the number of missed cases.15, 16 Fifth, although the 2 ICUs have exclusive nursing staff and attending physicians, the housestaff and respiratory therapists do rotate between the 2 ICUs and place orders in the common CPOE. The rotating housestaff may represent a source of confounding, but the large number (>30) of housestaff spread evenly over the study period minimizes the potential for any trainee to be responsible for a large proportion of observed practice. Sixth, ICU attendings are the physicians of record and could influence the results. Because no attending physician was on service for more than 4 weeks during the study period, and patients were spread evenly over this same time, concerns about clustering and the biases it may have created should be minimal but cannot be ruled out. Seventh, some interventions and processes, such as antibiotic administration and measurement of lactate, may have been initiated in the ED, thereby decreasing the potential for differences between the groups. Additionally, we cannot rule out the possibility that factors other than bed availability drove the admission process (we found that the relative proportion of patients admitted to the overflow ICU during hours of ambulance diversion was similar to that during non‐ambulance diversion hours).
It is possible that selection bias by the hospitalists assigning patients to specific ICUs influenced triage decisions, although all triaging doctors go through the same process of training in active bed management.11 While more patients admitted to the MICU had severe sepsis, there were no differences between groups in APACHE II or SOFA scores. However, we cannot rule out other residual confounders. Finally, in a small number of cases (4/41, 10%), the CICU team consulted the MICU attending for assistance. This input had the potential to reduce disparities in care between the units.

Overflowing patients to non‐primary ICUs occurs in many hospitals. Our study demonstrates that sepsis treatment for overflow patients may be similar to that received in the primary ICU. While a large multicentered and randomized trial could determine whether significant management and outcome differences exist between primary and overflow ICUs, feasibility concerns make it unlikely that such a study will ever be conducted.

Acknowledgements

Disclosure: Dr Wright is a Miller‐Coulson Family Scholar and this work is supported by the Miller‐Coulson family through the Johns Hopkins Center for Innovative Medicine. Dr Sevransky was supported with a grant from National Institute of General Medical Sciences, NIGMS K‐23‐1399. All other authors disclose no relevant or financial conflicts of interest.

References
  1. Angus DC, Linde‐Zwirble WT, Lidicker J, Clermont G, Carcillo J, Pinsky MR. Epidemiology of severe sepsis in the United States: analysis of incidence, outcome, and associated costs of care. Crit Care Med. 2001;29(7):1303-1310.
  2. Kumar G, Kumar N, Taneja A, et al; for the Milwaukee Initiative in Critical Care Outcomes Research (MICCOR) Group of Investigators. Nationwide trends of severe sepsis in the twenty first century (2000-2007). Chest. 2011;140(5):1223-1231.
  3. Dombrovskiy VY, Martin AA, Sunderram J, Paz HL. Rapid increase in hospitalization and mortality rates for severe sepsis in the United States: a trend analysis from 1993 to 2003. Crit Care Med. 2007;35(5):1244-1250.
  4. Dellinger RP, Levy MM, Carlet JM, et al. Surviving sepsis campaign: international guidelines for management of severe sepsis and septic shock: 2008. Crit Care Med. 2008;36(1):296-327.
  5. Jones AE, Shapiro NI, Trzeciak S, et al. Lactate clearance vs central venous oxygen saturation as goals of early sepsis therapy: a randomized clinical trial. JAMA. 2010;303(8):739-746.
  6. Rivers E, Nguyen B, Havstad S, et al. Early goal‐directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med. 2001;345(19):1368-1377.
  7. Nguyen HB, Corbett SW, Steele R, et al. Implementation of a bundle of quality indicators for the early management of severe sepsis and septic shock is associated with decreased mortality. Crit Care Med. 2007;35(4):1105-1112.
  8. Kumar A, Zarychanski R, Light B, et al. Early combination antibiotic therapy yields improved survival compared with monotherapy in septic shock: a propensity‐matched analysis. Crit Care Med. 2010;38(9):1773-1785.
  9. Johannes MS. A new dimension of the PACU: the dilemma of the ICU overflow patient. J Post Anesth Nurs. 1994;9(5):297-300.
  10. Groeger JS, Strosberg MA, Halpern NA, et al. Descriptive analysis of critical care units in the United States. Crit Care Med. 1992;20(6):846-863.
  11. Howell E, Bessman E, Kravet S, Kolodner K, Marshall R, Wright S. Active bed management by hospitalists and emergency department throughput. Ann Intern Med. 2008;149(11):804-811.
  12. Bone RC, Balk RA, Cerra FB, et al. Definitions for sepsis and organ failure and guidelines for the use of innovative therapies in sepsis. The ACCP/SCCM Consensus Conference Committee, American College of Chest Physicians/Society of Critical Care Medicine. Chest. 1992;101(6):1644-1655.
  13. Ferrer R, Artigas A, Levy MM, et al. Improvement in process of care and outcome after a multicenter severe sepsis educational program in Spain. JAMA. 2008;299(19):2294-2303.
  14. Castellanos‐Ortega A, Suberviola B, Garcia‐Astudillo LA, et al. Impact of the surviving sepsis campaign protocols on hospital length of stay and mortality in septic shock patients: results of a three‐year follow‐up quasi‐experimental study. Crit Care Med. 2010;38(4):1036-1043.
  15. Needham DM, Dennison CR, Dowdy DW, et al. Study protocol: the improving care of acute lung injury patients (ICAP) study. Crit Care. 2006;10(1):R9.
  16. Ali N, Gutteridge D, Shahul S, Checkley W, Sevransky J, Martin G. Critical illness outcome study: an observational study of protocols and mortality in intensive care units. Open Access J Clin Trials. 2011;3(September):55-65.
  17. Vincent JL, Moreno R, Takala J, et al. The SOFA (sepsis‐related organ failure assessment) score to describe organ dysfunction/failure: on behalf of the Working Group on Sepsis‐Related Problems of the European Society of Intensive Care Medicine. Intensive Care Med. 1996;22(7):707-710.
  18. Pronovost PJ, Angus DC, Dorman T, Robinson KA, Dremsizov TT, Young TL. Physician staffing patterns and clinical outcomes in critically ill patients: a systematic review. JAMA. 2002;288(17):2151-2162.
  19. Fuchs RJ, Berenholtz SM, Dorman T. Do intensivists in ICU improve outcome? Best Pract Res Clin Anaesthesiol. 2005;19(1):125-135.
  20. Lindsay M. Is the postanesthesia care unit becoming an intensive care unit? J Perianesth Nurs. 1999;14(2):73-77.
  21. Chalfin DB, Trzeciak S, Likourezos A, Baumann BM, Dellinger RP; for the DELAY‐ED Study Group. Impact of delayed transfer of critically ill patients from the emergency department to the intensive care unit. Crit Care Med. 2007;35(6):1477-1483.
  22. Renaud B, Santin A, Coma E, et al. Association between timing of intensive care unit admission and outcomes for emergency department patients with community‐acquired pneumonia. Crit Care Med. 2009;37(11):2867-2874.
  23. Shen YC, Hsia RY. Association between ambulance diversion and survival among patients with acute myocardial infarction. JAMA. 2011;305(23):2440-2447.
  24. Teres D. Civilian triage in the intensive care unit: the ritual of the last bed. Crit Care Med. 1993;21(4):598-606.
  25. Sinuff T, Kahnamoui K, Cook DJ, Luce JM, Levy MM; for the Values Ethics and Rationing in Critical Care Task Force. Rationing critical care beds: a systematic review. Crit Care Med. 2004;32(7):1588-1597.
  26. Lundberg JS, Perl TM, Wiblin T, et al. Septic shock: an analysis of outcomes for patients with onset on hospital wards versus intensive care units. Crit Care Med. 1998;26(6):1020-1024.
  27. Sidlow R, Aggarwal V. "The MICU is full": one hospital's experience with an overflow triage policy. Jt Comm J Qual Patient Saf. 2011;37(10):456-460.
Issue
Journal of Hospital Medicine - 7(8)
Page Number
600-605
Display Headline
Does sepsis treatment differ between primary and overflow intensive care units?

Copyright © 2012 Society of Hospital Medicine

Correspondence Location
Johns Hopkins Bayview Medical Center, 5200 Eastern Ave, Baltimore, MD 21224

Learning Needs of Physician Assistants

Article Type
Changed
Mon, 05/22/2017 - 19:38
Display Headline
Learning needs of physician assistants working in hospital medicine

Physician assistants (PAs) have rapidly become an integral component of the United States health care delivery system, including in Hospital Medicine, the fastest growing medical field in the United States.1, 2 Since its inception in 1997, the number of hospitalist providers in North America has increased 30‐fold.3 In parallel, the number of PAs practicing in the field of hospital medicine has also increased greatly in recent years. According to the American Academy of Physician Assistants (AAPA) census reports, Hospital Medicine first appeared as one of the specialty choices in the 2006 census (response rate, 33% of all individuals eligible to practice as PAs), when it was selected as the primary specialty by 239 PAs (1.1% of respondents). In the 2008 report (response rate, 35%), the number grew to 421 (1.7%) PAs.2

PA training programs emphasize primary care and offer limited exposure to inpatient medicine. After PA students complete their first 12 months of didactic coursework in the basic sciences, they typically spend the next year on clinical rotations, largely rooted in outpatient care.2, 4 Upon graduation, PAs do not have to pursue postgraduate training before beginning to practice in their preferred specialty areas; thus, a majority of PAs entering specialty areas are trained on the job. Hospital medicine is no exception.

In recent years, despite an increase in the number of PAs in Hospital Medicine, some medical centers have chosen to phase out the use of midlevel hospitalist providers (including PAs) with the purposeful decision to not hire new midlevel providers.5 The rationale for this strategy is that there is thought to be a steep learning curve that requires much time to overcome before these providers feel comfortable across the breadth of clinical cases. Before they become experienced and confident in caring for a highly complex heterogeneous patient population, they cannot operate autonomously and are not a cost‐effective alternative to physicians. The complexities associated with practicing in this field were clarified in 2006 when the Society of Hospital Medicine identified 51 core competencies in hospital medicine.3, 6 Some hospitalist programs are willing to provide their PAs with on‐the‐job training, but many programs do not have the educational expertise or the resources to make this happen. Structured and focused postgraduate training in hospital medicine seems like a reasonable solution to prepare newly graduating PAs that are interested in pursuing hospitalist careers, but such opportunities are very limited.7

To date, there is no available information about the learning needs of PAs working in hospital medicine settings. We hypothesized that understanding the learning needs of PA hospitalists would inform the development of more effective and efficient training programs. We studied PAs with experience working in hospital medicine to (1) identify self‐perceived gaps in their skills and knowledge upon starting their hospitalist careers and (2) understand their views about optimal training for careers in hospital medicine.

METHODS

Study Design

We conducted a cross-sectional survey of a convenience sample of self-identified PAs working in adult hospital medicine. The survey was distributed using an electronic survey program.

Participants

The subjects for the survey were identified through the Facebook group "PAs in Hospital Medicine," which had 133 members as of July 2010. This source was selected because it was the most comprehensive available list of self-identified hospitalist PAs. Additionally, the group allowed us to send individualized invitations to complete the survey, along with subsequent reminder messages to nonresponders. Subjects were eligible to participate if they were PAs with experience working in hospital medicine settings caring for adult internal medicine inpatients.

Survey Instrument

The survey instrument was developed based on the Core Competencies in Hospital Medicine, with the goal of identifying the knowledge and skill gaps that were present when PA hospitalists started their careers.

In one section, respondents were asked about content areas among the Core Competencies in Hospital Medicine that they believed would have enhanced their effectiveness in practicing hospital medicine had they received additional training before starting their work as hospitalists. Response options ranged from "Strongly Agree" to "Strongly Disagree." Because some content areas seemed more relevant to physicians, our study team (including a hospitalist physician, a senior hospitalist PA, two curriculum development experts, one medical education research expert, and an experienced hospital medicine research assistant) selected, through rigorous discussion, the topics felt to be particularly germane to PA hospitalists. The relevance of this content to PA hospitalists was confirmed through pilot testing of the instrument. Another series of questions asked the PAs about their views on formal postgraduate training programs. Subjects were also queried about the frequency with which they performed various procedures (scale: Never, Rarely [1-2/year], Regularly [1-2/month], Often [1-2/week]) and whether they felt it was necessary for PAs to have the procedural skills listed as part of the Core Competencies in Hospital Medicine (scale: Not necessary, Preferable, Essential). Finally, the survey included a question about the PAs' preferred learning methods, asking them to rate the helpfulness of various approaches (scale: Not at all, Little, Some, A lot, Tremendously). Demographic information was also collected. The instrument was pilot-tested for clarity with the 9 PA hospitalists affiliated with our hospitalist service and was iteratively revised based on their feedback.

Data Collection and Analysis

Between September and December 2010, the survey invitations were sent as Facebook messages to the 133 members of the Facebook group PAs in Hospital Medicine. Sixteen members could not be contacted because their account setup did not allow us to send messages, and 14 were excluded because they were non‐PA members. In order to maximize participation, up to 4 reminder messages were sent to the 103 targeted PAs. The survey results were analyzed using Stata 11. Descriptive statistics were used to characterize the responses.
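The authors analyzed responses in Stata 11; purely as an illustration (this is our own sketch, not the authors' code, and the function name is ours), the recruitment arithmetic described above can be reproduced in Python: 133 group members, minus 16 unreachable and 14 non-PA members, yields the 103 targeted PAs, and 69 responses give the 67% response rate reported in the RESULTS section.

```python
# Sketch of the recruitment arithmetic reported in the paper.
def response_rate(members: int, unreachable: int, non_pa: int,
                  responded: int) -> tuple[int, int]:
    """Return (number of targeted PAs, response rate as a whole percentage)."""
    targeted = members - unreachable - non_pa   # 133 - 16 - 14 = 103
    rate = round(100 * responded / targeted)    # 100 * 69 / 103 ~= 67
    return targeted, rate

targeted, rate = response_rate(133, 16, 14, 69)
# targeted == 103, rate == 67
```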

This study protocol was approved by the institutional review board.

RESULTS

Sixty-nine PAs responded (response rate, 67%). Table 1 provides demographic characteristics of the respondents. The majority of respondents were 26-35 years old and had worked as hospitalists for a mean of 4.3 years.

Characteristics of the 62 Physician Assistant Respondents Who Elected to Share Demographic and Personal Information

Characteristic* | Value
Abbreviations: ICU, intensive care unit; PA, physician assistant; SD, standard deviation.
* Seven PAs did not provide any personal or demographic information. Because of missing data, numbers may not correspond to the exact percentages.

Age, years, n (%)
  <26: 1 (2)
  26-30: 16 (29)
  31-35: 14 (25)
  36-40: 10 (18)
  41-45: 5 (9)
  >45: 10 (18)
Women, n (%): 35 (63)
Year of graduation from PA school, mode (SD): 2002 (7)
No. of years working/worked as hospitalist, mean (SD): 4.3 (3.4)
Completed any postgraduate training program, n (%): 0 (0)
Hospitalist was the first PA job, n (%): 30 (49)
Salary, US$, n (%)
  50,001-70,000: 1 (2)
  70,001-90,000: 32 (57)
  >90,000: 23 (41)
Location of hospital, n (%)
  Urban: 35 (57)
  Suburban: 21 (34)
  Rural: 5 (8)
Hospital characteristics, n (%)
  Academic medical center: 25 (41)
  Community teaching hospital: 20 (33)
  Community nonteaching hospital: 16 (26)
Responsibilities in addition to caring for inpatients on the medicine floor, n (%)
  Care for patients in ICU: 22 (35)
  Perform inpatient consultations: 31 (50)
  See outpatients: 11 (18)

Clinical Conditions

Table 2 shows the respondents' experience with 19 core competency clinical conditions before beginning their careers as hospitalist PAs. They reported having the most experience in managing diabetes and urinary tract infections, and the least experience in managing hospital-acquired pneumonia and sepsis syndrome.

Physician Assistant Experiences with 19 Core Clinical Conditions Before Starting a Career in Hospital Medicine

Clinical Condition | Mean (SD)*
Abbreviation: SD, standard deviation.
* Likert scale: 1, no experience, I knew nothing about this condition; 2, no experience, I had heard/read about this condition; 3, I had experience caring for 1 patient (simulated or real) with this condition; 4, I had experience caring for 2-5 patients with this condition; 5, I had experience caring for many (>5) patients with this condition.

Urinary tract infection: 4.5 (0.8)
Diabetes mellitus: 4.5 (0.8)
Asthma: 4.4 (0.9)
Community-acquired pneumonia: 4.3 (0.9)
Chronic obstructive pulmonary disease: 4.3 (1.0)
Cellulitis: 4.2 (0.9)
Congestive heart failure: 4.1 (1.0)
Cardiac arrhythmia: 3.9 (1.1)
Delirium and dementia: 3.8 (1.1)
Acute coronary syndrome: 3.8 (1.2)
Acute renal failure: 3.8 (1.1)
Gastrointestinal bleed: 3.7 (1.1)
Venous thromboembolism: 3.7 (1.2)
Pain management: 3.7 (1.2)
Perioperative medicine: 3.6 (1.4)
Stroke: 3.5 (1.2)
Alcohol and drug withdrawal: 3.4 (1.1)
Sepsis syndrome: 3.3 (1.1)
Hospital-acquired pneumonia: 3.2 (1.1)
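As an illustration only (a minimal sketch, not the authors' Stata analysis), each row's mean (SD) in Table 2 is derived from individual 1-5 Likert ratings; the sample ratings below are invented:

```python
# Compute a Table-2-style mean (SD) summary from raw Likert ratings.
from statistics import mean, stdev

def likert_summary(responses: list[int]) -> tuple[float, float]:
    """Mean and sample standard deviation, rounded to one decimal as in Table 2."""
    return round(mean(responses), 1), round(stdev(responses), 1)

# Invented ratings for one condition, from 7 hypothetical respondents.
m, sd = likert_summary([5, 4, 5, 3, 4, 5, 4])
# m == 4.3, sd == 0.8
```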

Procedures

Most PA hospitalists (67%) interpret electrocardiograms and chest X-rays regularly (more than 1-2/week). However, nearly all PA hospitalists never or rarely (less than 1-2/year) perform invasive procedures, including arthrocentesis (98%), lumbar puncture (100%), paracentesis (91%), thoracentesis (98%), central line placement (91%), peripherally inserted central catheter placement (91%), and peripheral intravenous insertion (91%). Despite performing these procedures infrequently, more than 50% of respondents indicated that it is either preferable or essential to be able to perform them.

Content Knowledge

The PA hospitalists indicated which content areas might have allowed them to be more successful had they learned the material before starting their hospitalist careers (Table 3). The top 4 topics that PA hospitalists believed would have helped them most in caring for inpatients were palliative care (85% agreed or strongly agreed), nutrition for hospitalized patients (84%), performing consultations in the hospital (64%), and prevention of health care-associated infections (62%).

Content Areas that 62 Respondent PAs Believed Would Have Enhanced Their Effectiveness in Practicing Hospital Medicine Had They Had Additional Training Before Starting Their Work as Hospitalists

Health Care System Topic | PAs Who Agreed or Strongly Agreed, n (%)
Palliative care: 47 (85)
Nutrition for hospitalized patients: 46 (84)
Performing consultations in hospital: 35 (64)
Prevention of health care-associated infections: 34 (62)
Diagnostic decision-making processes: 32 (58)
Patient handoff and transitions of care: 31 (56)
Evidence-based medicine: 28 (51)
Communication with patients and families: 27 (49)
Drug safety and drug interactions: 27 (49)
Team approach and multidisciplinary care: 26 (48)
Patient safety and quality improvement processes: 25 (45)
Care of elderly patients: 24 (44)
Medical ethics: 22 (40)
Patient education: 20 (36)
Care of uninsured or underinsured patients: 18 (33)

Professional Growth as Hospitalist Providers

PAs judged working with physician preceptors (mean ± SD, 4.5 ± 0.6) and discussing patients with consultants (4.3 ± 0.8) to be most helpful for their professional growth, whereas receiving feedback/audits about their performance (3.5 ± 1.0), attending conferences/lectures (3.6 ± 0.7), and reading journals/textbooks (3.6 ± 0.8) were rated as less useful. Respondents believed that the mean number of months required for new hospitalist PAs to become fully competent team members was 11 (SD, 8.6). Forty-three percent of respondents shared the perspective that some clinical experience in an inpatient setting was an essential prerequisite for entry into a hospitalist position. Although more than half (58%) felt that completion of a postgraduate training program in hospital medicine was not a necessary prerequisite, almost all (91%) indicated that they would have been interested in such a program, even if it meant receiving a stipend lower than a hospitalist PA salary during the first year on the job (Table 4).

Self-Reported Interest from 55 Respondents in Postgraduate Hospitalist Training Depending on Varying Levels of Incentives and Disincentives

Interest in Training | n (%)
Interested and willing to pay tuition: 1 (2)
Interested even if there was no stipend, as long as I didn't have to pay any additional tuition: 3 (5)
Interested ONLY if a stipend of at least 25% of a hospitalist PA salary was offered: 4 (7)
Interested ONLY if a stipend of at least 50% of a hospitalist PA salary was offered: 21 (38)
Interested ONLY if a stipend of at least 75% of a hospitalist PA salary was offered: 21 (38)
Interested ONLY if 100% of a hospitalist PA salary was offered: 4 (7)
Not interested under any circumstances: 1 (2)
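The 91% figure cited in the text can be reproduced from Table 4 by summing every category except the two in which respondents either required a full salary or declined outright. The short Python sketch below is our own illustration (the dictionary keys are abbreviated labels, not the survey's wording):

```python
# Table 4 counts (n = 55 respondents), keyed by abbreviated labels.
table4 = {
    "pay tuition": 1,
    "no stipend": 3,
    ">=25% stipend": 4,
    ">=50% stipend": 21,
    ">=75% stipend": 21,
    "only 100% salary": 4,
    "not interested": 1,
}

# Respondents interested even at a stipend below a full hospitalist PA salary.
excluded = ("only 100% salary", "not interested")
below_full_salary = sum(n for k, n in table4.items() if k not in excluded)
pct = round(100 * below_full_salary / sum(table4.values()))
# below_full_salary == 50, pct == 91
```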

DISCUSSION

Our survey addresses a wide range of topics related to PA hospitalists' learning needs, including their experience with the Core Competencies in Hospital Medicine and their views on the benefits of PA training after graduation. Although self-efficacy was not assessed, our study revealed that PAs choosing hospitalist careers have limited prior clinical experience treating many medical conditions that are managed in inpatient settings, such as sepsis syndrome. This inexperience with commonly seen clinical conditions, such as sepsis, for which following guidelines can both reduce costs and improve outcomes, is problematic. More experience and training with such conditions would almost certainly reduce variability, improve skills, and augment confidence. The observed variation in experience caring for conditions that often prompt admission to the hospital, among PAs starting their hospitalist careers, emphasizes the need to be learner-centered when training PAs, so as to provide tailored guidance and oversight.

Only a few other empirical research articles have focused on PA hospitalists. One article described a postgraduate training program for PAs in hospital medicine that was launched in 2008; the curriculum was developed based on the Core Competencies in Hospital Medicine, and the authors reported that, after 12 months of training, their first graduate functioned at the level of a PA with 4 years of experience.7 Several articles describe experiences using midlevel providers (including PAs) in general surgery, primary care medicine, cardiology, emergency medicine, critical care, pediatrics, and hospital medicine settings.5, 8-20 Many of these articles reported favorable results, showing that midlevel provider models were either superior or equivalent to physician-only models in terms of cost and quality measures. Many of these papers also alluded to the ways in which PAs have enabled graduate medical education training programs to comply with residents' duty-hour restrictions. A recent analysis comparing outcomes of inpatient care provided by a hospitalist-PA model versus a traditional resident-based model revealed a slightly longer length of stay on the PA team but similar charges, readmission rates, and mortality.19 Another paper revealed that patients admitted to a residents' service, compared with a nonteaching hospitalist service staffed by PAs and nurse practitioners, had higher comorbidity burdens and higher-acuity diagnoses.20 The authors suggested that this variance might be explained by differences in the training, abilities, and goals of the groups. No research article has sought to capture the perspectives of practicing hospitalist PAs.

Our study revealed that although half of respondents became hospitalists immediately after graduating from PA school, a majority agreed that additional clinical training in inpatient settings would have been welcomed and helpful. The results also reveal that although there is considerable perceived interest in postgraduate training programs in hospital medicine, there are very few such training opportunities for PAs.7, 21 The American Academy of Physician Assistants, the Society of Hospital Medicine, and the American Academy of Nurse Practitioners annually cosponsor the Adult Hospital Medicine Boot Camp for PAs and nurse practitioners to facilitate knowledge acquisition, but this course is truly an orientation rather than a comprehensive training program.22 Our findings suggest that more rigorous and thorough training in hospital medicine would be valued and appreciated by PA hospitalists.

Several limitations of this study should be considered. First, our survey respondents may not represent the entire spectrum of practicing PA hospitalists. However, the demographic data of the 421 PAs who indicated their specialty as hospital medicine in the 2008 National Physician Assistants Census Report were not dissimilar from those of our informants: 65% were women, and their mean number of years in hospital medicine was 3.9.2 Second, our study sample was small; it was difficult to identify a national sample of hospitalist PAs, and we had to resort to a creative use of social media to find one. Third, the study relied exclusively on self-report, and because we asked about perceived learning needs at the time respondents started working as hospitalists, recall bias cannot be excluded. However, questions addressing attitudes and beliefs can only be answered by the informants themselves. That said, input from the hospitalist physicians supervising these PAs about their training needs would have strengthened the reliability of the data, but this was not possible given the sampling strategy we elected to use. Finally, our survey instrument was developed based on the Core Competencies in Hospital Medicine, which is a blueprint for developing standardized curricula for teaching hospital medicine in medical school, postgraduate training programs (ie, residency, fellowship), and continuing medical education programs. It is not clear whether the same competencies should be expected of PA hospitalists, who may have different job descriptions from physician hospitalists.

In conclusion, we present the first national data on the self-perceived learning needs of PAs working in hospital medicine settings. This study collates the perceptions of PAs working in hospital medicine and highlights the fact that training in PA school does not adequately prepare them to care for hospitalized patients. Hospitalist groups may use these findings to coach and instruct newly hired or inexperienced hospitalist PAs, particularly until postgraduate training opportunities become more prevalent. PA schools may also consider these results when modifying their curricula, to emphasize the clinical content most relevant for a proportion of their graduates.

Acknowledgements

The authors would like to thank Drs. David Kern and Belinda Chen at Johns Hopkins Bayview Medical Center for their assistance in developing the survey instrument.

Financial support: This study was supported by the Linda Brandt Research Award program of the Association of Postgraduate PA Programs. Dr. Wright is a Miller‐Coulson Family Scholar and was supported through the Johns Hopkins Center for Innovative Medicine.

Disclosures: Dr. Torok and Ms. Lackner received a Linda Brandt Research Award from the Association of Postgraduate PA Programs for support of this study. Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine.

References
  1. United States Department of Labor, Bureau of Labor Statistics. Available at: http://www.bls.gov. Accessed February 16, 2011.
  2. American Academy of Physician Assistants. Available at: http://www.aapa.org. Accessed April 20, 2011.
  3. Society of Hospital Medicine. Available at: http://www.hospitalmedicine.org. Accessed January 24, 2011.
  4. Accreditation Review Commission on Education for the Physician Assistants Accreditation Standards. Available at: http://www.arc-pa.org/acc_standards. Accessed February 16, 2011.
  5. Parekh VI, Roy CL. Non-physician providers in hospital medicine: not so fast. J Hosp Med. 2010;5(2):103-106.
  6. Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN. Core competencies in hospital medicine: development and methodology. J Hosp Med. 2006;1:48-56.
  7. Will KK, Budavari AL, Wilkens JA, Mishark K, Hartsell ZC. A hospitalist postgraduate training program for physician assistants. J Hosp Med. 2010;5:94-98.
  8. Resnick AS, Todd BA, Mullen JL, Morris JB. How do surgical residents and non-physician practitioners play together in the sandbox? Curr Surg. 2006;63:155-164.
  9. Victorino GP, Organ CH. Physician assistant influence on surgery residents. Arch Surg. 2003;138:971-976.
  10. Buch KE, Genovese MY, Conigliaro JL, et al. Non-physician practitioners' overall enhancement to a surgical resident's experience. J Surg Educ. 2008;65:50-53.
  11. Roblin DW, Howard DH, Becker ER, Kathleen Adams E, Roberts MH. Use of midlevel practitioners to achieve labor cost savings in the primary care practice of an MCO. Health Serv Res. 2004;39:607-626.
  12. Grzybicki DM, Sullivan PJ, Oppy JM, Bethke AM, Raab SS. The economic benefit for family/general medicine practices employing physician assistants. Am J Manag Care. 2002;8:613-620.
  13. Kaissi A, Kralewski J, Dowd B. Financial and organizational factors affecting the employment of nurse practitioners and physician assistants in medical group practices. J Ambul Care Manage. 2003;26:209-216.
  14. Nishimura RA, Linderbaum JA, Naessens JM, Spurrier B, Koch MB, Gaines KA. A nonresident cardiovascular inpatient service improves residents' experiences in an academic medical center: a new model to meet the challenges of the new millennium. Acad Med. 2004;79:426-431.
  15. Kleinpell RM, Ely EW, Grabenkort R. Nurse practitioners and physician assistants in the intensive care unit: an evidence-based review. Crit Care Med. 2008;36:2888-2897.
  16. Carter AJ, Chochinov AH. A systematic review of the impact of nurse practitioners on cost, quality of care, satisfaction and wait times in the emergency department. CJEM. 2007;9:286-295.
  17. Mathur M, Rampersad A, Howard K, Goldman GM. Physician assistants as physician extenders in the pediatric intensive care unit setting: a 5-year experience. Pediatr Crit Care Med. 2005;6:14-19.
  18. Abrass CK, Ballweg R, Gilshannon M, Coombs JB. A process for reducing workload and enhancing residents' education at an academic medical center. Acad Med. 2001;76:798-805.
  19. Singh S, Fletcher KE, Schapira MM, et al. A comparison of outcomes of general medical inpatient care provided by a hospitalist-physician assistant model vs a traditional resident-based model. J Hosp Med. 2011;6:112-130.
  20. O'Connor AB, Lang VJ, Lurie SJ, et al. The effect of nonteaching services on the distribution of inpatient cases for internal medicine residents. Acad Med. 2009;84:220-225.
  21. Association of Postgraduate PA Programs. Available at: http://appap.org/Home/tabid/38/Default.aspx. Accessed February 16, 2011.
  22. Adult Hospital Medicine Boot Camp for PAs and NPs. Available at: http://www.aapa.org/component/content/article/23—general‐/673‐adult‐hospital‐medicine‐boot‐camp‐for‐pas‐and‐nps. Accessed February 16, 2011.
Journal of Hospital Medicine - 7(3)
190-194


Only a few other empiric research articles have focused on PA hospitalists. One article described a postgraduate training program for PAs in hospital medicine that was launched in 2008. The curriculum was developed based on the Core Competencies in Hospital Medicine, and the authors explained that after 12 months of training, their first graduate functioned at the level of a PA with 4 years of experience under her belt.7 Several articles describe experiences using midlevel providers (including PAs) in general surgery, primary care medicine, cardiology, emergency medicine, critical care, pediatrics, and hospital medicine settings.5, 820 Many of these articles reported favorable results showing that using midlevel providers was either superior or just as effective in terms of cost and quality measures to physician‐only models. Many of these papers alluded to the ways in which PAs have enabled graduate medical education training programs to comply with residents' duty‐hour restrictions. A recent analysis that compared outcomes related to inpatient care provided by a hospitalist‐PA model versus a traditional resident‐based model revealed a slightly longer length of stay on the PA team but similar charges, readmission rates, and mortality.19 Yet another paper revealed that patients admitted to a residents' service, compared with the nonteaching hospitalist service that uses PAs and nurse practitioners, were different, having higher comorbidity burdens and higher acuity diagnoses.20 The authors suggested that this variance might be explained by the difference in their training, abilities, and goals of the groups. There was no research article that sought to capture the perspectives of practicing hospitalist PAs.

Our study revealed that although half of respondents became hospitalists immediately after graduating from PA school, a majority agreed that additional clinical training in inpatient settings would have been welcomed and helpful. This study's results reveal that although there is a fair amount of perceived interest in postgraduate training programs in hospital medicine, there are very few training opportunities for PAs in hospital medicine.7, 21 The American Academy of Physician Assistants, the Society of Hospital Medicine, and the American Academy of Nurse Practitioners cosponsor Adult Hospital Medicine Boot Camp for PAs and nurse practitioners annually to facilitate knowledge acquisition, but this course is truly an orientation rather than a comprehensive training program.22 Our findings suggest that more rigorous and thorough training in hospital medicine would be valued and appreciated by PA hospitalists.

Several limitations of this study should be considered. First, our survey respondents may not represent the entire spectrum of practicing PA hospitalists. However, the demographic data of 421 PAs who indicated their specialty as hospital medicine in the 2008 National Physician Assistants Census Report were not dissimilar from our informants; 65% were women, and their mean number of years in hospital medicine was 3.9 years.2 Second, our study sample was small. It was difficult to identify a national sample of hospitalist PAs, and we had to resort to a creative use of social media to find a national sample. Third, the study relied exclusively on self‐report, and since we asked about their perceived learning needs when they started working as hospitalists, recall bias cannot be excluded. However, the questions addressing attitudes and beliefs can only be ascertained from the informants themselves. That said, the input from hospitalist physicians about training needs for the PAs who they are supervising would have strengthened the reliability of the data, but this was not possible given the sampling strategy that we elected to use. Finally, our survey instrument was developed based on the Core Competencies in Hospital Medicine, which is a blueprint to develop standardized curricula for teaching hospital medicine in medical school, postgraduate training programs (ie, residency, fellowship), and continuing medical education programs. It is not clear whether the same competencies should be expected of PA hospitalists who may have different job descriptions from physician hospitalists.

In conclusion, we present the first national data on self‐perceived learning needs of PAs working in hospital medicine settings. This study collates the perceptions of PAs working in hospital medicine and highlights the fact that training in PA school does not adequately prepare them to care for hospitalized patients. Hospitalist groups may use this study's findings to coach and instruct newly hired or inexperienced hospitalist PAs, particularly until postgraduate training opportunities become more prevalent. PA schools may consider the results of this study for modifying their curricula in hopes of emphasizing the clinical content that may be most relevant for a proportion of their graduates.

Acknowledgements

The authors would like to thank Drs. David Kern and Belinda Chen at Johns Hopkins Bayview Medical Center for their assistance in developing the survey instrument.

Financial support: This study was supported by the Linda Brandt Research Award program of the Association of Postgraduate PA Programs. Dr. Wright is a Miller‐Coulson Family Scholar and was supported through the Johns Hopkins Center for Innovative Medicine.

Disclosures: Dr. Torok and Ms. Lackner received a Linda Brandt Research Award from the Association of Postgraduate PA Programs for support of this study. Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine.

Physician assistants (PAs) have rapidly become an integral component of the United States health care delivery system, including in hospital medicine, the fastest growing medical field in the United States.1, 2 Since the field's inception in 1997, the number of hospitalist providers in North America has increased 30-fold.3 In parallel, the number of PAs practicing in hospital medicine has also grown considerably in recent years. According to the American Academy of Physician Assistants (AAPA) census reports, hospital medicine first appeared as a specialty choice in the 2006 census (response rate, 33% of all individuals eligible to practice as PAs), when it was selected as the primary specialty by 239 PAs (1.1% of respondents). In the 2008 report (response rate, 35%), that number grew to 421 (1.7%) PAs.2

PA training programs emphasize primary care and offer limited exposure to inpatient medicine. After completing their first 12 months of didactic coursework in the basic sciences, PA students typically spend the next year on clinical rotations, rooted largely in outpatient care.2, 4 Upon graduation, PAs are not required to pursue postgraduate training before practicing in their preferred specialty areas; thus, the majority of PAs entering specialty areas are trained on the job. Hospital medicine is no exception.

In recent years, despite the increasing number of PAs in hospital medicine, some medical centers have chosen to phase out midlevel hospitalist providers (including PAs) by deliberately not hiring new ones.5 The rationale is that a steep learning curve must be overcome before these providers feel comfortable across the breadth of clinical cases; until they become experienced and confident in caring for a highly complex, heterogeneous patient population, they cannot operate autonomously and are not a cost-effective alternative to physicians. The complexities of practicing in this field were clarified in 2006 when the Society of Hospital Medicine identified 51 core competencies in hospital medicine.3, 6 Some hospitalist programs are willing to provide their PAs with on-the-job training, but many lack the educational expertise or resources to do so. Structured, focused postgraduate training in hospital medicine seems a reasonable way to prepare newly graduating PAs interested in hospitalist careers, but such opportunities are very limited.7

To date, there is no available information about the learning needs of PAs working in hospital medicine settings. We hypothesized that understanding the learning needs of PA hospitalists would inform the development of more effective and efficient training programs. We studied PAs with experience working in hospital medicine to (1) identify self‐perceived gaps in their skills and knowledge upon starting their hospitalist careers and (2) understand their views about optimal training for careers in hospital medicine.

METHODS

Study Design

We conducted a cross‐sectional survey of a convenience sample of self‐identified PAs working in adult Hospital Medicine. The survey was distributed using an electronic survey program.

Participants

The subjects for the survey were identified through the Facebook group "PAs in Hospital Medicine," which had 133 members as of July 2010. This source was selected because it was the most comprehensive list of self-identified hospitalist PAs and because the group allowed us to send individualized invitations to complete the survey, along with subsequent reminder messages to nonresponders. Subjects were eligible to participate if they were PAs with experience working in hospital medicine settings caring for adult internal medicine inpatients.

Survey Instrument

The survey instrument was developed based on the Core Competencies in Hospital Medicine with the goal of identifying PA hospitalists' knowledge and skill gaps that were present when they started their hospitalist career.

In one section, respondents were asked about content areas among the Core Competencies in Hospital Medicine that they believed would have enhanced their effectiveness in practicing hospital medicine had they received additional training before starting their work as hospitalists. Response options ranged from "Strongly Agree" to "Strongly Disagree." Because some content areas seemed more relevant to physicians, our study team (a hospitalist physician, a senior hospitalist PA, two curriculum development experts, a medical education research expert, and an experienced hospital medicine research assistant) selected, through rigorous discussion, the topics felt to be particularly germane to PA hospitalists. The relevance of this content to PA hospitalists was confirmed through pilot testing of the instrument. Another series of questions asked the PAs about their views on formal postgraduate training programs. The subjects were also queried about the frequency with which they performed various procedures (scale: Never, Rarely [1-2/year], Regularly [1-2/month], Often [1-2/week]) and whether they felt it was necessary for PAs to have the procedural skills listed in the Core Competencies in Hospital Medicine (scale: Not necessary, Preferable, Essential). Finally, the survey asked about the PAs' preferred learning methods by rating the helpfulness of various approaches (scale: Not at all, Little, Some, A lot, Tremendously). Demographic information was also collected. The instrument was pilot-tested for clarity with the 9 PA hospitalists affiliated with our hospitalist service and was iteratively revised based on their feedback.

Data Collection and Analysis

Between September and December 2010, survey invitations were sent as Facebook messages to the 133 members of the Facebook group "PAs in Hospital Medicine." Sixteen members could not be contacted because their account settings did not allow us to send messages, and 14 were excluded because they were non-PA members. To maximize participation, up to 4 reminder messages were sent to the 103 targeted PAs. Survey results were analyzed using Stata 11; descriptive statistics were used to characterize the responses.
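The analysis itself was performed in Stata 11. As an illustration only, the response-rate arithmetic and the kind of descriptive summary reported for the Likert items can be sketched in Python; the ratings list below contains hypothetical sample values, not study data.

```python
import statistics

# Response-rate arithmetic from the Methods/Results:
# 133 group members, minus 16 who could not be messaged and
# 14 non-PA members, left 103 targeted PAs; 69 responded.
targeted = 133 - 16 - 14          # 103
respondents = 69
response_rate = round(100 * respondents / targeted)
print(f"Response rate: {response_rate}%")  # Response rate: 67%

# Descriptive statistics of the kind reported for Likert items,
# using hypothetical 1-5 experience ratings (not study data).
ratings = [5, 4, 4, 3, 5, 2, 4]
print(f"mean (SD): {statistics.mean(ratings):.1f} "
      f"({statistics.stdev(ratings):.1f})")
```

The same mean (SD) computation underlies the values reported in Tables 2 and 3 of the Results.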

This study protocol was approved by the institution's review board.

RESULTS

Sixty‐nine PAs responded (response rate, 67%). Table 1 provides demographic characteristics of the respondents. The majority of respondents were 26-35 years old and had worked as hospitalists for a mean of 4.3 years.

Characteristics of the 62 Physician Assistant Respondents Who Elected to Share Demographic and Personal Information
Characteristic*: Value
  • Abbreviations: ICU, intensive care unit; PA, physician assistant; SD, standard deviation.

  • Seven PAs did not provide any personal or demographic information.

  • Because of missing data, numbers may not correspond to the exact percentages.

Age, years, n (%)
  <26: 1 (2)
  26-30: 16 (29)
  31-35: 14 (25)
  36-40: 10 (18)
  41-45: 5 (9)
  >45: 10 (18)
Women, n (%): 35 (63)
Year of graduation from PA school, mode (SD): 2002 (7)
No. of years working/worked as hospitalist, mean (SD): 4.3 (3.4)
Completed any postgraduate training program, n (%): 0 (0)
Hospitalist was the first PA job, n (%): 30 (49)
Salary, US$, n (%)
  50,001-70,000: 1 (2)
  70,001-90,000: 32 (57)
  >90,000: 23 (41)
Location of hospital, n (%)
  Urban: 35 (57)
  Suburban: 21 (34)
  Rural: 5 (8)
Hospital characteristics, n (%)
  Academic medical center: 25 (41)
  Community teaching hospital: 20 (33)
  Community nonteaching hospital: 16 (26)
Responsibilities in addition to taking care of inpatients on medicine floor, n (%)
  Care for patients in ICU: 22 (35)
  Perform inpatient consultations: 31 (50)
  See outpatients: 11 (18)

Clinical Conditions

Table 2 shows the respondents' experience with 19 core competency clinical conditions before beginning their careers as hospitalist PAs. They reported having the most experience in managing diabetes mellitus and urinary tract infections, and the least experience in managing hospital-acquired pneumonia and sepsis syndrome.

Physician Assistant Experiences with 19 Core Clinical Conditions Before Starting Career in Hospital Medicine
Clinical Condition: Mean (SD)*
  • Abbreviation: SD, standard deviation.

  • Likert scale: 1, no experience, I knew nothing about this condition; 2, no experience, I had heard/read about this condition; 3, I had experience caring for 1 patient (simulated or real) with this condition; 4, I had experience caring for 2-5 patients with this condition; 5, I had experience caring for many (>5) patients with this condition.

Urinary tract infection: 4.5 (0.8)
Diabetes mellitus: 4.5 (0.8)
Asthma: 4.4 (0.9)
Community-acquired pneumonia: 4.3 (0.9)
Chronic obstructive pulmonary disease: 4.3 (1.0)
Cellulitis: 4.2 (0.9)
Congestive heart failure: 4.1 (1.0)
Cardiac arrhythmia: 3.9 (1.1)
Delirium and dementia: 3.8 (1.1)
Acute coronary syndrome: 3.8 (1.2)
Acute renal failure: 3.8 (1.1)
Gastrointestinal bleed: 3.7 (1.1)
Venous thromboembolism: 3.7 (1.2)
Pain management: 3.7 (1.2)
Perioperative medicine: 3.6 (1.4)
Stroke: 3.5 (1.2)
Alcohol and drug withdrawal: 3.4 (1.1)
Sepsis syndrome: 3.3 (1.1)
Hospital-acquired pneumonia: 3.2 (1.1)

Procedures

Most PA hospitalists (67%) interpret electrocardiograms and chest X-rays regularly (more than 1-2/week). However, nearly all PA hospitalists never or rarely (less than 1-2/year) perform any invasive procedures, including arthrocentesis (98%), lumbar puncture (100%), paracentesis (91%), thoracentesis (98%), central line placement (91%), peripherally inserted central catheter placement (91%), and peripheral intravenous insertion (91%). Despite performing these procedures infrequently, more than 50% of respondents indicated that it is either preferable or essential to be able to perform them.

Content Knowledge

The PA hospitalists indicated which content areas, had they learned the material before starting their hospitalist career, might have allowed them to be more successful (Table 3). The top 4 topics that PA hospitalists believed would have helped them most in caring for inpatients were palliative care (85% agreed or strongly agreed), nutrition for hospitalized patients (84%), performing consultations in the hospital (64%), and prevention of health care-associated infections (62%).

Content Areas that 62 Respondent PAs Believed Would Have Enhanced Their Effectiveness in Practicing Hospital Medicine Had They Had Additional Training Before Starting Their Work as Hospitalists
Health Care System Topic: PAs Who Agreed or Strongly Agreed, n (%)
Palliative care: 47 (85)
Nutrition for hospitalized patients: 46 (84)
Performing consultations in hospital: 35 (64)
Prevention of health care-associated infections: 34 (62)
Diagnostic decision-making processes: 32 (58)
Patient handoff and transitions of care: 31 (56)
Evidence-based medicine: 28 (51)
Communication with patients and families: 27 (49)
Drug safety and drug interactions: 27 (49)
Team approach and multidisciplinary care: 26 (48)
Patient safety and quality improvement processes: 25 (45)
Care of elderly patients: 24 (44)
Medical ethics: 22 (40)
Patient education: 20 (36)
Care of uninsured or underinsured patients: 18 (33)

Professional Growth as Hospitalist Providers

PAs judged working with physician preceptors (mean ± SD, 4.5 ± 0.6) and discussing patients with consultants (4.3 ± 0.8) to be most helpful for their professional growth, whereas receiving feedback/audits about their performance (3.5 ± 1.0), attending conferences/lectures (3.6 ± 0.7), and reading journals/textbooks (3.6 ± 0.8) were rated as less useful. Respondents believed that the mean number of months required for new hospitalist PAs to become fully competent team members was 11 (SD, 8.6). Forty-three percent of respondents shared the perspective that some clinical experience in an inpatient setting was an essential prerequisite for entry into a hospitalist position. Although more than half (58%) felt that completion of a postgraduate training program in hospital medicine was not a necessary prerequisite, almost all (91%) indicated that they would have been interested in such a program even if it meant receiving a stipend lower than a hospitalist PA salary during the first year on the job (Table 4).

Self‐Reported Interest from 55 Respondents in Postgraduate Hospitalist Training Depending on Varying Levels of Incentives and Disincentives
Interest in Training: n (%)
Interested and willing to pay tuition: 1 (2)
Interested even if there was no stipend, as long as I didn't have to pay any additional tuition: 3 (5)
Interested ONLY if a stipend of at least 25% of a hospitalist PA salary was offered: 4 (7)
Interested ONLY if a stipend of at least 50% of a hospitalist PA salary was offered: 21 (38)
Interested ONLY if a stipend of at least 75% of a hospitalist PA salary was offered: 21 (38)
Interested ONLY if 100% of a hospitalist PA salary was offered: 4 (7)
Not interested under any circumstances: 1 (2)

DISCUSSION

Our survey addressed a wide range of topics related to PA hospitalists' learning needs, including their experience with the Core Competencies in Hospital Medicine and their views on the benefits of postgraduate PA training. Although we did not assess self-efficacy, our study revealed that PAs choosing hospitalist careers have limited prior clinical experience treating many medical conditions that are managed in inpatient settings. This inexperience with commonly seen clinical conditions, such as sepsis syndrome, wherein following guidelines can both reduce costs and improve outcomes, is problematic. More experience and training with such conditions would almost certainly reduce variability, improve skills, and augment confidence. The observed variation in experience caring for conditions that often prompt hospital admission among PAs starting their hospitalist careers emphasizes the need to be learner-centered when training PAs, so as to provide tailored guidance and oversight.

Only a few other empiric research articles have focused on PA hospitalists. One article described a postgraduate training program for PAs in hospital medicine that was launched in 2008; the curriculum was developed from the Core Competencies in Hospital Medicine, and the authors reported that after 12 months of training, their first graduate functioned at the level of a PA with 4 years of experience.7 Several articles describe experiences using midlevel providers (including PAs) in general surgery, primary care medicine, cardiology, emergency medicine, critical care, pediatrics, and hospital medicine settings.5, 8-20 Many of these articles reported favorable results, showing that midlevel providers were either superior or equivalent to physician-only models in terms of cost and quality measures. Many also alluded to the ways in which PAs have enabled graduate medical education training programs to comply with residents' duty-hour restrictions. A recent analysis comparing outcomes of inpatient care provided by a hospitalist-PA model versus a traditional resident-based model revealed a slightly longer length of stay on the PA team but similar charges, readmission rates, and mortality.19 Another paper revealed that patients admitted to a residents' service, compared with a nonteaching hospitalist service using PAs and nurse practitioners, had higher comorbidity burdens and higher-acuity diagnoses; the authors suggested that this variance might be explained by differences in the training, abilities, and goals of the groups.20 We found no research article that sought to capture the perspectives of practicing hospitalist PAs.

Our study revealed that although half of respondents became hospitalists immediately after graduating from PA school, a majority agreed that additional clinical training in inpatient settings would have been welcome and helpful. The results also reveal that although there is considerable perceived interest in postgraduate training programs in hospital medicine, very few such training opportunities exist for PAs.7, 21 The American Academy of Physician Assistants, the Society of Hospital Medicine, and the American Academy of Nurse Practitioners annually cosponsor the Adult Hospital Medicine Boot Camp for PAs and nurse practitioners to facilitate knowledge acquisition, but this course is an orientation rather than a comprehensive training program.22 Our findings suggest that more rigorous and thorough training in hospital medicine would be valued and appreciated by PA hospitalists.

Several limitations of this study should be considered. First, our survey respondents may not represent the entire spectrum of practicing PA hospitalists. However, the demographic data of the 421 PAs who indicated hospital medicine as their specialty in the 2008 National Physician Assistants Census Report were not dissimilar from those of our informants: 65% were women, and their mean number of years in hospital medicine was 3.9.2 Second, our study sample was small; it was difficult to identify a national sample of hospitalist PAs, and we had to resort to a creative use of social media to find one. Third, the study relied exclusively on self-report, and because we asked about perceived learning needs at the time respondents started working as hospitalists, recall bias cannot be excluded. However, questions addressing attitudes and beliefs can only be ascertained from the informants themselves. That said, input from hospitalist physicians about the training needs of the PAs whom they supervise would have strengthened the reliability of the data, but this was not possible given the sampling strategy we elected to use. Finally, our survey instrument was developed from the Core Competencies in Hospital Medicine, which is a blueprint for developing standardized curricula for teaching hospital medicine in medical school, postgraduate training programs (ie, residency, fellowship), and continuing medical education programs. It is not clear whether the same competencies should be expected of PA hospitalists, who may have different job descriptions from physician hospitalists.

In conclusion, we present the first national data on the self-perceived learning needs of PAs working in hospital medicine settings. This study collates the perceptions of PAs working in hospital medicine and highlights that PA school training does not adequately prepare them to care for hospitalized patients. Hospitalist groups may use these findings to coach and instruct newly hired or inexperienced hospitalist PAs, particularly until postgraduate training opportunities become more prevalent. PA schools may consider these results when modifying their curricula, to emphasize the clinical content likely to be most relevant for a proportion of their graduates.

Acknowledgements

The authors would like to thank Drs. David Kern and Belinda Chen at Johns Hopkins Bayview Medical Center for their assistance in developing the survey instrument.

Financial support: This study was supported by the Linda Brandt Research Award program of the Association of Postgraduate PA Programs. Dr. Wright is a Miller‐Coulson Family Scholar and was supported through the Johns Hopkins Center for Innovative Medicine.

Disclosures: Dr. Torok and Ms. Lackner received a Linda Brandt Research Award from the Association of Postgraduate PA Programs for support of this study. Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine.

References
  1. United States Department of Labor, Bureau of Labor Statistics. Available at: http://www.bls.gov. Accessed February 16, 2011.
  2. American Academy of Physician Assistants. Available at: http://www.aapa.org. Accessed April 20, 2011.
  3. Society of Hospital Medicine. Available at: http://www.hospitalmedicine.org. Accessed January 24, 2011.
  4. Accreditation Review Commission on Education for the Physician Assistants Accreditation Standards. Available at: http://www.arc‐pa.org/acc_standards. Accessed February 16, 2011.
  5. Parekh VI, Roy CL. Non-physician providers in hospital medicine: not so fast. J Hosp Med. 2010;5(2):103-106.
  6. Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN. Core competencies in hospital medicine: development and methodology. J Hosp Med. 2006;1:48-56.
  7. Will KK, Budavari AL, Wilkens JA, Mishark K, Hartsell ZC. A hospitalist postgraduate training program for physician assistants. J Hosp Med. 2010;5:94-98.
  8. Resnick AS, Todd BA, Mullen JL, Morris JB. How do surgical residents and non-physician practitioners play together in the sandbox? Curr Surg. 2006;63:155-164.
  9. Victorino GP, Organ CH. Physician assistant influence on surgery residents. Arch Surg. 2003;138:971-976.
  10. Buch KE, Genovese MY, Conigliaro JL, et al. Non-physician practitioners' overall enhancement to a surgical resident's experience. J Surg Educ. 2008;65:50-53.
  11. Roblin DW, Howard DH, Becker ER, Kathleen Adams E, Roberts MH. Use of midlevel practitioners to achieve labor cost savings in the primary care practice of an MCO. Health Serv Res. 2004;39:607-626.
  12. Grzybicki DM, Sullivan PJ, Oppy JM, Bethke AM, Raab SS. The economic benefit for family/general medicine practices employing physician assistants. Am J Manag Care. 2002;8:613-620.
  13. Kaissi A, Kralewski J, Dowd B. Financial and organizational factors affecting the employment of nurse practitioners and physician assistants in medical group practices. J Ambul Care Manage. 2003;26:209-216.
  14. Nishimura RA, Linderbaum JA, Naessens JM, Spurrier B, Koch MB, Gaines KA. A nonresident cardiovascular inpatient service improves residents' experiences in an academic medical center: a new model to meet the challenges of the new millennium. Acad Med. 2004;79:426-431.
  15. Kleinpell RM, Ely EW, Grabenkort R. Nurse practitioners and physician assistants in the intensive care unit: an evidence-based review. Crit Care Med. 2008;36:2888-2897.
  16. Carter AJ, Chochinov AH. A systematic review of the impact of nurse practitioners on cost, quality of care, satisfaction and wait times in the emergency department. CJEM. 2007;9:286-295.
  17. Mathur M, Rampersad A, Howard K, Goldman GM. Physician assistants as physician extenders in the pediatric intensive care unit setting—a 5-year experience. Pediatr Crit Care Med. 2005;6:14-19.
  18. Abrass CK, Ballweg R, Gilshannon M, Coombs JB. A process for reducing workload and enhancing residents' education at an academic medical center. Acad Med. 2001;76:798-805.
  19. Singh S, Fletcher KE, Schapira MM, et al. A comparison of outcomes of general medical inpatient care provided by a hospitalist-physician assistant model vs a traditional resident-based model. J Hosp Med. 2011;6:112-130.
  20. O'Connor AB, Lang VJ, Lurie SJ, et al. The effect of nonteaching services on the distribution of inpatient cases for internal medicine residents. Acad Med. 2009;84:220-225.
  21. Association of Postgraduate PA Programs. Available at: http://appap.org/Home/tabid/38/Default.aspx. Accessed February 16, 2011.
  22. Adult Hospital Medicine Boot Camp for PAs and NPs. Available at: http://www.aapa.org/component/content/article/23—general‐/673‐adult‐hospital‐medicine‐boot‐camp‐for‐pas‐and‐nps. Accessed February 16, 2011.
Issue
Journal of Hospital Medicine - 7(3)
Page Number
190-194
Display Headline
Learning needs of physician assistants working in hospital medicine

Copyright © 2011 Society of Hospital Medicine

Correspondence Location
Johns Hopkins University School of Medicine, Johns Hopkins Bayview Medical Center, 5200 Eastern Avenue, MFL Building West Tower 6F CIMS Suite, Baltimore, MD 21224
Consultation Improvement Teaching Module

Display Headline
A case‐based teaching module combined with audit and feedback to improve the quality of consultations

An important role of the internist is that of inpatient medical consultant.1‐3 As consultants, internists make recommendations regarding the patient's medical care and help the primary team to care for the patient. This requires familiarity with the body of knowledge of consultative medicine, as well as process skills that relate to working with teams of providers.1, 4, 5 For some physicians, the knowledge and skills of medical consultation are acquired during residency; however, many internists feel inadequately prepared for their roles as consultants.6‐8 Because no specific requirements for medical consultation curricula during graduate medical education have been set forth, internists and other physicians do not receive uniform or comprehensive training in this area.3, 5‐7, 9 Although internal medicine residents may gain experience while performing consultations on subspecialty rotations (eg, cardiology), the teaching on these blocks tends to be focused on the specialty content and less so on consultative principles.1, 4

As inpatient care is increasingly being taken over by hospitalists, the role of the hospitalist has expanded to include medical consultation. It is estimated that 92% of hospitalists care for patients on medical consultation services.8 The Society of Hospital Medicine (SHM) has also included medical consultation as one of the core competencies of the hospitalist.2 Therefore, it is essential that hospitalists master the knowledge and skills that are required to serve as effective consultants.10, 11

An educational strategy that has been shown to be effective in improving medical practice is audit and feedback.12‐15 Providing physicians with feedback on their clinical practice has been shown to improve performance more so than other educational methods.12 Practice‐based learning and improvement (PBLI) utilizes this strategy, and it has become one of the core competencies stressed by the Accreditation Council for Graduate Medical Education (ACGME). It involves analyzing one's patient care practices in order to identify areas for improvement. In this study, we tested the impact of a newly developed one‐on‐one medical consultation educational module combined with audit and feedback in an attempt to improve the quality of the consultations performed by our hospitalists.

Materials and Methods

Study Design and Setting

This single group pre‐post educational intervention study took place at Johns Hopkins Bayview Medical Center (JHBMC), a 353‐bed university‐affiliated tertiary care medical center in Baltimore, MD, during the 2006‐2007 academic year.

Study Subjects

All 7 members of the hospitalist group at JHBMC who were serving on the medical consultation service during the study period participated. The internal medicine residents who elected to rotate on the consultation service during the study period were also exposed to the case‐based module component of the intervention.

Intervention

The educational intervention was delivered as a one‐on‐one session and lasted approximately 1 hour. The time was spent on the following activities:

  • A true‐false pretest to assess knowledge based on clinical scenarios (Appendix 1).

  • A case‐based module emphasizing the core principles of consultative medicine.16 The module was purposely designed to teach and stimulate thought around 3 complex general medical consultations. Each case is followed by questions about the scenario. The cases specifically address the role of the medical consultant and the ways to be most effective in this role based on the recommendations of experts in the field.1, 10 Additional details about the content and format can be viewed at http://www.jhcme.com/site.16 As the physician worked through the teaching cases, the teacher facilitated discussion around wrong answers and issues that the learner wanted to discuss.

  • The true‐false test to assess knowledge was once again administered (the posttest was identical to the pretest).

  • For the hospitalist faculty members only (and not the residents), audit and feedback was utilized. The physician was shown 2 of his/her most recent consults and was asked to reflect upon the strengths and weaknesses of the consult. The hospitalist was explicitly asked to critique them in light of the knowledge they gained from the consultation module. The teacher also gave specific feedback, both positive and negative, about the written consultations with attention directed specifically toward: the number of recommendations, the specificity of the guidance (eg, exact dosing of medications), clear documentation of their name and contact information, and documentation that the suggestions were verbally passed on to the primary team.

 

Evaluation Data

Learner knowledge, both at baseline and after the case‐based module, was assessed using a written test.

Consultations performed before and after the intervention were compared. Copies of up to 5 consults done by each hospitalist during the year before or after the educational intervention were collected. Identifiers and dates were removed from the consults so that scorers did not know whether the consults were preintervention or postintervention. Consults were scored out of a possible total of 4 to 6 points, depending on whether specific elements were applicable. One point was given for each of the following: (1) number of recommendations ≤5; (2) specific details for all drugs listed [if applicable]; (3) specific details for imaging studies suggested [if applicable]; (4) specific follow‐up documented; (5) consultant's name clearly written; and (6) verbal contact with the referring team documented. These 6 elements were included based on expert recommendation.10 All consults were scored by 2 hospitalists independently. Disagreements in scores were infrequent (on <10% of the 48 consults scored), and these differed by only 1 point in the overall score. The disagreements were settled by discussion and consensus. All consult scores were converted to a score out of 5 to allow comparisons to be made.
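The scoring scheme described above (4 to 6 applicable elements, rescaled to a score out of 5) can be sketched in a few lines of Python. This is an illustrative sketch only; the element names below are our own shorthand, not labels taken from the study instrument.

```python
# Hypothetical sketch of the consult scoring scheme described above.
# Each element earns 1 point; elements marked None are "not applicable"
# (e.g., no drugs were recommended), so the raw total ranges from 4 to 6
# and is rescaled to a score out of 5 for comparison.

def score_consult(elements: dict) -> float:
    """elements maps criterion name -> True/False, or None if not applicable."""
    applicable = {k: v for k, v in elements.items() if v is not None}
    raw = sum(bool(v) for v in applicable.values())
    return round(raw / len(applicable) * 5, 2)  # rescale to a score out of 5

consult = {
    "five_or_fewer_recommendations": True,
    "drug_details_specific": None,        # no drugs recommended -> not applicable
    "imaging_details_specific": True,
    "followup_documented": True,
    "name_clearly_written": False,
    "verbal_contact_documented": True,
}
print(score_consult(consult))  # 4 of 5 applicable elements met -> 4.0
```

Rescaling by the number of applicable elements is what allows consults with different numbers of applicable items to be compared on a common 5-point scale.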

Following the intervention, each participant completed an overall assessment of the educational experience.

Data Analysis

We examined the frequency of responses for each variable and reviewed the distributions. The knowledge scores on the written pretests were not normally distributed; therefore, when making comparisons to the posttest, we used the Wilcoxon signed‐rank test. In comparing the performance scores on the consults across the 2 time periods, we analyzed the results with both the Wilcoxon signed‐rank test and paired t tests. Because the results were equivalent with both tests, the means from the t tests are shown. Data were analyzed using STATA version 8 (Stata Corp., College Station, TX).
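To make the paired comparison concrete, here is a minimal sketch of the paired t statistic computed on invented pre/post scores (not the study's data); the study itself ran the analysis in STATA and confirmed the results with the Wilcoxon signed‐rank test.

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic: mean of within-pair differences over its standard error."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# Invented scores for illustration only (not data from the study)
pre  = [2.0, 3.0, 1.0, 2.0, 2.5, 3.0]
post = [3.0, 3.0, 3.75, 3.0, 3.5, 4.0]
print(round(paired_t(pre, post), 2))  # t ≈ 3.09, referred to a t distribution with n-1 df
```

The paired formulation tests the mean of within-pair differences rather than the difference of group means, which is why it is appropriate when the same scorers rate matched pre/post material.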

Results

Study Subjects

Among the 14 hospitalist faculty members who were on staff during the study period, 7 were performing medical consults and therefore participated in the study. The 7 faculty members had a mean age of 35 years; 5 (71%) were female, and 5 (71%) were board‐certified in Internal Medicine. The average elapsed time since completion of residency was 5.1 years and average number of years practicing as a hospitalist was 3.8 years (Table 1).

Characteristics of the Faculty Members and House Officers Who Participated in the Study

Faculty (n = 7)
Age in years, mean (SD): 35.57 (5.1)
Female, n (%): 5 (71%)
Board certified, n (%): 5 (71%)
Years since completion of residency, mean (SD): 5.1 (4.4)
Number of years in practice, mean (SD): 3.8 (2.9)
Weeks spent in medical consult rotation, mean (SD): 3.7 (0.8)
Have read consultation books, n (%): 5 (71%)

Housestaff (n = 11)
Age in years, mean (SD): 29.1 (1.8)
Female, n (%): 7 (64%)
Residency year, n (%):
  PGY1: 0 (0%)
  PGY2: 2 (20%)
  PGY3: 7 (70%)
  PGY4: 1 (10%)
Weeks spent in medical consult rotation, mean (SD): 1.5 (0.85)
Have read consultation books, n (%): 5 (50%)

There were 12 house‐staff members who were on their medical consultation rotation during the study period and were exposed to the intervention. Of the 12 house‐staff members, 11 provided demographic information. Characteristics of the 11 house‐staff participants are also shown in Table 1.

Premodule vs. Postmodule Knowledge Assessment

Both faculty and house‐staff performed very well on the true/false pretest. Median scores changed little from pretest to posttest; the change was not statistically significant for the faculty (pretest: 11/14, posttest: 12/14; P = 0.08) but did reach statistical significance for the house‐staff (pretest: 10/14, posttest: 12/14; P = 0.03).

Audit and Feedback

Of the 7 faculty who participated in the study, 6 performed consults both before and after the intervention. Using the consult scoring system, the scores for all 6 physicians' consults improved after the intervention compared to their earlier consults (Table 2). For 1 faculty member, the consult scores were statistically significantly higher after the intervention (P = 0.017). When all consults completed by the hospitalists were compared before and after the training, there was statistically significant improvement in consult scores (P < 0.001) (Table 2).

Comparisons of Scores for the Consultations Performed Before and After the Intervention

Consultant | Preintervention scores* (n = 27) | Mean | Postintervention scores* (n = 21) | Mean | P value†
A | 2, 3, 3.75, 3, 2.5 | 2.8 | 3, 3, 3, 4, 4 | 3.4 | 0.093
B | 3, 3, 3, 3, 1 | 2.6 | 4, 3, 3, 2.5 | 3.1 | 0.18
C | 2, 1.67 | 1.8 | 4, 2, 3 | 3.0 | 0.11
D | 4, 2.5, 3.75, 2.5, 3.75 | 3.3 | 3.75, 3 | 3.4 | 0.45
E | 2, 3, 1, 2, 2 | 2.0 | 3, 3, 3.75 | 3.3 | 0.017
F | 3, 3.75, 2.5, 4, 2 | 3.1 | 2, 3.75, 4, 4 | 3.3 | 0.27
All | | 2.7 | | 3.3 | 0.0006

  * Total possible score = 5.
  † P value obtained using t test. Significance of results was equivalent when analyzed using the Wilcoxon signed‐rank test.

Satisfaction with Consultation Curricula

All faculty and house‐staff participants felt that the intervention had an impact on them (19/19, 100%). Eighteen out of 19 participants (95%) would recommend the educational session to colleagues. After participating, 82% of learners felt confident in performing medical consultations. With respect to the audit and feedback process of reviewing their previously performed consultations, all physicians claimed that their written consultation notes would change in the future.

Discussion

This curricular intervention using a case‐based module combined with audit and feedback appears to have resulted not only in improved knowledge, but also changed physician behavior in the form of higher‐quality written consultations. The teaching sessions were also well received and valued by busy hospitalists.

A review of randomized trials of audit and feedback12 revealed that this strategy is effective in improving professional practice in a variety of areas, including laboratory overutilization,13, 14 clinical practice guideline adherence,15, 17 and antibiotic utilization.13 In 1 study, internal medicine specialists audited their consultation letters and most believed that there had been lasting improvements to their notes.18 However, this study did not objectively compare the consultation letters from before audit and feedback to those written afterward but instead relied solely on the respondents' self‐assessment. It is known that many residents and recent graduates of internal medicine programs feel inadequately prepared in the role of consultant.6, 8 This work describes a curricular intervention that served to augment confidence, knowledge, and actual performance in consultation medicine of physicians. Goldman et al.'s10 Ten Commandments for Effective Consultations, which were later modified by Salerno et al.,11 were highlighted in our case‐based teachings: determine the question being asked or how you can help the requesting physician, establish the urgency of the consultation, gather primary data, be as brief as appropriate in your report, provide specific recommendations, provide contingency plans and discuss their execution, define your role in conjunction with the requesting physician, offer educational information, communicate recommendations directly to the requesting physician, and provide daily follow‐up. These tenets informed the development of the consultation scoring system that was used to assess the quality of the written consultations produced by our consultant hospitalists.

Audit and feedback is similar to PBLI, one of the ACGME core competencies for residency training. Both attempt to engage individuals by having them analyze their patient care practices, looking critically to: (1) identify areas needing improvement, and (2) consider strategies that can be implemented to enhance clinical performance. We now show that consultative medicine is an area that appears to be responsive to a mixed methodological educational intervention that includes audit and feedback.

Faculty and house‐staff knowledge of consultative medicine was assessed both before and after the case‐based educational module. Both groups scored very highly on the true/false pretest, suggesting either that their knowledge was excellent at baseline or the test was not sufficiently challenging. If their knowledge was truly very high, then the intervention need not have focused on improving knowledge. It is our interpretation that the true/false knowledge assessment was not challenging enough and therefore failed to comprehensively characterize their knowledge of consultative medicine.

Several limitations of this study should be considered. First, the sample size was small, including only 7 faculty and 12 house‐staff members. However, these numbers were sufficient to show statistically significant overall improvements in both knowledge and consultation scores. Second, few consultations were performed by each faculty member (ranging from 2 to 5) before and after the intervention. This may explain why only 1 out of 6 faculty members showed statistically significant improvement in the quality of consults after the intervention. Third, the true/false format of the knowledge tests allowed the subjects to score very high on the pretest, thereby making it difficult to detect knowledge gained after the intervention. Fourth, the scale used to evaluate consults has not been previously validated. The elements assessed by this scale were decided upon based on guidance from the literature10 and the authors' expertise, thereby affording it content validity evidence.19 The recommendations that guided the scale's development have been shown to improve compliance with the recommendations put forth by the consultant.1, 11 Internal structure validity evidence was conferred by the high level of agreement in scores between the independent raters. Relation to other variables validity evidence may be considered because doctors D and F scored highest on this scale and they are the 2 physicians most experienced in consult medicine. Finally, the educational intervention was time‐intensive for both learners and teacher, consisting of an hour‐long one‐on‐one session. This can be difficult to incorporate into a busy hospitalist program. The intervention can be made more efficient by having learners take the web‐based module online independently and then meet with the teacher for the audit and feedback component.

This consult medicine curricular intervention involving audit and feedback was beneficial to hospitalists and resulted in improved consultation notes. While resource intensive, the one‐on‐one teaching session appears to have worked and resulted in outcomes that are meaningful with respect to patient care.

References
  1. Gross R, Caputo G.Kammerer and Gross' Medical Consultation: the Internist on Surgical, Obstetric, and Psychiatric Services.3rd ed.Baltimore:Williams and Wilkins;1998.
  2. Society of Hospital Medicine.Hospitalist as consultant.J Hosp Med.2006;1(S1):70.
  3. Deyo R.The internist as consultant.Arch Intern Med.1980;140:137138.
  4. Byyny R, Siegler M, Tarlov A.Development of an academic section of general internal medicine.Am J Med.1977;63(4):493498.
  5. Moore R, Kammerer W, McGlynn T, Trautlein J, Burnside J.Consultations in internal medicine: a training program resource.J Med Educ.1977;52(4):323327.
  6. Devor M, Renvall M, Ramsdell J.Practice patterns and the adequacy of residency training in consultation medicine.J Gen Intern Med.1993;8(10):554560.
  7. Bomalaski J, Martin G, Webster J.General internal medicine consultation: the last bridge.Arch Intern Med.1983;143:875876.
  8. Plauth W,Pantilat S, Wachter R, Fenton C.Hospitalists' perceptions of their residency training needs: results of a national survey.Am J Med.2001;111(3):247254.
  9. Robie P.The service and educational contributions of a general medicine consultation service.J Gen Intern Med.1986;1:225227.
  10. Goldman L, Lee T, Rudd P.Ten commandments for effective consultations.Arch Intern Med.1983;143:17531755.
  11. Salerno S, Hurst F, Halvorson S, Mercado D.Principles of effective consultation, an update for the 21st‐century consultant.Arch Intern Med.2007;167:271275.
  12. Jamtvedt G, Young J, Kristoffersen D, O'Brien M, Oxman A.Does telling people what they have been doing change what they do? A systematic review of the effects of audit and feedback.Qual Saf Health Care.2006;15:433436.
  13. Miyakis S, Karamanof G, Liontos M, Mountokalakis T.Factors contributing to inappropriate ordering of tests in an academic medical department and the effect of an educational feedback strategy.Postgrad Med J.2006;82:823829.
  14. Winkens R, Pop P, Grol R, et al.Effects of routine individual feedback over nine years on general practitioners' requests for tests.BMJ.1996;312:490.
  15. Kisuule F, Wright S, Barreto J, Zenilman J.Improving antibiotic utilization among hospitalists: a pilot academic detailing project with a public health approach.J Hosp Med.2008;3(1):6470.
  16. Feldman L, Minter‐Jordan M. The role of the medical consultant. Johns Hopkins Consultative Medicine Essentials for Hospitalists. Available at:http://www.jhcme.com/site/article.cfm?ID=8. Accessed April2009.
  17. Hysong S, Best R, Pugh J.Audit and feedback and clinical practice guideline adherence: making feedback actionable.Implement Sci.2006;1:9.
  18. Keely E, Myers K, Dojeiji S, Campbell C.Peer assessment of outpatient consultation letters—feasibility and satisfaction.BMC Med Educ.2007;7:13.
  19. Beckman TJ, Cook DA, Mandrekar JN.What is the validity evidence for assessment of clinical teaching?J Gen Intern Med.2005;20:11591164.
Issue
Journal of Hospital Medicine - 4(8)
Page Number
486-489
Legacy Keywords
audit and feedback, medical consultation, medical education


Audit and feedback is similar to PBLI, one of the ACGME core competencies for residency training. Both attempt to engage individuals by having them analyze their patient care practices, looking critically to: (1) identify areas needing improvement, and (2) consider strategies that can be implemented to enhance clinical performance. We now show that consultative medicine is an area that appears to be responsive to a mixed methodological educational intervention that includes audit and feedback.

Faculty and house‐staff knowledge of consultative medicine was assessed both before and after the case‐based educational module. Both groups scored very highly on the true/false pretest, suggesting either that their knowledge was excellent at baseline or the test was not sufficiently challenging. If their knowledge was truly very high, then the intervention need not have focused on improving knowledge. It is our interpretation that the true/false knowledge assessment was not challenging enough and therefore failed to comprehensively characterize their knowledge of consultative medicine.

Several limitations of this study should be considered. First, the sample size was small, including only 7 faculty and 12 house‐staff members. However, these numbers were sufficient to show statistically significant overall improvements in both knowledge and on the consultation scores. Second, few consultations were performed by each faculty member, ranging from 2 to 5, before and after the intervention. This may explain why only 1 out of 6 faculty members showed statistically significant improvement in the quality of consults after the intervention. Third, the true/false format of the knowledge tests allowed the subjects to score very high on the pretest, thereby making it difficult to detect knowledge gained after the intervention. Fourth, the scale used to evaluate consults has not been previously validated. The elements assessed by this scale were decided upon based on guidance from the literature10 and the authors' expertise, thereby affording it content validity evidence.19 The recommendations that guided the scale's development have been shown to improve compliance with the recommendations put forth by the consultant.1, 11 Internal structure validity evidence was conferred by the high level of agreement in scores between the independent raters. Relation to other variables validity evidence may be considered because doctors D and F scored highest on this scale and they are the 2 physicians most experienced in consult medicine. Finally, the educational intervention was time‐intensive for both learners and teacher. It consisted of a 1 hour‐long one‐on‐one session. This can be difficult to incorporate into a busy hospitalist program. The intervention can be made more efficient by having learners take the web‐based module online independently, and then meeting with the teacher for the audit and feedback component.

This consult medicine curricular intervention involving audit and feedback was beneficial to hospitalists and resulted in improved consultation notes. While resource intensive, the one‐on‐one teaching session appears to have worked and resulted in outcomes that are meaningful with respect to patient care.

An important role of the internist is that of inpatient medical consultant.1, 3 As consultants, internists make recommendations regarding the patient's medical care and help the primary team to care for the patient. This requires familiarity with the body of knowledge of consultative medicine, as well as process skills that relate to working with teams of providers.1, 4, 5 For some physicians, the knowledge and skills of medical consultation are acquired during residency; however, many internists feel inadequately prepared for their role as consultants.6-8 Because no specific requirements for medical consultation curricula during graduate medical education have been set forth, internists and other physicians do not receive uniform or comprehensive training in this area.3, 5-7, 9 Although internal medicine residents may gain experience while performing consultations on subspecialty rotations (eg, cardiology), the teaching on these blocks tends to focus on the specialty content rather than on consultative principles.1, 4

As inpatient care is increasingly being taken over by hospitalists, the role of the hospitalist has expanded to include medical consultation. It is estimated that 92% of hospitalists care for patients on medical consultation services.8 The Society of Hospital Medicine (SHM) has also included medical consultation as one of the core competencies of the hospitalist.2 Therefore, it is essential that hospitalists master the knowledge and skills that are required to serve as effective consultants.10, 11

An educational strategy that has been shown to be effective in improving medical practice is audit and feedback.12-15 Providing physicians with feedback on their clinical practice has been shown to improve performance more than other educational methods.12 Practice‐based learning and improvement (PBLI) utilizes this strategy, and it has become one of the core competencies stressed by the Accreditation Council for Graduate Medical Education (ACGME). It involves analyzing one's patient care practices in order to identify areas for improvement. In this study, we tested the impact of a newly developed one‐on‐one medical consultation educational module that was combined with audit and feedback in an attempt to improve the quality of the consultations being performed by our hospitalists.

Materials and Methods

Study Design and Setting

This single group pre‐post educational intervention study took place at Johns Hopkins Bayview Medical Center (JHBMC), a 353‐bed university‐affiliated tertiary care medical center in Baltimore, MD, during the 2006‐2007 academic year.

Study Subjects

All 7 members of the hospitalist group at JHBMC who were serving on the medical consultation service during the study period participated. The internal medicine residents who elected to rotate on the consultation service during the study period were also exposed to the case‐based module component of the intervention.

Intervention

The educational intervention was delivered as a one‐on‐one session and lasted approximately 1 hour. The time was spent on the following activities:

  • A true‐false pretest to assess knowledge based on clinical scenarios (Appendix 1).

  • A case‐based module emphasizing the core principles of consultative medicine.16 The module was purposely designed to teach and stimulate thought around 3 complex general medical consultations. Each case is followed by questions about the scenario. The cases specifically address the role of the medical consultant and the ways to be most effective in this role, based on the recommendations of experts in the field.1, 10 Additional details about the content and format can be viewed at http://www.jhcme.com/site.16 As the physician worked through the teaching cases, the teacher facilitated discussion around wrong answers and issues that the learner wanted to discuss.

  • The true‐false test to assess knowledge was once again administered (the posttest was identical to the pretest).

  • For the hospitalist faculty members only (and not the residents), audit and feedback was utilized. The physician was shown 2 of his/her most recent consults and was asked to reflect upon the strengths and weaknesses of the consult. The hospitalist was explicitly asked to critique them in light of the knowledge they gained from the consultation module. The teacher also gave specific feedback, both positive and negative, about the written consultations with attention directed specifically toward: the number of recommendations, the specificity of the guidance (eg, exact dosing of medications), clear documentation of their name and contact information, and documentation that the suggestions were verbally passed on to the primary team.

 

Evaluation Data

Learner knowledge, both at baseline and after the case‐based module, was assessed using a written test.

Consultations performed before and after the intervention were compared. Copies of up to 5 consults done by each hospitalist during the year before or after the educational intervention were collected. Identifiers and dates were removed from the consults so that scorers did not know whether the consults were preintervention or postintervention. Consults were scored out of a possible total of 4 to 6 points, depending on whether specific elements were applicable. One point was given for each of the following: (1) number of recommendations ≤5; (2) specific details for all drugs listed (if applicable); (3) specific details for imaging studies suggested (if applicable); (4) specific follow‐up documented; (5) consultant's name clearly written; and (6) verbal contact with the referring team documented. These 6 elements were included based on expert recommendation.10 All consults were scored independently by 2 hospitalists. Disagreements in scores were infrequent (<10% of the 48 consults scored), differed by only 1 point in the overall score, and were settled by discussion and consensus. All consult scores were converted to a score out of 5 to allow comparisons to be made.
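The scoring scheme described above can be sketched in code. This is a minimal illustration, not the study's instrument: the function and element names are hypothetical, and only the one-point-per-applicable-element tally and the conversion to a 5-point scale follow the text.

```python
# Illustrative sketch of the consult scoring described above. The study's
# scoring was done by hand by two independent raters; names here are invented.
def score_consult(elements):
    """elements: dict mapping criterion name -> True/False, or None when the
    criterion was not applicable to the consult (e.g., no drugs suggested)."""
    applicable = {k: v for k, v in elements.items() if v is not None}
    raw = sum(applicable.values())  # one point per element satisfied
    # Convert to a score out of 5 so consults with different numbers of
    # applicable elements (4 to 6) can be compared, as the text describes.
    return 5 * raw / len(applicable)

consult = {
    "<=5 recommendations": True,
    "drug details specified": True,
    "imaging details specified": None,  # not applicable to this consult
    "follow-up documented": False,
    "consultant name legible": True,
    "verbal contact documented": True,
}
print(round(score_consult(consult), 2))  # 4 of 5 applicable elements -> 4.0
```

With five applicable elements, scores land on the same 0-to-5 scale used in Table 2, which is what makes the pre/post comparison possible.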

Following the intervention, each participant completed an overall assessment of the educational experience.

Data Analysis

We examined the frequency of responses for each variable and reviewed the distributions. The knowledge scores on the written pretests were not normally distributed; therefore, when making comparisons to the posttest, we used the Wilcoxon signed-rank test. In comparing the performance scores on the consults across the 2 time periods, we compared the results with both the Wilcoxon signed-rank test and paired t tests. Because the results were equivalent with both tests, the means from the t tests are shown. Data were analyzed using STATA version 8 (Stata Corp., College Station, TX).
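The paired comparison described above can be illustrated with a short sketch. The pre/post scores below are hypothetical, not the study's data, and only the paired t statistic is computed (the study also ran the Wilcoxon signed-rank test, which needs a statistics package to compute a P value).

```python
# Sketch of a paired t statistic on hypothetical pre/post consult scores.
from math import sqrt
from statistics import mean, stdev

pre  = [2.0, 3.0, 2.5, 2.0, 3.0]  # hypothetical preintervention scores
post = [3.0, 3.0, 3.5, 4.0, 4.0]  # hypothetical postintervention scores

# Paired test: work with the per-consult differences, not the raw groups.
diffs = [b - a for a, b in zip(pre, post)]
t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))  # t statistic, df = n - 1
print(round(t, 2))  # prints 3.16
```

In practice one would hand `t` (with n − 1 degrees of freedom) to a t-distribution CDF, or call a library routine such as scipy.stats.ttest_rel, to obtain the P values reported in Table 2.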

Results

Study Subjects

Among the 14 hospitalist faculty members who were on staff during the study period, 7 were performing medical consults and therefore participated in the study. The 7 faculty members had a mean age of 35 years; 5 (71%) were female, and 5 (71%) were board‐certified in Internal Medicine. The average elapsed time since completion of residency was 5.1 years and average number of years practicing as a hospitalist was 3.8 years (Table 1).

Characteristics of the Faculty Members and House Officers Who Participated in the Study
Faculty (n = 7)
Age in years, mean (SD): 35.57 (5.1)
Female, n (%): 5 (71%)
Board certified, n (%): 5 (71%)
Years since completion of residency, mean (SD): 5.1 (4.4)
Number of years in practice, mean (SD): 3.8 (2.9)
Weeks spent in medical consult rotation, mean (SD): 3.7 (0.8)
Have read consultation books, n (%): 5 (71%)
Housestaff (n = 11)
Age in years, mean (SD): 29.1 (1.8)
Female, n (%): 7 (64%)
Residency year, n (%):
  PGY1: 0 (0%)
  PGY2: 2 (20%)
  PGY3: 7 (70%)
  PGY4: 1 (10%)
Weeks spent in medical consult rotation, mean (SD): 1.5 (0.85)
Have read consultation books, n (%): 5 (50%)

There were 12 house‐staff members who were on their medical consultation rotation during the study period and were exposed to the intervention. Of the 12 house‐staff members, 11 provided demographic information. Characteristics of the 11 house‐staff participants are also shown in Table 1.

Premodule vs. Postmodule Knowledge Assessment

Both faculty and house‐staff performed very well on the true/false pretest. Median scores increased only slightly from pretest to posttest; the change was not statistically significant for the faculty (pretest: 11/14, posttest: 12/14; P = 0.08), but did reach statistical significance for the house‐staff (pretest: 10/14, posttest: 12/14; P = 0.03).

Audit and Feedback

Of the 7 faculty who participated in the study, 6 performed consults both before and after the intervention. Using the consult scoring system, the scores for all 6 physicians' consults improved after the intervention compared to their earlier consults (Table 2). For 1 faculty member, the consult scores were statistically significantly higher after the intervention (P = 0.017). When all consults completed by the hospitalists were compared before and after the training, there was statistically significant improvement in consult scores (P < 0.001) (Table 2).

Comparisons of Scores for the Consultations Performed Before and After the Intervention
Consultant | Preintervention scores* (n = 27) | Mean | Postintervention scores* (n = 21) | Mean | P value†
A | 2, 3, 3.75, 3, 2.5 | 2.8 | 3, 3, 3, 4, 4 | 3.4 | 0.093
B | 3, 3, 3, 3, 1 | 2.6 | 4, 3, 3, 2.5 | 3.1 | 0.18
C | 2, 1.67 | 1.8 | 4, 2, 3 | 3.0 | 0.11
D | 4, 2.5, 3.75, 2.5, 3.75 | 3.3 | 3.75, 3 | 3.4 | 0.45
E | 2, 3, 1, 2, 2 | 2.0 | 3, 3, 3.75 | 3.3 | 0.017
F | 3, 3.75, 2.5, 4, 2 | 3.1 | 2, 3.75, 4, 4 | 3.3 | 0.27
All | | 2.7 | | 3.3 | 0.0006

  • *Total possible score = 5.

  • †P value obtained using t test. Significance of results was equivalent when analyzed using the Wilcoxon signed-rank test.
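As a quick arithmetic check on the table above, each consultant's mean is simply the average of that consultant's listed scores; the snippet below verifies two of the rows (the consultant labels follow the table).

```python
# Verify per-consultant means in Table 2 from the listed individual scores.
from statistics import mean

pre_scores = {
    "A": [2, 3, 3.75, 3, 2.5],
    "E": [2, 3, 1, 2, 2],
}
post_scores = {
    "A": [3, 3, 3, 4, 4],
    "E": [3, 3, 3.75],
}
print(round(mean(pre_scores["E"]), 1))   # 2.0, matching the table
print(round(mean(post_scores["A"]), 1))  # 3.4, matching the table
```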

Satisfaction with Consultation Curricula

All faculty and house‐staff participants felt that the intervention had an impact on them (19/19, 100%). Eighteen out of 19 participants (95%) would recommend the educational session to colleagues. After participating, 82% of learners felt confident in performing medical consultations. With respect to the audit and feedback process of reviewing their previously performed consultations, all physicians claimed that their written consultation notes would change in the future.

Discussion

This curricular intervention, a case‐based module combined with audit and feedback, appears to have not only improved knowledge but also changed physician behavior, as evidenced by higher‐quality written consultations. The teaching sessions were also well received and valued by busy hospitalists.

A review of randomized trials of audit and feedback12 revealed that this strategy is effective in improving professional practice in a variety of areas, including laboratory overutilization,13, 14 clinical practice guideline adherence,17 and antibiotic utilization.15 In 1 study, internal medicine specialists audited their consultation letters and most believed that there had been lasting improvements to their notes.18 However, this study did not objectively compare the consultation letters from before audit and feedback to those written afterward but instead relied solely on the respondents' self‐assessment. It is known that many residents and recent graduates of internal medicine programs feel inadequately prepared in the role of consultant.6, 8 This work describes a curricular intervention that served to augment physicians' confidence, knowledge, and actual performance in consultation medicine. Goldman et al.'s10 Ten Commandments for Effective Consultations, which were later modified by Salerno et al.,11 were highlighted in our case‐based teachings:

  • Determine the question being asked or how you can help the requesting physician.

  • Establish the urgency of the consultation.

  • Gather primary data.

  • Be as brief as appropriate in your report.

  • Provide specific recommendations.

  • Provide contingency plans and discuss their execution.

  • Define your role in conjunction with the requesting physician.

  • Offer educational information.

  • Communicate recommendations directly to the requesting physician.

  • Provide daily follow‐up.

These tenets informed the development of the consultation scoring system that was used to assess the quality of the written consultations produced by our consultant hospitalists.

Audit and feedback is similar to PBLI, one of the ACGME core competencies for residency training. Both attempt to engage individuals by having them analyze their patient care practices, looking critically to: (1) identify areas needing improvement, and (2) consider strategies that can be implemented to enhance clinical performance. We now show that consultative medicine is an area that appears to be responsive to a multicomponent educational intervention that includes audit and feedback.

Faculty and house‐staff knowledge of consultative medicine was assessed both before and after the case‐based educational module. Both groups scored very highly on the true/false pretest, suggesting either that their knowledge was excellent at baseline or the test was not sufficiently challenging. If their knowledge was truly very high, then the intervention need not have focused on improving knowledge. It is our interpretation that the true/false knowledge assessment was not challenging enough and therefore failed to comprehensively characterize their knowledge of consultative medicine.

Several limitations of this study should be considered. First, the sample size was small, including only 7 faculty and 12 house‐staff members. However, these numbers were sufficient to show statistically significant overall improvements in both knowledge and consultation scores. Second, each faculty member performed few consultations (2 to 5) before and after the intervention. This may explain why only 1 out of 6 faculty members showed statistically significant improvement in the quality of consults after the intervention. Third, the true/false format of the knowledge tests allowed the subjects to score very high on the pretest, thereby making it difficult to detect knowledge gained after the intervention. Fourth, the scale used to evaluate consults has not been previously validated. The elements assessed by this scale were chosen based on guidance from the literature10 and the authors' expertise, thereby affording it content validity evidence.19 The recommendations that guided the scale's development have been shown to improve compliance with the suggestions put forth by the consultant.1, 11 Internal structure validity evidence was conferred by the high level of agreement in scores between the independent raters. Relations to other variables validity evidence may be considered because doctors D and F scored highest on this scale and they are the 2 physicians most experienced in consult medicine. Finally, the educational intervention was time‐intensive for both learners and teacher, consisting of an hour‐long one‐on‐one session; this can be difficult to incorporate into a busy hospitalist program. The intervention could be made more efficient by having learners complete the web‐based module online independently and then meet with the teacher for the audit and feedback component.

This consult medicine curricular intervention involving audit and feedback was beneficial to hospitalists and resulted in improved consultation notes. While resource intensive, the one‐on‐one teaching session appears to have worked and resulted in outcomes that are meaningful with respect to patient care.

References
  1. Gross R, Caputo G. Kammerer and Gross' Medical Consultation: The Internist on Surgical, Obstetric, and Psychiatric Services. 3rd ed. Baltimore: Williams and Wilkins; 1998.
  2. Society of Hospital Medicine. Hospitalist as consultant. J Hosp Med. 2006;1(S1):70.
  3. Deyo R. The internist as consultant. Arch Intern Med. 1980;140:137-138.
  4. Byyny R, Siegler M, Tarlov A. Development of an academic section of general internal medicine. Am J Med. 1977;63(4):493-498.
  5. Moore R, Kammerer W, McGlynn T, Trautlein J, Burnside J. Consultations in internal medicine: a training program resource. J Med Educ. 1977;52(4):323-327.
  6. Devor M, Renvall M, Ramsdell J. Practice patterns and the adequacy of residency training in consultation medicine. J Gen Intern Med. 1993;8(10):554-560.
  7. Bomalaski J, Martin G, Webster J. General internal medicine consultation: the last bridge. Arch Intern Med. 1983;143:875-876.
  8. Plauth W, Pantilat S, Wachter R, Fenton C. Hospitalists' perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111(3):247-254.
  9. Robie P. The service and educational contributions of a general medicine consultation service. J Gen Intern Med. 1986;1:225-227.
  10. Goldman L, Lee T, Rudd P. Ten commandments for effective consultations. Arch Intern Med. 1983;143:1753-1755.
  11. Salerno S, Hurst F, Halvorson S, Mercado D. Principles of effective consultation, an update for the 21st-century consultant. Arch Intern Med. 2007;167:271-275.
  12. Jamtvedt G, Young J, Kristoffersen D, O'Brien M, Oxman A. Does telling people what they have been doing change what they do? A systematic review of the effects of audit and feedback. Qual Saf Health Care. 2006;15:433-436.
  13. Miyakis S, Karamanof G, Liontos M, Mountokalakis T. Factors contributing to inappropriate ordering of tests in an academic medical department and the effect of an educational feedback strategy. Postgrad Med J. 2006;82:823-829.
  14. Winkens R, Pop P, Grol R, et al. Effects of routine individual feedback over nine years on general practitioners' requests for tests. BMJ. 1996;312:490.
  15. Kisuule F, Wright S, Barreto J, Zenilman J. Improving antibiotic utilization among hospitalists: a pilot academic detailing project with a public health approach. J Hosp Med. 2008;3(1):64-70.
  16. Feldman L, Minter‐Jordan M. The role of the medical consultant. Johns Hopkins Consultative Medicine Essentials for Hospitalists. Available at: http://www.jhcme.com/site/article.cfm?ID=8. Accessed April 2009.
  17. Hysong S, Best R, Pugh J. Audit and feedback and clinical practice guideline adherence: making feedback actionable. Implement Sci. 2006;1:9.
  18. Keely E, Myers K, Dojeiji S, Campbell C. Peer assessment of outpatient consultation letters—feasibility and satisfaction. BMC Med Educ. 2007;7:13.
  19. Beckman TJ, Cook DA, Mandrekar JN. What is the validity evidence for assessment of clinical teaching? J Gen Intern Med. 2005;20:1159-1164.
Issue
Journal of Hospital Medicine - 4(8)
Page Number
486-489
Display Headline
A case‐based teaching module combined with audit and feedback to improve the quality of consultations
Legacy Keywords
audit and feedback, medical consultation, medical education
Article Source
Copyright © 2009 Society of Hospital Medicine
Correspondence Location
The Collaborative Inpatient Medicine Service (CIMS), Johns Hopkins Bayview Medical Center, 5200 Eastern Ave., MFL West, 6th Floor, Baltimore, MD 21224