Occupational Hazard: Disruptive Behavior in Patients
While private or other public health care organizations can refuse to care for patients who have displayed disruptive behavior (DB), the VA Response to Disruptive Behavior of Patients law (38 CFR §17.107) prohibits the Veterans Health Administration (VHA) of the Department of Veterans Affairs (VA) from refusing care to veterans who display DB.1 The VHA defines DB as any behavior that is intimidating, threatening, or dangerous or that has jeopardized, or could jeopardize, the health or safety of patients, VHA staff, or others.2
VA Response to DB Law
The VA Response to Disruptive Behavior of Patients law requires the VHA to provide alternative care options that minimize risk while ensuring services; for example, providing care at a different location and/or time when additional staff are available to assist and monitor the patient. This can provide a unique opportunity to capture data on DB and the results of alternative forms of caring for this population.
The reason public health care organizations refuse care to persons who display DB is clear: DBs hinder business operations, are financially taxing, and put health care workers at risk.3-10 In 2009, the VHA spent close to $5.5 million on workers’ compensation and medical expenditures for 425 incidents, or about $13,000 per DB incident (Hodgson M, Drummond D, Van Male L. Unpublished data, 2010). In another study, 106 of 762 nurses in 1 hospital system reported an assault by a patient, and 30 required medical attention, which resulted in a total cost of $94,156.8 From 2002 to 2013, incidents of serious workplace violence requiring days off for the injured worker to recover were, on average, 4 times more common in health care than in other industries.6-11 Incidents of patient violence and aggression toward staff transcend specialization; however, hospital nurses and staff from the emergency, rehabilitation, and gerontology departments, psychiatric units, and home-based services are more susceptible to DB incidents than are other types of employees.8,10-19
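As a quick check of the per-incident figure, the following minimal sketch reproduces the arithmetic from the rounded totals above; the dollar amounts come from the unpublished VHA data cited in the text, and the variable names are illustrative.

```python
# Rough check of the per-incident cost implied by the unpublished 2010 VHA figures cited above.
total_cost_usd = 5_500_000   # approximate 2009 workers' compensation and medical expenditures
incidents = 425              # DB incidents covered by those expenditures

cost_per_incident = total_cost_usd / incidents
print(f"${cost_per_incident:,.0f} per incident")  # prints "$12,941 per incident", ie, about $13,000
```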
Data reported by health care staff suggest that patients rather than staff members or visitors initiate > 70% of serious physical attacks against health care workers.9,13,20-23 A 2015 study of VHA health care providers (HCPs) found that > 60% had experienced some form of DB, verbal abuse being the most prevalent, followed by sexual abuse and physical abuse.20 Of 72,000 VHA staff responding to a nationwide survey, 13% experienced, on average, ≥ 1 assault by a veteran (eg, something was thrown at them; they were pushed, kicked, slapped; or were threatened or injured by a weapon).8,21
To meet its legal obligations and deliver empathetic care, the VHA documents and analyzes data on all patients who exhibit DB. A local DB Committee (DBC) reviews the data, whether the behavior occurs in an inpatient or outpatient setting, such as a community-based outpatient clinic. Once a DB incident is reported, the DBC begins an evidence-based risk evaluation, including the option of contacting the persons who displayed or experienced the DB. Goals are to (1) prevent future DB incidents; (2) detect vulnerabilities in the environment; and (3) collaborate with HCPs and patients to provide optimal care while improving patient/provider interactions.
Effects of Disruptive Behavior
DB has negative consequences for both patients and health care workers and results in poor evaluations of care from both groups.27-32 Aside from interfering with safe medical care, DB also impacts care for other patients by delaying access to care and increasing appointment wait times due to employee absenteeism and staff shortages.3,4,20,32,33 For HCPs, patient violence is associated with unwillingness to provide care, briefer treatment periods, and decreases in occupational satisfaction, performance, and commitment.
Harmful health effects experienced by HCPs who have been victims of DB include fear, mood disorders, and anxiety, all symptoms of psychological distress, as well as posttraumatic stress disorder (PTSD).10,22,30,34-36 In a study of the impact on productivity of PTSD triggered by job-related DB, PTSD symptoms were associated with withdrawal from or minimizing encounters with patients, job turnover, and difficulty thinking.
Reporting Disruptive Behavior
The literature suggests that consistent and effective DB reporting is pivotal to improving the outcome and quality of care for those displaying DB.37-39 To provide high-quality health services to veterans who display DB, the VHA must promote the management and reporting of DB. Without knowledge of the full spectrum of DB events at VHA facilities, efforts to prevent or manage DB and ensure safety may have limited impact.7,37 Reports can be used for clinical decision making to optimize staff training in delivery of quality care while assuring staff safety. More than 80% of DB incidents occur during interactions with patients; thus, DB is a clinical issue that can affect the outcome of patient care.8,21
Documented DB reports are used to analyze the degree, frequency, and nature of incidents, which can reveal risk factors and inform preventive efforts and training for specific hazards.8,39 Some have argued that implementing a standardized DB reporting system is a crucial first step toward minimizing hazards and improving health care.38,40,41
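To illustrate how documented reports can be analyzed for the frequency and nature of incidents, the sketch below tallies a few hypothetical incident records by type and location; the field names and example records are assumptions for illustration, not the VHA’s actual reporting schema.

```python
from collections import Counter

# Hypothetical DB incident records; types and locations are illustrative only.
reports = [
    {"type": "verbal threat", "location": "emergency department"},
    {"type": "physical violence", "location": "inpatient psychiatry unit"},
    {"type": "verbal threat", "location": "emergency department"},
    {"type": "property damage", "location": "community-based outpatient clinic"},
]

# Tally incidents by type and by location; recurring combinations can point to specific
# hazards that warrant targeted prevention efforts or training.
by_type = Counter(r["type"] for r in reports)
by_location = Counter(r["location"] for r in reports)

print(by_type.most_common())      # eg, [('verbal threat', 2), ...]
print(by_location.most_common())  # eg, [('emergency department', 2), ...]
```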
When DB incidents were recorded through a hospital electronic reporting system and discussed in meetings, staff reported: (1) increased awareness of DB; (2) improved ability to manage DB incidents; and (3) amplified reporting of incidents.38,41,42 These findings support similar results from studies of an intervention implemented at VA Community Living Centers (CLCs) from 2013 to 2017: Staff Training in Assisted Living Residences (STAR-VA).4,12,19 The aim of STAR-VA was to minimize challenging dementia-related DB in CLCs. The intervention initially was established to train direct-care, assisted-living staff to provide better care to older patients displaying DB. Data revealed that documentation of DBs was the first step to ensuring staff and patient safety.18,40
VHA Reporting System
In 2013, the VA Office of Inspector General (OIG) found no standardized documentation of DB events across the VA health care system.42 Instead, DB events were documented in multiple records in various locations, including administrative and progress notes in the electronic health record (EHR), police reports, e-mails, or letters submitted to DBC chairs.42 This situation reduced administrators’ ability to consider all relevant information and render appropriate decisions in DB cases.42 In 2015, based on OIG recommendations, the VHA implemented the Disruptive Behavior Reporting System (DBRS) nationwide, which allowed all VHA staff to report DB events. The DBRS was designed to address factors likely to impede reporting and management of DB, namely, complexity of and lack of access to a central reporting system.43,44 The DBRS is currently the primary VHA tool to document DB events.
The DBRS consists of 32 questions in 5 sections relating to the (1) location and time of the DB event; (2) reporter; (3) disrupter; (4) DB event details; and (5) the person who experienced (experiencer) the event. The system also provides a list of the types of DB, such as inappropriate communication, bullying and/or intimidation, verbal or written threat of physical harm, physical violence, sexual harassment, sexual assault, and property damage. The DBRS has the potential to provide useful data on DB and DB reporting, such as which staff typically enter data and the number and types of DB occurring.
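As a rough illustration of the 5-section structure described above, a single report could be modeled as the record below; the class name, fields, and example values are hypothetical and do not reflect the actual DBRS implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class DisruptiveBehaviorReport:
    # Hypothetical model mirroring the 5 DBRS sections described in the text.
    location: str                    # (1) location and time of the DB event
    event_time: datetime
    reporter: str                    # (2) staff member submitting the report
    disrupter: str                   # (3) person who displayed the behavior
    event_details: str               # (4) narrative details of the DB event
    experiencer: str                 # (5) person who experienced the event
    behavior_types: List[str] = field(default_factory=list)  # eg, "physical violence"

example = DisruptiveBehaviorReport(
    location="community-based outpatient clinic",
    event_time=datetime(2019, 3, 1, 14, 30),
    reporter="registered nurse",
    disrupter="patient",
    event_details="Patient made a verbal threat during check-in.",
    experiencer="front-desk clerk",
    behavior_types=["verbal or written threat of physical harm"],
)
```

Representing each section as an explicit field would make it straightforward to aggregate reports by type, location, or experiencer, as in the tally sketched earlier.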
The DBRS complements the preexisting VHA policies and committees for care of veterans who display DB.1-3,14,21,24,25 The VHA Workplace Violence Prevention Program (WVPP) required facilities to submit data on DB events through a Workplace Behavioral Risk report. Data for the report were obtained from police reports, patient safety reports, DBC records, and notes in the EHR. Following implementation of the DBRS, the number of DB events per year became a part of facility performance standards.
VHA is creating novel approaches to handling DB that allow health care workers to render care in a safe and effective manner guided by documented information. For example, DBCs can recommend the use of Category I Patient Record Flags (PRFs) following documented DB, which informs staff of the potential risk of DB and provides guidance on protective methods to use when meeting with the patient.2,21,24 A survey of 140 VA hospital chiefs of staff indicated that DBC procedures were related to a decrease in the rates of assaults.1 Additionally, VA provides training for staff in techniques to promote personal safety, such as identifying signs that precede DB, using verbal deescalation, and practicing therapeutic containment.
Resistance to Reporting
Many health care employees and employers are reluctant to report DBs.22,31,43,45-48 Studies suggest health care organizations can cultivate a culture that is resistant to reporting DB.49,50 This complicates the ability of the health care system to design and maintain safety protocols and safer treatment plans.3,41,51 Worldwide, < 30% of DBs are reported.47 One barrier may be that supervisors are unwilling to acknowledge DBs on their units or do not provide sufficient staff time for training or reporting.31,46,47 HCPs may worry that a DB report will stigmatize patients, especially those who are elderly or have cognitive impairment, brain injury, psychological illness, or developmental disability. Patients with cognitive conditions are reportedly 20% more likely to be violent toward caregivers and providers.31 A dementia diagnosis, for example, is associated with a high likelihood for DB.30,52 More than 80% of DB events displayed by patients with dementia may go unreported.26,31,50,52
Some clinicians may attribute DB to physiologic conditions that need to be treated, not reported. However, employers can face various legal liabilities if steps are not taken to protect employees.47,51 Federal and state statutes require that organizations provide a healthy and safe employment environment for workers. This requires that employers institute reasonable protective measures, such as procedures to intervene, policies on addressing DB incidents, and/or training to minimize or deescalate DB.51,53 Also, employees may sue employers whose security measures are inadequate or who fail to properly investigate current and past evidence of DB or to identify vulnerabilities in the workplace. Unwillingness to investigate DB and safety-related workplace concerns has contributed to increased workplace violence and legal liability.52,53 The mission of caring and trust is consistent with assuring a safe environment.
Training and Empathetic Care
To combat cultural resistance to reporting DBs, more and perhaps different contextual approaches to education and training may be needed that address the ethical dilemmas and concerns of providers. The success of training relies on administrators supporting staff in reporting DB. Training must address providers’ conflicting beliefs and assist with identifying strategies to provide the best possible care for patients who display DB.1,38 HCPs are less likely to document a DB if they feel that administrators are creating documentation that will have negative consequences for a patient. Thus, leadership is responsible for ensuring that misconceptions are dispelled through training and other efforts and that information on how reported DB data will be used is communicated through appropriate channels.
Education and training must incorporate empathic care, which uses the information gathered to attempt to understand why patients behave as they do.55 Empathy in health care is multifaceted: It involves comprehending a patient’s viewpoint, circumstances, and feelings, as well as the capacity to assess whether one is comprehending these accurately in order to demonstrate supportive care.54,55
Improving patient and staff interaction once a problematic behavior is identified is the aim of empathic care. Increasing empathic care can improve compassionate, patient-centered interactions that begin once the patient seeks care. This approach has been shown to decrease DB by patients with dementia and improve their care, lessen staff problems during interactions, and increase staff morale.20 Experts call for the adoption of an interpersonal approach to patient encounters, and there is evidence that creating organizational change by moving toward compassionate care can lead to a positive impact for patients.54,55
Future Studies
There are growth opportunities in utilization of the DBRS. Analysis of the DBRS database by the VA Central Office (VACO) showed that the system is underutilized by facilities across the VA system.56 In response, VACO is taking steps to close these gaps by increasing staff training and promoting use of the DBRS. A 2015 pilot study of VHA providers showed that > 70% of providers had experienced a DB as defined by the VHA, but only 34% of them had reported their most recently experienced DB within the past 12 months.20 Thus, DBRS use must be studied within the context that patient-perpetrated DB is underreported in health care organizations.5,9,29,41,43,57,58 Studies addressing national DBRS utilization patterns and the cost associated with implementing the DBRS also are needed. One study suggests that there is an association between measures of facility complexity and staff perceptions of safety, which should be considered in analyzing DBRS usage.57 Studies addressing the role of the DBRS and the misconception that it is a punitive tool also are needed. The VHA should consider how the label “disruptive behavior” carries a negative connotation and may lead HCPs to avoid using the DBRS. Additionally, DB reporting may increase when HCPs understand that it is part of a comprehensive, consultative strategy to provide the best care to patients.
Conclusion
Accurate reporting of DB events enables multidisciplinary teams to develop strategies to minimize hazards and to implement interventions that support the safe delivery of health care to all patients. Improved reporting ensures an accurate representation of how disruptive events impact care provided within a facility and of the variables that may be associated with increased risk for these events.
Additionally, maximizing reporting provides the VHA with opportunities for DBCs to offer evidence-based risk assessment of violence and consultation to staff members who may benefit from improved competencies in working with patients who display DB. These potential improvements are consistent with the VHA I CARE values and will provide data that can inform recommendations for other agencies and health care organizations.
Acknowledgments
This work was supported by the Center of Innovation on Disability and Rehabilitation Research (CINDRR) of the Health Services Research and Development Service, Office of Research and Development, Department of Veterans Affairs.
1. Hodgson MJ, Mohr DC, Drummond DJ, Bell M, Van Male L. Managing disruptive patients in health care: necessary solutions to a difficult problem. Am J Ind Med. 2012;55(11):1009-1017.
2. US Department of Veterans Affairs, Veterans Health Administration. VHA Directive 2010-053. Patient Record Flags. https://www.va.gov/vhapublications/ViewPublication.asp?pub_ID=2341. Published December 3, 2010. Accessed March 29, 2019.
3. US Department of Veterans Affairs, Veterans Health Administration. VHA Directive 2012-026. Sexual Assaults and Other Defined Public Safety Incidents in VHA Facilities. https://www.va.gov/vhapublications/ViewPublication.asp?pub_ID=2797. Published September 27, 2012. Accessed March 29, 2019.
4. Curyto KJ, McCurry SM, Luci K, Karlin BE, Teri L, Karel MJ. Managing challenging behaviors of dementia in veterans: identifying and changing activators and consequences using STAR-VA. J Gerontol Nurs. 2017;43(2):33-43.
5. Speroni KG, Fitch T, Dawson E, Dugan L, Atherton M. Incidence and cost of nurse workplace violence perpetrated by hospital patients or patient visitors. J Emerg Nurs. 2014;40(3):218-228.
6. Phillips JP. Workplace violence against health care workers in the United States. N Engl J Med. 2016;374(17):1661-1669.
7. Janocha JA, Smith RT. Workplace safety and health in the health care and social assistance industry, 2003–07. https://www.bls.gov/opub/mlr/cwc/workplace-safety-and-health-in-the-health-care-and-social-assistance-industry-2003-07.pdf. Published August 30, 2010. Accessed February 19, 2019.
8. US Department of Labor, Occupational Safety and Health Administration. Workplace violence in healthcare: understanding the challenge. https://www.osha.gov/Publications/OSHA3826.pdf. Published December 2015. Accessed February 19, 2019.
9. US Department of Labor, Occupational Safety and Health Administration. Prevention of Workplace Violence in Healthcare and Social Assistance. https://www.govinfo.gov/content/pkg/FR-2016-12-07/pdf/2016-29197.pdf. Accessed January 20, 2017.
10. Gerberich SG, Church TR, McGovern PM, et al. An epidemiological study of the magnitude and consequences of work related violence: the Minnesota Nurses’ Study. Occup Environ Med. 2004;61(6):495-503.
11. Sherman MF, Gershon RRM, Samar SM, Pearson JM, Canton AN, Damsky MR. Safety factors predictive of job satisfaction and job retention among home healthcare aides. J Occup Environ Med. 2008;50(12):1430-1441.
12. Karel MJ, Teri L, McConnell E, Visnic S, Karlin BE. Effectiveness of expanded implementation of STAR-VA for managing dementia-related behaviors among veterans. Gerontologist. 2016;56(1):126-134.
13. US Department of Labor, Bureau of Labor Statistics. Nonfatal occupational injuries and illnesses requiring days away from work. https://www.bls.gov/news.release/archives/osh2_11192015.htm. Published November 19, 2015.
14. Beech B, Leather P. Workplace violence in the health care sector: A review of staff training and integration of training evaluation models. Aggression Violent Behav. 2006;11(1):27-43.
15. Campbell CL, McCoy S, Burg MA, Hoffman N. Enhancing home care staff safety through reducing client aggression and violence in noninstitutional care settings: a systematic review. Home Health Care Manage Pract. 2014;26(1):3-10.
16. Gallant-Roman MA. Strategies and tools to reduce workplace violence. AAOHN J. 2008;56(11):449-454.
17. Weinberger LE, Sreenivasan S, Smee DE, McGuire J, Garrick T. Balancing safety against obstruction to health care access: an examination of behavioral flags in the VA health care system. J Threat Assess Manage. 2018;5(1):35-41.
18. Elbogen EB, Johnson SC, Wagner HR, et al. Protective factors and risk modification of violence in Iraq and Afghanistan war veterans. J Clin Psychiatry. 2012;73(6):e767-e773.
19. Karlin BE, Visnic S, McGee JS, Teri L. Results from the multisite implementation of STAR-VA: a multicomponent psychosocial intervention for managing challenging dementia-related behaviors of veterans. Psychol Serv. 2014;11(2):200-208.
20. Semeah LM, Campbell CL, Cowper DC, Peet AC. Serving our homeless veterans: patient perpetrated violence as a barrier to health care access. J Pub Nonprofit Aff. 2017;3(2):223-234.
21. Hodgson MJ, Reed R, Craig T, et al. Violence in healthcare facilities: lessons from the Veterans Health Administration. J Occup Environ Med. 2004;46(11):1158-1165.
22. Farrell GA, Bobrowski C, Bobrowski P. Scoping workplace aggression in nursing: findings from an Australian study. J Adv Nurs. 2006;55(6):778-787.
23. Barling J, Rogers AG, Kelloway EK. Behind closed doors: in-home workers’ experience of sexual harassment and workplace violence. J Occup Health Psychol. 2001;6(3):255-269.
24. Pompeii LA, Schoenfisch AL, Lipscomb HJ, Dement JM, Smith CD, Upadhyaya M. Physical assault, physical threat, and verbal abuse perpetrated against hospital workers by patients or visitors in six U.S. hospitals. Am J Ind Med. 2015;58(11):1194-1204.
25. Sippel LM, Mota NP, Kachadourian LK, et al. The burden of hostility in U.S. veterans: results from the National Health and Resilience in Veterans Study. Psychiatry Res. 2016;243(suppl C):421-430.
26. Campbell C. Patient Violence and Aggression in Non-Institutional Health Care Settings: Predictors of Reporting By Healthcare Providers [doctoral dissertation]. Orlando: University of Central Florida; 2016.
27. Galinsky T, Feng HA, Streit J, et al. Risk factors associated with patient assaults of home healthcare workers. Rehabil Nurs. 2010;35(5):206-215.
28. Campbell CL. Incident reporting by health-care workers in noninstitutional care settings. Trauma, Violence Abuse. 2017;18(4):445-456.
29. Arnetz JE, Arnetz BB. Violence towards health care staff and possible effects on the quality of patient care. Soc Sci Med. 2001;52(3):417-427.
30. Gates D, Fitzwater E, Succop P. Relationships of stressors, strain, and anger to caregiver assaults. Issues Ment Health Nurs. 2003;24(8):775-793.
31. Brillhart B, Kruse B, Heard L. Safety concerns for rehabilitation nurses in home care. Rehabil Nurs. 2004;29(6):227-229.
32. Taylor H. Patient violence against clinicians: managing the risk. Innov Clin Neurosci. 2013;10(3):40-42.
33. US Department of Veterans Affairs, Office of Public and Intergovernmental Affairs. The Joint Commission releases results of surveys of the VA health care system. https://www.va.gov/opa/pressrel/pressrelease.cfm?id=2808. Updated August 5, 2014. Accessed February 19, 2019.
34. Büssing A, Höge T. Aggression and violence against home care workers. J Occup Health Psychol. 2004;9(3):206-219.
35. Geiger-Brown J, Muntaner C, McPhaul K, Lipscomb J, Trinkoff A. Abuse and violence during home care work as predictor of worker depression. Home Health Care Serv Q. 2007;26(1):59-77.
36. Gates DM, Gillespie GL, Succop P. Violence against nurses and its impact on stress and productivity. Nurs Econ. 2011;29(2):59-66.
37. Petterson IL, Arnetz BB. Psychosocial stressors and well-being in health care workers: the impact of an intervention program. Soc Sci Med. 1998;47(11):1763-1772.
38. Arnetz JE, Arnetz BB. Implementation and evaluation of a practical intervention programme for dealing with violence towards health care workers. J Adv Nurs. 2000;31(3):668-680.
39. Arnetz JE, Hamblin L, Russell J, et al. Preventing patient-to-worker violence in hospitals: outcome of a randomized controlled intervention. J Occup Environ Med. 2017;59(1):18-27.
40. Elbogen EB, Tomkins AJ, Pothuloori AP, Scalora MJ. Documentation of violence risk information in psychiatric hospital patient charts: an empirical examination. J Am Acad Psychiatry Law. 2003;31(1):58-64.
41. Winsvold Prang I, Jelson-Jorgensen LP. Should I report? A qualitative study of barriers to incident reporting among nurses working in nursing homes. Geriatr Nurs. 2014;35(6):441-447.
42. US Department of Veterans Affairs, Office of Inspector General. Healthcare inspection: management of disruptive patient behavior at VA medical facilities. Report No. 11-02585-129. https://www.va.gov/oig/pubs/VAOIG-11-02585-129.pdf. Published March 7, 2013. Accessed February 21, 2019.
43. Lipscomb J, London M. Not Part of the Job: How to Take a Stand Against Violence in the Work Setting. Silver Spring, MD: American Nurses Association; 2015.
44. May DD, Grubbs LM. The extent, nature, and precipitating factors of nurse assault among three groups of registered nurses in a regional medical center. J Emerg Nurs. 2002;28(1):11-17.
45. Wharton TC, Ford BK. What is known about dementia care recipient violence and aggression against caregivers? J Gerontol Soc Work. 2014;57(5):460-477.
46. Brennan C, Worrall-Davies A, McMillan D, Gilbody S, House A. The hospital anxiety and depression scale: a diagnostic meta-analysis of case-finding ability. J Psychosom Res. 2010;69(4):371-378.
47. McPhaul K, Lipscomb J, Johnson J. Assessing risk for violence on home health visits. Home Healthc Nurse. 2010;28(5):278-289.
48. McPhaul KM, London M, Murrett K, Flannery K, Rosen J, Lipscomb J. Environmental evaluation for workplace violence in healthcare and social services. J Safety Res. 2008;39(2):237-250.
49. Kelly JA, Somlai AM, DiFranceisco WJ, et al. Bridging the gap between the science and service of HIV prevention: transferring effective research-based HIV prevention interventions to community AIDS service providers. Am J Public Health. 2000;90(7):1082-1088.
50. Pawlin S. Reporting violence. Emerg Nurse. 2008;16(4):16-21.
51. Brakel SJ. Legal liability and workplace violence. J Am Acad Psychiatry Law. 1998;26(4):553-562.
52. Neuman JH, Baron RA. Workplace violence and workplace aggression: evidence concerning specific forms, potential causes, and preferred targets. J Manage. 1998;24(3):391-419.
53. Ferns T, Chojnacka I. Angels and swingers, matrons and sinners: nursing stereotypes. Br J Nurs. 2005;14(19):1028-1032.
54. Mercer SW, Reynolds WJ. Empathy and quality of care. Br J Gen Pract. 2002;52(suppl):S9-S12.
55. Lee TH. An Epidemic of Empathy in Healthcare: How to Deliver Compassionate, Connected Patient Care That Creates a Competitive Advantage. Columbus, OH: McGraw-Hill Education; 2015.
56. US Department of Veterans Affairs, Veterans Health Administration. Veterans Health Administration workplace violence prevention program (WVPP): disruptive behavior reporting system utilization report. Published 2017. https://vaww.portal2.va.gov/sites/wvpp/Shared%20Documents/DBRS%20Utilization%20Reports/FY2017%20DBRS%20Quarterly%20Utilization%20Report%20(Quarter%201).pdf. [Source not verified.]
57. Campbell CL, Burg, MA, Gammonley D. Measures for incident reporting of patient violence and aggression towards healthcare providers: a systematic review. Aggression Violent Behav. 2015;25(part B):314-322.
58. Carney PT, West P, Neily J, Mills PD, Bagian JP. The effect of facility complexity on perceptions of safety climate in the operating room: size matters. Am J Med Qual. 2010;25(6):457-461.
While private or other public health care organizations can refuse to care for patients who have displayed disruptive behavior (DB), the VA Response to Disruptive Behavior of Patients law (38 CFR §17.107) prohibits the Veterans Health Administration (VHA) of the Department of Veterans Affairs (VA) from refusing care to veterans who display DB.1 The VHA defines DB as any behavior that is intimidating, threatening, or dangerous or that has, or could, jeopardize the health or safety of patients, VHA staff, or others.2
VA Response to DB Law
The VA Response to Disruptive Behavior of Patients requires the VHA to provide alternative care options that minimize risk while ensuring services; for example, providing care at a different location and/or time when additional staff are available to assist and monitor the patient. This can provide a unique opportunity to capture data on DB and the results of alternative forms of caring for this population.
The reason public health care organizations refuse care to persons who display DB is clear: DBs hinder business operations, are financially taxing, and put health care workers at risk.3-10 “In 2009, the VHA spent close to $5.5 million on workers’ compensation and medical expenditures for 425 incidents–or about $130,000 per DB incident (Hodgson M, Drummond D, Van Male L. Unpublished data, 2010).” In another study, 106 of 762 nurses in 1 hospital system reported an assault by a patient, and 30 required medical attention, which resulted in a total cost of $94,156.8 From 2002 to 2013, incidents of serious workplace violence requiring days off for an injured worker to recover on average were 4 times more common in health care than in other industries.6-11 Incidents of patient violence and aggression toward staff transcend specialization; however, hospital nurses and staff from the emergency, rehabilitation and gerontology departments, psychiatric unit, and home-based services are more susceptible and vulnerable to DB incidents than are other types of employees.8,10-19
Data reported by health care staff suggest that patients rather than staff members or visitors initiate > 70% of serious physical attacks against health care workers.9,13,20-23 A 2015 study of VHA health care providers (HCPs) found that > 60% had experienced some form of DB, verbal abuse being the most prevalent, followed by sexual abuse and physical abuse.20 Of 72,000 VHA staff responding to a nationwide survey, 13% experienced, on average, ≥ 1 assault by a veteran (eg, something was thrown at them; they were pushed, kicked, slapped; or were threatened or injured by a weapon).8,21
To meet its legal obligations and deliver empathetic care, the VHA documents and analyzes data on all patients who exhibit DB. A local DB Committee (DBC) reviews the data, whether it occurs in an inpatient or outpatient setting, such as community-based outpatient clinics. Once a DB incident is reported, the DBC begins an evidence-based risk evaluation, including the option of contacting the persons who displayed or experienced the DB. Goals are to (1) prevent future DB incidents; (2) detect vulnerabilities in the environment; and (3) collaborate with HCPs and patients to provide optimal care while improving the patient/provider interactions.
Effects of Disruptive Behavior
DB has negative consequences for both patients and health care workers and results in poor evaluations of care from both groups.27-32 Aside from interfering with safe medical care, DB also impacts care for other patients by delaying access to care and increasing appointment wait times due to employee absenteeism and staff shortages.3,4,20,32,33 For HCPs, patient violence is associated with unwillingness to provide care, briefer treatment periods, and decreases in occupational satisfaction, performance, and commitment
Harmful health effects experienced by HCPs who have been victims of DB include fear, mood disorders, anxiety, all symptoms of psychological distress and posttraumatic stress disorder (PTSD).10,22,30,34-36 In a study of the impact on productivity of PTSD triggered by job-related DB, PTSD symptoms were associated with withdrawal from or minimizing encounters with patients, job turnover, and troubles with thinking
Reporting Disruptive Behavior
The literature suggests that consistent and effective DB reporting is pivotal to improving the outcome and quality of care for those displaying DB.37-39 To provide high-quality health services to veterans who display DB, the VHA must promote the management and reporting of DB. Without knowledge of the full spectrum of DB events at VHA facilities, efforts to prevent or manage DB and ensure safety may have limited impact.7,37 Reports can be used for clinical decision making to optimize staff training in delivery of quality care while assuring staff safety. More than 80% of DB incidents occur during interactions with patients, thus this is a clinical issue that can affect the outcome of patient care.8,21
Documented DB reports are used to analyze the degree, frequency, and nature of incidents, which might reveal risk factors and develop preventive efforts and training for specific hazards.8,39 Some have argued that implementing a standardized DB reporting system is a crucial first step toward minimizing hazards and improving health care.38,40,41
When DB incidents were recorded through a hospital electronic reporting system and discussed in meetings, staff reported: (1) increased awareness of DB; (2) improved ability to manage DB incidents; and (3) amplified reporting of incidents.38,41,42 These findings support similar results from studies of an intervention implemented at VA Community Living Centers (CLCs) from 2013 to 2017: Staff Training in Assisted Living Residences (STAR-VA).4,12,19 The aim of STAR-VA was to minimize challenging dementia-related DB in CLCs. The intervention initially was established to train direct-care, assisted-living staff to provide better care to older patients displaying DB. Data revealed that documentation of DBs was, the first step to ensuring staff and patient safety.18,40
VHA Reporting System
In 2013, the VA Office of Inspector General (OIG) found no standardized documentation of DB events across the VA health care system.42 Instead, DB events were documented in multiple records in various locations, including administrative and progress notes in the electronic health record (EHR), police reports, e-mails, or letters submitted to DBC chairs.42 This situation reduced administrators’ ability to consider all relevant information and render appropriate decisions in DB cases.42 In 2015, based on OIG recommendations, the VHA implemented the Disruptive Behavior Reporting System (DBRS) nationwide, which allowed all VHA staff to report DB events. The DBRS was designed to address factors likely to impede reporting and management of DB, namely, complexity of and lack of access to a central reporting system.43,44 The DBRS is currently the primary VHA tool to document DB events.
The DBRS consists of 32 questions in 5 sections relating to the (1) location and time of DB event; (2) reporter; (3) disrupter; (4) DB event details; and (5) the person who experienced (experiencer) the event. The system also provides a list of the types of DB, such as inappropriate communication, bullying and/or intimidation, verbal or written threat of physical harm, physical violence, sexual harassment, sexual assault, and property damage. The DBRS has the potential to provide useful data on DB and DB reporting, such as the typical staff entering data and the number and/or types of DB occurring.
The DBRS complements the preexisting VHA policies and committees for care of veterans who display DB.1-3,14,21,24,25 The VHA Workplace Violence Prevention Program (WVPP) required facilities to submit data on DB events through a Workplace Behavioral Risk report. Data for the report were obtained from police reports, patient safety reports, DBC records, and notes in the EHR. Following implementations of DBRS, the number of DB events per year became a part of facility performance standards.
VHA is creating novel approaches to handling DB that allow health care workers to render care in a safe and effective manner guided by documented information. For example, DBCs can recommend the use of Category I Patient Record Flags (PRFs) following documented DB, which informs staff of the potential risk of DB and provides guidance on protective methods to use when meeting with the patient.2,21,24 A survey of 140 VA hospital chiefs of staff indicated that DBC procedures were related to a decrease in the rates of assaults.1 Additionally, VA provides training for staff in techniques to promote personal safety, such as identifying signs that precede DB, using verbal deescalation, and practicing therapeutic containment.
Resistance to Reporting
Many health care employees and employers are reticent to report DBs.22,31,43,45-48 Studies suggest health care organizations can cultivate a culture that is resistant to reporting DB.49,50 This complicates the ability of the health care system to design and maintain safety protocols and safer treatment plans.3,41,51 Worldwide, < 30% of DBs are reported.47 One barrier may be that supervisors may not wish to acknowledge DBs on their units or may not provide sufficient staff time for training or reporting.31,46,47 HCPs may worry that a DB report will stigmatize patients, especially those who are elderly or have cognitive impairment, brain injury, psychological illness, or developmental disability. Patients with cognitive conditions are reportedly 20% more likely to be violent toward caregivers and providers.31 A dementia diagnosis, for example, is associated with a high likelihood for DB.30,52 More than 80% of DB events displayed by patients with dementia may go unreported.26,31,50,52
Some clinicians may attribute DB to physiologic conditions that need to be treated, not reported. However, employers can face various legal liabilities if steps are not taken to protect employees.47,51 Federal and state statutes require that organizations provide a healthy and safe employment environment for workers. This requires that employers institute reasonable protective measures, such as procedures to intervene, policies on addressing DB incidents, and/or training to minimize or deescalate DB.51,53 Also, employees may sue employers if security measures are inadequate or deficient in properly investigating current and past evidence of DB or identifying vulnerabilities in the workplace. Unwillingness to investigate DB and safety-related workplace concerns have contributed to increased workplace violence and legal liability.52,53 The mission of caring and trust is consistent with assuring a safe environment.
Training and Empathetic Care
To combat cultural resistance to reporting DBs, more and perhaps different contextual approaches to education and training may be needed that address ethical dilemmas and concerns of providers. The success of training relies on administrators supporting staff in reporting DB. Training must address providers’ conflicting beliefs and assist with identifying strategies to provide the best possible care for patients who display DB.1,38 HCPs are less likely to document a DB if they feel that administrators are creating documentation that will have negative consequences for a patient. Thus, leadership is responsible for ensuring that misconceptions are dispelled through training and other efforts and information on how reported DB data will be used is communicated through strategic channels.
Education and training must consider empathic care that attempts to understand why patients behave as they do through the information gathered.55 Empathy in health care is multifaceted: It involves comprehending a patient’s viewpoint, circumstances, and feelings and the capacity to analyze whether one is comprehending these accurately in order to demonstrate supportive care.54,55
Improving patient and staff interaction once a problematic behavior is identified is the aim of empathic care. Increasing empathic care can improve compassionate, patient-centered interactions that begin once the patient seeks care. This approach has proven to decrease DB by patients with dementia and improve their care, lessen staff problems during interactions, and increase staff morale.20 Experts call for the adoption of an interpersonal approach to patient encounters, and there is evidence that creating organizational change by moving toward compassionate care can lead to a positive impact for patients.54,55
Future Studies
There are growth opportunities in utilization of the DBRS. Analysis of the DBRS database by the VA Central Office (VACO) showed that the system is underutilized by facilities across the VA system.56 In response to this current underutilization, VACO is taking steps to close these gaps through increasing training to staff and promotion of the use of the DBRS. A 2015 pilot study of VHA providers showed that > 70% of providers had experienced a DB as defined by VHA, but only 34% of them reported their most recently experienced DB within the past 12 months.20 Thus, DBRS use must be studied within the context that patient-perpetrated DB is underreported in health care organizations.5,9,29,41,43,57,58 Studies addressing national DBRS utilization patterns and the cost associated with implementing the DBRS also are needed. One study suggests that there is an association between measures of facility complexity and staff perceptions of safety, which should be considered in analyzing DBRS usage.57 Studies addressing the role of the DBRS and misconceptions that the tool may represent a punitive tool also are needed. VHA should consider how the attribution “disruptive behavior” assigns a negative connotation and leads HCPs to avoid using the DBRS. Additionally, DB reporting may increase when HCPs understand that DB reporting is part of the comprehensive, consultative strategy to provide the best care to patients.
Conclusion
Accurate reporting of DB events enables the development of strategies for multidisciplinary teams to work together to minimize hazards and to provide interventions that provide for the safe delivery of health care to all patients. Improving reporting ensures there is an accurate representation of how disruptive events impact care provided within a facility—and what types of variables may be associated with increased risk for these types of events.
Additionally, ensuring that reporting is maximized also provides the VHA with opportunities for DBCs to offer evidence-based risk assessment of violence and consultation to staff members who may benefit from improved competencies in working with patients who display DB. These potential improvements are consistent with the VHA I CARE values and will provide data that can inform recommendations for health care in other agencies/health care organizations.
Acknowledgments
This work was supported by the Center of Innovation on Disability and Rehabilitation Research (CINDRR) of the Health Services Research and Development Service, Office of Research and Development, Department of Veterans Affairs.
While private or other public health care organizations can refuse to care for patients who have displayed disruptive behavior (DB), the VA Response to Disruptive Behavior of Patients law (38 CFR §17.107) prohibits the Veterans Health Administration (VHA) of the Department of Veterans Affairs (VA) from refusing care to veterans who display DB.1 The VHA defines DB as any behavior that is intimidating, threatening, or dangerous or that has, or could, jeopardize the health or safety of patients, VHA staff, or others.2
VA Response to DB Law
The VA Response to Disruptive Behavior of Patients requires the VHA to provide alternative care options that minimize risk while ensuring services; for example, providing care at a different location and/or time when additional staff are available to assist and monitor the patient. This can provide a unique opportunity to capture data on DB and the results of alternative forms of caring for this population.
The reason public health care organizations refuse care to persons who display DB is clear: DBs hinder business operations, are financially taxing, and put health care workers at risk.3-10 “In 2009, the VHA spent close to $5.5 million on workers’ compensation and medical expenditures for 425 incidents–or about $130,000 per DB incident (Hodgson M, Drummond D, Van Male L. Unpublished data, 2010).” In another study, 106 of 762 nurses in 1 hospital system reported an assault by a patient, and 30 required medical attention, which resulted in a total cost of $94,156.8 From 2002 to 2013, incidents of serious workplace violence requiring days off for an injured worker to recover on average were 4 times more common in health care than in other industries.6-11 Incidents of patient violence and aggression toward staff transcend specialization; however, hospital nurses and staff from the emergency, rehabilitation and gerontology departments, psychiatric unit, and home-based services are more susceptible and vulnerable to DB incidents than are other types of employees.8,10-19
Data reported by health care staff suggest that patients rather than staff members or visitors initiate > 70% of serious physical attacks against health care workers.9,13,20-23 A 2015 study of VHA health care providers (HCPs) found that > 60% had experienced some form of DB, verbal abuse being the most prevalent, followed by sexual abuse and physical abuse.20 Of 72,000 VHA staff responding to a nationwide survey, 13% experienced, on average, ≥ 1 assault by a veteran (eg, something was thrown at them; they were pushed, kicked, slapped; or were threatened or injured by a weapon).8,21
To meet its legal obligations and deliver empathetic care, the VHA documents and analyzes data on all patients who exhibit DB. A local DB Committee (DBC) reviews the data, whether it occurs in an inpatient or outpatient setting, such as community-based outpatient clinics. Once a DB incident is reported, the DBC begins an evidence-based risk evaluation, including the option of contacting the persons who displayed or experienced the DB. Goals are to (1) prevent future DB incidents; (2) detect vulnerabilities in the environment; and (3) collaborate with HCPs and patients to provide optimal care while improving the patient/provider interactions.
Effects of Disruptive Behavior
DB has negative consequences for both patients and health care workers and results in poor evaluations of care from both groups.27-32 Aside from interfering with safe medical care, DB also impacts care for other patients by delaying access to care and increasing appointment wait times due to employee absenteeism and staff shortages.3,4,20,32,33 For HCPs, patient violence is associated with unwillingness to provide care, briefer treatment periods, and decreases in occupational satisfaction, performance, and commitment
Harmful health effects experienced by HCPs who have been victims of DB include fear, mood disorders, anxiety, all symptoms of psychological distress and posttraumatic stress disorder (PTSD).10,22,30,34-36 In a study of the impact on productivity of PTSD triggered by job-related DB, PTSD symptoms were associated with withdrawal from or minimizing encounters with patients, job turnover, and troubles with thinking
Reporting Disruptive Behavior
The literature suggests that consistent and effective DB reporting is pivotal to improving the outcome and quality of care for those displaying DB.37-39 To provide high-quality health services to veterans who display DB, the VHA must promote the management and reporting of DB. Without knowledge of the full spectrum of DB events at VHA facilities, efforts to prevent or manage DB and ensure safety may have limited impact.7,37 Reports can be used for clinical decision making to optimize staff training in delivery of quality care while assuring staff safety. More than 80% of DB incidents occur during interactions with patients, thus this is a clinical issue that can affect the outcome of patient care.8,21
Documented DB reports are used to analyze the degree, frequency, and nature of incidents, which might reveal risk factors and develop preventive efforts and training for specific hazards.8,39 Some have argued that implementing a standardized DB reporting system is a crucial first step toward minimizing hazards and improving health care.38,40,41
When DB incidents were recorded through a hospital electronic reporting system and discussed in meetings, staff reported: (1) increased awareness of DB; (2) improved ability to manage DB incidents; and (3) amplified reporting of incidents.38,41,42 These findings support similar results from studies of an intervention implemented at VA Community Living Centers (CLCs) from 2013 to 2017: Staff Training in Assisted Living Residences (STAR-VA).4,12,19 The aim of STAR-VA was to minimize challenging dementia-related DB in CLCs. The intervention initially was established to train direct-care, assisted-living staff to provide better care to older patients displaying DB. Data revealed that documentation of DBs was, the first step to ensuring staff and patient safety.18,40
VHA Reporting System
In 2013, the VA Office of Inspector General (OIG) found no standardized documentation of DB events across the VA health care system.42 Instead, DB events were documented in multiple records in various locations, including administrative and progress notes in the electronic health record (EHR), police reports, e-mails, or letters submitted to DBC chairs.42 This situation reduced administrators’ ability to consider all relevant information and render appropriate decisions in DB cases.42 In 2015, based on OIG recommendations, the VHA implemented the Disruptive Behavior Reporting System (DBRS) nationwide, which allowed all VHA staff to report DB events. The DBRS was designed to address factors likely to impede reporting and management of DB, namely, complexity of and lack of access to a central reporting system.43,44 The DBRS is currently the primary VHA tool to document DB events.
The DBRS consists of 32 questions in 5 sections relating to the (1) location and time of DB event; (2) reporter; (3) disrupter; (4) DB event details; and (5) the person who experienced (experiencer) the event. The system also provides a list of the types of DB, such as inappropriate communication, bullying and/or intimidation, verbal or written threat of physical harm, physical violence, sexual harassment, sexual assault, and property damage. The DBRS has the potential to provide useful data on DB and DB reporting, such as the typical staff entering data and the number and/or types of DB occurring.
The DBRS complements the preexisting VHA policies and committees for care of veterans who display DB.1-3,14,21,24,25 The VHA Workplace Violence Prevention Program (WVPP) required facilities to submit data on DB events through a Workplace Behavioral Risk report. Data for the report were obtained from police reports, patient safety reports, DBC records, and notes in the EHR. Following implementations of DBRS, the number of DB events per year became a part of facility performance standards.
VHA is creating novel approaches to handling DB that allow health care workers to render care in a safe and effective manner guided by documented information. For example, DBCs can recommend the use of Category I Patient Record Flags (PRFs) following documented DB, which informs staff of the potential risk of DB and provides guidance on protective methods to use when meeting with the patient.2,21,24 A survey of 140 VA hospital chiefs of staff indicated that DBC procedures were related to a decrease in the rates of assaults.1 Additionally, VA provides training for staff in techniques to promote personal safety, such as identifying signs that precede DB, using verbal deescalation, and practicing therapeutic containment.
Resistance to Reporting
Many health care employees and employers are reticent to report DBs.22,31,43,45-48 Studies suggest health care organizations can cultivate a culture that is resistant to reporting DB.49,50 This complicates the ability of the health care system to design and maintain safety protocols and safer treatment plans.3,41,51 Worldwide, < 30% of DBs are reported.47 One barrier may be that supervisors may not wish to acknowledge DBs on their units or may not provide sufficient staff time for training or reporting.31,46,47 HCPs may worry that a DB report will stigmatize patients, especially those who are elderly or have cognitive impairment, brain injury, psychological illness, or developmental disability. Patients with cognitive conditions are reportedly 20% more likely to be violent toward caregivers and providers.31 A dementia diagnosis, for example, is associated with a high likelihood for DB.30,52 More than 80% of DB events displayed by patients with dementia may go unreported.26,31,50,52
Some clinicians may attribute DB to physiologic conditions that need to be treated, not reported. However, employers can face various legal liabilities if steps are not taken to protect employees.47,51 Federal and state statutes require that organizations provide a healthy and safe employment environment for workers. This requires that employers institute reasonable protective measures, such as procedures to intervene, policies on addressing DB incidents, and/or training to minimize or deescalate DB.51,53 Also, employees may sue employers if security measures are inadequate or deficient in properly investigating current and past evidence of DB or identifying vulnerabilities in the workplace. Unwillingness to investigate DB and safety-related workplace concerns have contributed to increased workplace violence and legal liability.52,53 The mission of caring and trust is consistent with assuring a safe environment.
Training and Empathetic Care
To combat cultural resistance to reporting DBs, more and perhaps different contextual approaches to education and training may be needed that address ethical dilemmas and concerns of providers. The success of training relies on administrators supporting staff in reporting DB. Training must address providers’ conflicting beliefs and assist with identifying strategies to provide the best possible care for patients who display DB.1,38 HCPs are less likely to document a DB if they feel that administrators are creating documentation that will have negative consequences for a patient. Thus, leadership is responsible for ensuring that misconceptions are dispelled through training and other efforts and information on how reported DB data will be used is communicated through strategic channels.
Education and training must consider empathic care that attempts to understand why patients behave as they do through the information gathered.55 Empathy in health care is multifaceted: It involves comprehending a patient’s viewpoint, circumstances, and feelings and the capacity to analyze whether one is comprehending these accurately in order to demonstrate supportive care.54,55
Improving patient and staff interaction once a problematic behavior is identified is the aim of empathic care. Increasing empathic care can improve compassionate, patient-centered interactions that begin once the patient seeks care. This approach has proven to decrease DB by patients with dementia and improve their care, lessen staff problems during interactions, and increase staff morale.20 Experts call for the adoption of an interpersonal approach to patient encounters, and there is evidence that creating organizational change by moving toward compassionate care can lead to a positive impact for patients.54,55
Future Studies
There are growth opportunities in utilization of the DBRS. Analysis of the DBRS database by the VA Central Office (VACO) showed that the system is underutilized by facilities across the VA system.56 In response to this current underutilization, VACO is taking steps to close these gaps through increasing training to staff and promotion of the use of the DBRS. A 2015 pilot study of VHA providers showed that > 70% of providers had experienced a DB as defined by VHA, but only 34% of them reported their most recently experienced DB within the past 12 months.20 Thus, DBRS use must be studied within the context that patient-perpetrated DB is underreported in health care organizations.5,9,29,41,43,57,58 Studies addressing national DBRS utilization patterns and the cost associated with implementing the DBRS also are needed. One study suggests that there is an association between measures of facility complexity and staff perceptions of safety, which should be considered in analyzing DBRS usage.57 Studies addressing the role of the DBRS and misconceptions that the tool may represent a punitive tool also are needed. VHA should consider how the attribution “disruptive behavior” assigns a negative connotation and leads HCPs to avoid using the DBRS. Additionally, DB reporting may increase when HCPs understand that DB reporting is part of the comprehensive, consultative strategy to provide the best care to patients.
Conclusion
Accurate reporting of DB events enables the development of strategies for multidisciplinary teams to work together to minimize hazards and to provide interventions that provide for the safe delivery of health care to all patients. Improving reporting ensures there is an accurate representation of how disruptive events impact care provided within a facility—and what types of variables may be associated with increased risk for these types of events.
Additionally, ensuring that reporting is maximized also provides the VHA with opportunities for DBCs to offer evidence-based risk assessment of violence and consultation to staff members who may benefit from improved competencies in working with patients who display DB. These potential improvements are consistent with the VHA I CARE values and will provide data that can inform recommendations for health care in other agencies/health care organizations.
Acknowledgments
This work was supported by the Center of Innovation on Disability and Rehabilitation Research (CINDRR) of the Health Services Research and Development Service, Office of Research and Development, Department of Veterans Affairs.
1. Hodgson MJ, Mohr DC, Drummond DJ, Bell M, Van Male L. Managing disruptive patients in health care: necessary solutions to a difficult problem. Am J Ind Med. 2012;55(11):1009-1017.
2. US Department of Veterans Affairs, Veterans Health Administration. VHA Directive 2010-053. Patient Record Flags. https://www.va.gov/vhapublications/ViewPublication.asp?pub_ID=2341 Published December 3, 2010. Accessed March 29, 2019.
3. US Department of Veterans Affairs, Veterans Health Administration. VHA Directive 2012-026. Sexual Assaults and Other Defined Public Safety Incidents in VHA Facilities. https://www.va.gov/vhapublications/ViewPublication.asp?pub_ID=2797. Published September 27, 2012. Accessed March 29, 2019.
4. Curyto KJ, McCurry SM, Luci K, Karlin BE, Teri L, Karel MJ. Managing challenging behaviors of dementia in veterans: identifying and changing activators and consequences using STAR-VA. J Gerontol Nurs. 2017;43(2):33-43.
5. Speroni KG, Fitch T, Dawson E, Dugan L, Atherton M. Incidence and cost of nurse workplace violence perpetrated by hospital patients or patient visitors. J Emerg Nurs. 2014;40(3):218-228.
6. Phillips JP. Workplace violence against health care workers in the United States. N Engl J Med. 2016;374(17):1661-1669.
7. Janocha JA, Smith RT. Workplace safety and health in the health care and social assistance industry, 2003–07. https://www.bls.gov/opub/mlr/cwc/workplace-safety-and-health-in-the-health-care-and-social-assistance-industry-2003-07.pdf. Published August 30, 2010. Accessed February 19, 2019.
8. US Department of Labor, Occupational Safety and Health Administration. Workplace violence in healthcare: understanding the challenge. https://www.osha.gov/Publications/OSHA3826.pdf. Published December 2015. Accessed February 19, 2019.
9. US Department of Labor, Occupational Safety and Health Administration. Prevention of Workplace Violence in Healthcare and Social Assistance. https://www.govinfo.gov/content/pkg/FR-2016-12-07/pdf/2016-29197.pdf. Accessed January 20, 2017.
10. Gerberich SG, Church TR, McGovern PM, et al. An epidemiological study of the magnitude and consequences of work related violence: the Minnesota Nurses’ Study. Occup Environ Med. 2004;61(6):495-503.
11. Sherman MF, Gershon RRM, Samar SM, Pearson JM, Canton AN, Damsky MR. Safety factors predictive of job satisfaction and job retention among home healthcare aides. J Occup Environ Med. 2008;50(12):1430-1441.
12. Karel MJ, Teri L, McConnell E, Visnic S, Karlin BE. Effectiveness of expanded implementation of STAR-VA for managing dementia-related behaviors among veterans. Gerontologist. 2016;56(1):126-134.
13. US Department of Labor, Bureau of Labor Statistics. Nonfatal occupational injuries and illnesses requiring days away from work. https://www.bls.gov/news.release/archives/osh2_11192015.htm. Published November 19, 2015.
14. Beech B, Leather P. Workplace violence in the health care sector: A review of staff training and integration of training evaluation models. Aggression Violent Behav. 2006;11(1):27-43.
15. Campbell CL, McCoy S, Burg MA, Hoffman N. Enhancing home care staff safety through reducing client aggression and violence in noninstitutional care settings: a systematic review. Home Health Care Manage Pract. 2014;26(1):3-10.
16. Gallant-Roman MA. Strategies and tools to reduce workplace violence. AAOHN J. 2008;56(11):449-454.
17. Weinberger LE, Sreenivasan S, Smee DE, McGuire J, Garrick T. Balancing safety against obstruction to health care access: an examination of behavioral flags in the VA health care system. J Threat Assess Manage. 2018;5(1):35-41.
18. Elbogen EB, Johnson SC, Wagner HR, et al. Protective factors and risk modification of violence in Iraq and Afghanistan war veterans. J Clin Psychiatry. 2012;73(6):e767-e773.
19. Karlin BE, Visnic S, McGee JS, Teri L. Results from the multisite implementation of STAR-VA: a multicomponent psychosocial intervention for managing challenging dementia-related behaviors of veterans. Psychol Serv. 2014;11(2):200-208.
20. Semeah LM, Campbell CL, Cowper DC, Peet AC. Serving our homeless veterans: patient perpetrated violence as a barrier to health care access. J Pub Nonprofit Aff. 2017;3(2):223-234.
21. Hodgson MJ, Reed R, Craig T, et al. Violence in healthcare facilities: lessons from the Veterans Health Administration. J Occup Environ Med. 2004;46(11):1158-1165.
22. Farrell GA, Bobrowski C, Bobrowski P. Scoping workplace aggression in nursing: findings from an Australian study. J Adv Nurs. 2006;55(6):778-787.
23. Barling J, Rogers AG, Kelloway EK. Behind closed doors: in-home workers’ experience of sexual harassment and workplace violence. J Occup Health Psychol. 2001;6(3):255-269.
24. Pompeii LA, Schoenfisch AL, Lipscomb HJ, Dement JM, Smith CD, Upadhyaya M. Physical assault, physical threat, and verbal abuse perpetrated against hospital workers by patients or visitors in six U.S. hospitals. Am J Ind Med. 2015;58(11):1194-1204.
25. Sippel LM, Mota NP, Kachadourian LK, et al. The burden of hostility in U.S. veterans: results from the National Health and Resilience in Veterans Study. Psychiatry Res. 2016;243(suppl C):421-430.
26. Campbell C. Patient Violence and Aggression in Non-Institutional Health Care Settings: Predictors of Reporting By Healthcare Providers [doctoral dissertation]. Orlando: University of Central Florida; 2016.
27. Galinsky T, Feng HA, Streit J, et al. Risk factors associated with patient assaults of home healthcare workers. Rehabil Nurs. 2010;35(5):206-215.
28. Campbell CL. Incident reporting by health-care workers in noninstitutional care settings. Trauma, Violence Abuse. 2017;18(4):445-456.
29. Arnetz JE, Arnetz BB. Violence towards health care staff and possible effects on the quality of patient care. Soc Sci Med. 2001;52(3):417-427.
30. Gates D, Fitzwater E, Succop P. Relationships of stressors, strain, and anger to caregiver assaults. Issues Ment Health Nurs. 2003;24(8):775-793.
31. Brillhart B, Kruse B, Heard L. Safety concerns for rehabilitation nurses in home care. Rehabil Nurs. 2004;29(6):227-229.
32. Taylor H. Patient violence against clinicians: managing the risk. Innov Clin Neurosci. 2013;10(3):40-42.
33. US Department of Veterans Affairs, Office of Public and Intergovernmental Affairs. The Joint Commission releases results of surveys of the VA health care system. https://www.va.gov/opa/pressrel/pressrelease.cfm?id=2808. Updated August 5, 2014. Accessed February 19, 2019.
34. Büssing A, Höge T. Aggression and violence against home care workers. J Occup Health Psychol. 2004;9(3):206-219.
35. Geiger-Brown J, Muntaner C, McPhaul K, Lipscomb J, Trinkoff A. Abuse and violence during home care work as predictor of worker depression. Home Health Care Serv Q. 2007;26(1):59-77.
36. Gates DM, Gillespie GL, Succop P. Violence against nurses and its impact on stress and productivity. Nurs Econ. 2011;29(2):59-66.
37. Petterson IL, Arnetz BB. Psychosocial stressors and well-being in health care workers: the impact of an intervention program. Soc Sci Med. 1998;47(11):1763-1772.
38. Arnetz JE, Arnetz BB. Implementation and evaluation of a practical intervention programme for dealing with violence towards health care workers. J Adv Nurs. 2000;31(3):668-680.
39. Arnetz JE, Hamblin L, Russell J, et al. Preventing patient-to-worker violence in hospitals: outcome of a randomized controlled intervention. J Occup Environ Med. 2017;59(1):18-27.
40. Elbogen EB, Tomkins AJ, Pothuloori AP, Scalora MJ. Documentation of violence risk information in psychiatric hospital patient charts: an empirical examination. J Am Acad Psychiatry Law. 2003;31(1):58-64.
41. Winsvold Prang I, Jelson-Jorgensen LP. Should I report? A qualitative study of barriers to incident reporting among nurses working in nursing homes. Geriatr Nurs. 2014;35(6):441-447.
42. US Department of Veterans Affairs, Office of Inspector General. Healthcare inspection: management of disruptive patient behavior at VA medical facilities. Report No. 11-02585-129. https://www.va.gov/oig/pubs/VAOIG-11-02585-129.pdf. Published March 7, 2013. Accessed February 21, 2019.
43. Lipscomb J, London M. Not Part of the Job: How to Take a Stand Against Violence in the Work Setting. Silver Spring, MD: American Nurses Association; 2015.
44. May DD, Grubbs LM. The extent, nature, and precipitating factors of nurse assault among three groups of registered nurses in a regional medical center. J Emerg Nurs. 2002;28(1):11-17.
45. Wharton TC, Ford BK. What is known about dementia care recipient violence and aggression against caregivers? J Gerontol Soc Work. 2014;57(5):460-477.
46. Brennan C, Worrall-Davies A, McMillan D, Gilbody S, House A. The hospital anxiety and depression scale: a diagnostic meta-analysis of case-finding ability. J Psychosom Res. 2010;69(4):371-378.
47. McPhaul K, Lipscomb J, Johnson J. Assessing risk for violence on home health visits. Home Healthc Nurse. 2010;28(5):278-289.
48. McPhaul KM, London M, Murrett K, Flannery K, Rosen J, Lipscomb J. Environmental evaluation for workplace violence in healthcare and social services. J Safety Res. 2008;39(2):237-250.
49. Kelly JA, Somlai AM, DiFranceisco WJ, et al. Bridging the gap between the science and service of HIV prevention: transferring effective research-based HIV prevention interventions to community AIDS service providers. Am J Public Health. 2000;90(7):1082-1088.
50. Pawlin S. Reporting violence. Emerg Nurse. 2008;16(4):16-21.
51. Brakel SJ. Legal liability and workplace violence. J Am Acad Psychiatry Law. 1998;26(4):553-562.
52. Neuman JH, Baron RA. Workplace violence and workplace aggression: evidence concerning specific forms, potential causes, and preferred targets. J Manage. 1998;24(3):391-419.
53. Ferns T, Chojnacka I. Angels and swingers, matrons and sinners: nursing stereotypes. Br J Nurs. 2005;14(19):1028-1032.
54. Mercer SW, Reynolds WJ. Empathy and quality of care. Br J Gen Pract. 2002;52(suppl):S9-S12.
55. Lee TH. An Epidemic of Empathy in Healthcare: How to Deliver Compassionate, Connected Patient Care That Creates a Competitive Advantage. Columbus, OH: McGraw-Hill Education; 2015.
56. US Department of Veterans Affairs, Veterans Health Administration. Veterans Health Administration workplace violence prevention program (WVPP): disruptive behavior reporting system utilization report. Published 2017. https://vaww.portal2.va.gov/sites/wvpp/Shared%20Documents/DBRS%20Utilization%20Reports/FY2017%20DBRS%20Quarterly%20Utilization%20Report%20(Quarter%201).pdf. [Source not verified.]
57. Campbell CL, Burg MA, Gammonley D. Measures for incident reporting of patient violence and aggression towards healthcare providers: a systematic review. Aggression Violent Behav. 2015;25(part B):314-322.
58. Carney PT, West P, Neily J, Mills PD, Bagian JP. The effect of facility complexity on perceptions of safety climate in the operating room: size matters. Am J Med Qual. 2010;25(6):457-461.
Effects of Process Improvement on Guideline-Concordant Cardiac Enzyme Testing
In recent years, driven by accelerating health care costs and a desire for improved health care value, major specialty group guidelines have incorporated resource utilization and value calculations into their recommendations. High-value care enhances outcomes, safety, and patient satisfaction at a reasonable cost. As one example, the American College of Cardiology (ACC) recently published a consensus statement on its clinical practice guidelines with a specific focus on cost and value.1 The statement acknowledges the difficulty of incorporating value into clinical decision making but stresses the need for increased transparency and consistency to boost value in everyday practice.
Chest pain and related symptoms were the second leading principal reason for emergency department visits in the US in 2011, with 14% of those patients undergoing cardiac enzyme testing.2 The ACC guidelines advocate troponin as the preferred laboratory test for the initial evaluation of acute coronary syndrome (ACS); fractionated creatine kinase (CK-MB) is an acceptable alternative only when a cardiac troponin test is not available.3 Furthermore, troponin should be obtained no more than 3 times for the initial evaluation of a single event, and further trending provides no additional benefit or prognostic information.
A recent study from an academic hospital showed that process improvement interventions focused on eliminating unnecessary cardiac enzyme testing led to a 1-year cost savings of $1.25 million while increasing the rate of ACS diagnosis.4 Common clinical practice at Naval Medical Center Portsmouth (NMCP) in Virginia still routinely includes both troponin and a CK panel composed of CK, CK-MB, and a calculated CK-MB/CK index. Our study implemented at NMCP the quality improvement efforts described by Larochelle and colleagues4 and aimed to determine the impact of interventions designed to improve ordering practices and reduce the cost of cardiac enzyme testing.
Methods
The intervention focused primarily on the ordering practices of the emergency medicine department (EMD), internal medicine (IM) inpatient services, and cardiology inpatient services. The specific interventions were: (1) removal of the CK panel from the chest pain order set in the EMD electronic health record (EHR); (2) removal of the CK panel from the inpatient cardiology order set; (3) education of staff on the changes in CK panel utility via direct communication during IM academic seminars; (4) education of nursing staff who order laboratory tests on behalf of physicians on the cardiology service at the morning and evening huddles; and (5) addition of a “max of 3 tests indicated” comment to the troponin test ordering page in the inpatient EHR.
Data Source
The process improvement interventions were considered exempt from institutional review board (IRB) approval; however, we obtained expedited IRB approval with waiver of consent for the research aspect of the project. We obtained clinical administrative data from the Military Health System Data Repository (MDR). We identified all adult patients aged ≥ 18 years who had a troponin test, CK-MB, or both drawn at NMCP on the following services: the EMD, IM, and cardiology. A troponin or CK-MB test was defined using Current Procedural Terminology (CPT) codes and unique Logical Observation Identifiers Names and Codes (LOINC).
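Conceptually, the cohort selection reduces to filtering laboratory records by test code, clinical service, and age. The Python sketch below illustrates that filter; the column names and code sets are hypothetical placeholders, not the MDR schema or the specific CPT/LOINC codes used in the study.

```python
import pandas as pd

# Hypothetical code sets and column names; the study used specific CPT and LOINC
# codes that are not reproduced here.
TROPONIN_CODES = {"TROPONIN_CODE_A", "TROPONIN_CODE_B"}
CKMB_CODES = {"CKMB_CODE_A"}
TARGET_SERVICES = {"EMD", "IM", "CARDIOLOGY"}

def select_cohort(labs: pd.DataFrame) -> pd.DataFrame:
    """Keep records for adults with a troponin or CK-MB test drawn on a target service."""
    is_cardiac_enzyme = labs["test_code"].isin(TROPONIN_CODES | CKMB_CODES)
    on_target_service = labs["service"].isin(TARGET_SERVICES)
    is_adult = labs["age_years"] >= 18
    return labs.loc[is_cardiac_enzyme & on_target_service & is_adult]
```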
Measures
The study was divided into 3 periods: the preintervention period from August 1, 2013 to July 31, 2014; the intervention period from August 1, 2014 to January 31, 2015; and the postintervention period from February 1, 2015 to January 31, 2016.
The primary outcomes measured were the frequency of guideline concordance and the total cost of tests ordered per month, using the Centers for Medicare and Medicaid Services (CMS) clinical laboratory fee schedule rates of $13.40 for troponin and $16.17 for CK-MB.5 Concordance was defined as ≤ 3 troponin tests and no CK-MB tests ordered during 1 encounter for a patient without an ACS diagnosis in the preceding 7 days. Because CK-MB has faster cellular release kinetics than troponin, the test has utility in evaluating new or worsening chest pain in the setting of a recent myocardial infarction (MI). Therefore, we excluded any patient who had an MI in the 7 days preceding an order for either a CK-MB or troponin test. Additionally, the number of CK-MB and troponin tests ordered per patient encounter (hereafter referred to as an episode) was measured. Finally, we measured the monthly number of ACS diagnoses and the prevalence of ACS as a percentage of episodes.
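For clarity, the concordance rule above can be expressed as a small classifier. The Python sketch below is illustrative only and assumes a simplified per-episode record; the Episode fields and the classify_concordance helper are hypothetical names, not part of the study's actual data pipeline.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import List, Optional

@dataclass
class Episode:
    troponin_count: int
    ckmb_count: int
    first_order_date: date
    prior_mi_dates: List[date] = field(default_factory=list)  # prior ACS/MI diagnosis dates

def classify_concordance(ep: Episode) -> Optional[bool]:
    """True = concordant, False = not concordant, None = excluded (MI in prior 7 days)."""
    window_start = ep.first_order_date - timedelta(days=7)
    if any(window_start <= d <= ep.first_order_date for d in ep.prior_mi_dates):
        return None  # excluded from the concordance denominator, per the exclusion rule
    return ep.ckmb_count == 0 and ep.troponin_count <= 3

# Example: 2 troponins, no CK-MB, no recent MI -> concordant
assert classify_concordance(Episode(2, 0, date(2015, 3, 1))) is True
```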
Data Analysis
Descriptive statistics were used to calculate population demographics of age group, sex, beneficiary category, sponsor service, and clinical setting. Monthly data were grouped into the preintervention and postintervention periods. The analysis was performed using t tests to compare mean values and CIs before and after the intervention. Simple linear regression with attention to correlation was used to create best fit lines with confidence bands before and after the intervention. Interrupted time series (ITS) regression was used to describe all data points throughout the study. Consistency between these various methods was verified. Mean values and CIs were reported from the t tests. Statistical significance was reported when appropriate. Equations and confidence predictions on the simple linear regressions were produced and reported. These were used to identify values at the start, midpoint, and end of the pre- and postintervention periods.
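As one way to picture the analysis, the sketch below fits a basic segmented (interrupted time series) regression on monthly data using statsmodels. It is a minimal illustration under assumed column names ('outcome', 'month_index', 'post'), not the authors' analysis code, and it omits the t tests and simple linear regressions that were also performed.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_interrupted_time_series(monthly: pd.DataFrame):
    """monthly has one row per month with columns:
    'outcome'     - the monthly measure (eg, % concordant episodes or cost)
    'month_index' - 0, 1, 2, ... across the study
    'post'        - 0 for preintervention months, 1 for postintervention months
    """
    df = monthly.copy()
    first_post = df.loc[df["post"] == 1, "month_index"].min()
    # Months elapsed since the intervention started; 0 during the preintervention period.
    df["months_post"] = (df["month_index"] - first_post).clip(lower=0)
    # 'post' estimates the immediate level change; 'months_post' the change in slope.
    return smf.ols("outcome ~ month_index + post + months_post", data=df).fit()
```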
Results
There were a total of 6,281 patients in the study population. More patients were seen during the postintervention period than in the preintervention period. The mean age of patients was slightly higher during the preintervention period (Table 1).
Guideline Concordance
To determine whether ordering practices for cardiac enzyme testing improved, we assessed the changes in the frequency of guideline concordance during the pre- and postintervention period. On average during the preintervention year, the percentage of tests ordered that met guideline concordance was 10.1% (95% CI, 7.4%-12.9%), increasing by 0.80% (95% CI, 0.17%-1.42%) each month.
Costs
We assessed changes in total dollars spent on cardiac enzyme testing during the pre- and postintervention periods. During the preintervention year, $9,400 (95% CI, $8,700-$10,100) was spent on average each month, which did not change significantly throughout the period. During the postintervention year, the cost was stable at $5,000 (95% CI, $4,600-$5,300) on average each month, a reduction of $4,400 (95% CI, $3,700-$5,100) (Figure 2).
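As a rough cross-check, monthly spending follows directly from multiplying test counts by the CMS fees cited in the Methods. The snippet below uses the preintervention monthly test counts reported in the next subsection (341 troponin, 297 CK-MB) and approximately reproduces the ~$9,400 preintervention monthly average; it is an illustration, not the study's cost accounting.

```python
TROPONIN_FEE = 13.40  # CMS clinical laboratory fee schedule rate cited in Methods
CKMB_FEE = 16.17

def monthly_cost(n_troponin: int, n_ckmb: int) -> float:
    return n_troponin * TROPONIN_FEE + n_ckmb * CKMB_FEE

# Preintervention monthly averages reported in the next subsection: 341 troponin, 297 CK-MB
print(round(monthly_cost(341, 297), 2))  # 9371.89, close to the reported ~$9,400 per month
```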
CK-MB and Troponin Tests per Patient
To further assess ordering practices for cardiac enzyme testing, we compared the changes in the monthly number of tests and the average number of CK-MB and troponin tests ordered per episode pre- and postintervention. On average during the preintervention year, 297 CK-MB tests (95% CI, 278-315) were run per month, with an average of 1.21 CK-MB tests (95% CI, 1.15-1.27) per episode (Table 2, Figure 3).
The changes in troponin testing were not as dramatic. The counts of tests each month remained similar, with a preintervention year average of 341 (95% CI, 306-377) and postintervention year average of 310 (95% CI, 287-332), which were not statistically different. However, there was a statistically significant decrease in the number of tests per episode. During the preintervention year, 1.38 troponin tests (95% CI, 1.31-1.45) were ordered per patient on average. This dropped by 0.17 (95% CI, 0.09-0.24) to the postintervention average of 1.21 (95% CI, 1.17-1.25) (Table 2, Figure 4).
ACS Prevalence
To determine whether there was an impact on ACS diagnoses, we examined the number of ACS diagnoses and their prevalence among visits before and after the intervention. During the preintervention year, the average monthly number of diagnoses was 29.7 (95% CI, 26.1-33.2), and the prevalence of ACS was 0.56% (95% CI, 0.48%-0.63%) of all episodes. Although the monthly rate was statistically decreasing by 0.022% (95% CI, 0.003-0.41), this has little meaning because the level of correlation (r² = 0.2522, not displayed) was poor due to the essentially nonexistent correlation in the number of visits each month (r² = 0.0112, not displayed). During the postintervention year, the average number of diagnoses was 32.2 (95% CI, 27.9-36.6), and the prevalence of ACS was 0.62% (95% CI, 0.54%-0.65%). Neither of these values changed significantly between the pre- and postintervention periods. All ICD-9 and ICD-10 diagnosis codes used for the analysis are available upon request from the authors.
Discussion
Our data demonstrate the ability of simple process improvement interventions to decrease unnecessary testing in the workup of ACS, increasing the rate of guideline-concordant testing by > 70% at a single military treatment facility (MTF). In particular, with the now widespread use of EHRs, the order set presents a high-yield target for process improvement that can be implemented easily and durably. We had expected the intervention's efficacy to decline during the staff turnover of summer 2015, when dedicated teaching sessions were no longer being held; despite that, the intervention remained effective. This durability is largely attributable to the hardwired interventions (mainly the order set changes) but may also indicate an institutional memory that can take hold after an initial concerted effort.
We reduced the estimated preintervention annual cost of $113,000 by $53,000 (95% CI, $42,000-$64,000). Although on a much smaller scale than the study by Larochelle and colleagues, our study achieved a nearly 50% reduction in the total cost of initial testing for possible ACS and a > 80% reduction in unnecessary CK-MB testing.4 This result was achieved with no statistically significant change in the prevalence of ACS. The cost reduction does not account for the labor costs of clinically following up on and addressing additional unnecessary laboratory results. The estimated cost of the intervention was limited to the time required to educate residents, interns, and nursing staff and to implement the automated, reflexive laboratory ordering process.
Unique to our study, we also demonstrated an intervention that satisfied all the major stakeholders in the ordering of these laboratory tests. By instituting reflexive ordering of CK-MB tests for positive troponins, we obtained the support of the facility's interventional cardiology department, which finds value in those data. Given the time-sensitive nature of an ACS diagnosis, reflexive ordering minimized the delay in receiving these data while still greatly reducing the number of tests performed. That said, if the current trend away from CK-MB in favor of exclusively testing troponin continues, removing the reflexive ordering protocol would be an easy follow-on intervention.
Limitations
Our study had several limitations. First, reporting errors due to improper or insufficient medical coding as well as data entry errors may exist within the MDR; therefore, the results of this analysis may be over- or underestimated. Specifically, CPT codes for troponin and CK-MB were available in only 1 of the 2 data sets used for this study, which primarily contains outpatient encounters. For this reason, most of the laboratory testing comes from the EMD rather than from inpatient services. However, because we excluded all patients who eventually had an ACS diagnosis (patients who likely had more inpatient time and a better indication for repeat troponin testing), we feel that our intervention was still thoroughly evaluated. Second, the number of tests drawn per patient was significantly < 2, the expected minimum number of tests to rule out ACS in patients with appropriate symptoms.
This study was not designed to identify the sources of variation from the guidelines. Many patients had only 1 test, which we feel represents an opportunity for future study to identify other ways cardiac enzyme testing is being used clinically; these tests might be ordered for patients without convincing symptoms and signs of coronary syndromes or for patients with other primary problems. Third, by using the ITS analysis, we assumed that the outcome during each intervention period followed a linear pattern; however, changes may follow a nonlinear pattern over a longer period. Finally, our intervention was limited to a single MTF, which may limit generalizability to other facilities across military medicine. However, we feel this study can serve as a guide for other MTFs as well as US Department of Veterans Affairs facilities that could institute similar process improvements.
Conclusion
We made easily implemented and durable process improvement interventions that changed institution-wide ordering practices. These changes dramatically increased the rate of guideline-concordant testing, decreasing cost and furthering the goal of high-value medical care.
1. Anderson JL, Heidenreich PA, Barnett PG, et al; ACC/AHA Task Force on Performance Measures; ACC/AHA Task Force on Practice Guidelines. ACC/AHA statement on cost/value methodology in clinical practice guidelines and performance measures: a report of the American College of Cardiology/American Heart Association Task Force on Performance Measures and Task Force on Practice Guidelines. Circulation. 2014;129(22):2329-2345.
2. Centers for Disease Control and Prevention, National Center for Health Statistics. National hospital ambulatory medical care survey: 2010 emergency department summary tables. https://www.cdc.gov/nchs/data/ahcd/nhamcs_emergency/2010_ed_web_tables.pdf. Accessed March 15, 2019.
3. Morrow DA, Cannon CP, Jesse RL, et al; National Academy of Clinical Biochemistry. National Academy of Clinical Biochemistry Laboratory Medicine Practice Guidelines: Clinical characteristics and utilization of biochemical markers in acute coronary syndromes. Circulation. 2007;115(13):e356-e375.
4. Larochelle MR, Knight AM, Pantle H, Riedel S, Trost JC. Reducing excess cardiac biomarker testing at an academic medical center. J Gen Intern Med. 2014;29(11):1468-1474.
5. Centers for Medicare and Medicaid Services. 2016 clinical laboratory fee schedule. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/ClinicalLabFeeSched/Clinical-Laboratory-Fee-Schedule-Files-Items/16CLAB.html?DLPage=1&DLEntries=10&DLSort=2&DLSortDir=descending. Accessed March 15, 2019.
In recent years, driven by accelerating health care costs and desire for improved health care value, major specialty group guidelines have incorporated resource utilization and value calculations into their recommendations. High-value care has the characteristics of enhancing outcomes, safety, and patient satisfaction at a reasonable cost. As one example, the American College of Cardiology (ACC) recently published a consensus statement on its clinical practice guidelines with a specific focus on cost and value.1 This guideline acknowledges the difficulty in incorporating value into clinical decision making but stresses a need for increased transparency and consistency to boost value in everyday practice.
Chest pain and related symptoms were listed as the second leading principle reasons for emergency department visits in the US in 2011 with 14% of patients undergoing cardiac enzyme testing.2 The ACC guidelines advocate use of troponin as the preferred laboratory test for the initial evaluation of acute coronary syndrome (ACS). Fractionated creatine kinase (CK-MB) is an acceptable alternative only when a cardiac troponin test is not available.3 Furthermore, troponins should be obtained no more than 3 times for the initial evaluation of a single event, and further trending provides no additional benefit or prognostic information.
A recent study from an academic hospital showed that process improvement interventions focused on eliminating unnecessary cardiac enzyme testing led to a 1-year cost savings of $1.25 million while increasing the rate of ACS diagnosis.4 Common clinical practice at Naval Medical Center Portsmouth (NMCP) in Virginia still routinely includes both troponin as well as a CK panel comprised of CK, CK-MB, and a calculated CK-MB/CK index. Our study focuses on the implementation of quality improvement efforts described by Larochelle and colleagues at NMCP.4 The study aimed to determine the impact of implementing interventions designed to improve the ordering practices and reduce the cost of cardiac enzyme testing.
Methods
The primary focus of the intervention was on ordering practices of the emergency medicine department (EMD), internal medicine (IM) inpatient services, and cardiology inpatient services. Specific interventions were: (1) removal of the CK panel from the chest pain order set in the EMD electronic health record (EHR); (2) removal of the CK panel from the inpatient cardiology order set; (3) education of staff on the changes in CK panel utility via direct communication during IM academic seminars; (4) education of nursing staff ordering laboratory results on behalf of physicians on the cardiology service at the morning and evening huddles; and (5) addition of “max of 3 tests indicated” comment to the inpatient EHR ordering page of the troponin test
Data Source
The process improvement interventions were considered exempt from institutional review board (IRB) approval; however, we obtained expedited IRB approval with waiver of consent for the research aspect of the project. We obtained clinical administrative data from the Military Health System Data Repository (MDR). We identified all adult patients aged ≥ 18 years who had a troponin test, CK-MB, or both drawn at NMCP on the following services: the EMD, IM, and cardiology. A troponin or CK-MB test was defined using Current Procedural Terminology (CPT) codes and unique Logical Observation Identifiers Names and Codes (LOINC).
Measures
The study was divided into 3 periods: the preintervention period from August 1, 2013 to July 31, 2014; the intervention period from August 1, 2014 to January 31, 2015; and the postintervention period February 1, 2015 to January 31, 2016.
The primary outcomes measured were the frequency of guideline concordance and total costs for tests ordered per month using the Centers for Medicare and Medicaid Services (CMS) clinical laboratory fee schedule of $13.40 for troponin and $16.17 for CK-MB.5Concordance was defined as ≤ 3 troponin tests and no CK-MB tests ordered during 1 encounter for a patient without an ACS diagnosis in the preceding 7 days. Due to faster cellular release kinetics of CK-MB compared with that of troponin, this test has utility in evaluating new or worsening chest pain in the setting of a recent myocardial infarction (MI). Therefore, we excluded any patient who had a MI within the preceding 7 days of an order for either CK-MB or troponin tests. Additionally, the number of tests, both CK-MB and troponin, ordered per patient encounter (hereafter referred to as an episode) were measured. Finally, we measured the monthly prevalence of ACS diagnosis and percentage of visits having that diagnosis.
Data Analysis
Descriptive statistics were used to calculate population demographics of age group, sex, beneficiary category, sponsor service, and clinical setting. Monthly data were grouped into the preintervention and postintervention periods. The analysis was performed using t tests to compare mean values and CIs before and after the intervention. Simple linear regression with attention to correlation was used to create best fit lines with confidence bands before and after the intervention. Interrupted time series (ITS) regression was used to describe all data points throughout the study. Consistency between these various methods was verified. Mean values and CIs were reported from the t tests. Statistical significance was reported when appropriate. Equations and confidence predictions on the simple linear regressions were produced and reported. These were used to identify values at the start, midpoint, and end of the pre- and postintervention periods.
Results
There were a total of 6,281 patients in the study population. More patients were seen during the postintervention period than in the preintervention period. The mean age of patients was slightly higher during the preintervention period (Table 1).
Guideline Concordance
To determine whether ordering practices for cardiac enzyme testing improved, we assessed the changes in the frequency of guideline concordance during the pre- and postintervention period. On average during the preintervention year, the percentage of tests ordered that met guideline concordance was 10.1% (95% CI, 7.4%-12.9%), increasing by 0.80% (95% CI, 0.17%-1.42%) each month.
Costs
We assessed changes in total dollars spent on cardiac enzyme testing during the pre- and postintervention periods. During the preintervention year, $9,400 (95% CI, $8,700-$10,100) was spent on average each month, which did not change significantly throughout the period. During the postintervention year, the cost was stable at $5,000 (95% CI, $4,600-$5,300) on average each month, a reduction of $4,400 (95% CI, $3,700-$5,100) (Figure 2).
CK-MB and Troponin Tests per Patient
To further assess ordering practices for cardiac enzyme testing, we compared the changes in the monthly number of tests and the average number of CK-MB and troponin tests ordered per episode pre- and postintervention. On average during the preintervention year, 297 tests (95% CI, 278-315) were run per month, with an average of 1.21 CK tests (95% CI, 1.15-1.27) per episode (Table 2, Figure 3).
The changes in troponin testing were not as dramatic. The counts of tests each month remained similar, with a preintervention year average of 341 (95% CI, 306-377) and postintervention year average of 310 (95% CI, 287-332), which were not statistically different. However, there was a statistically significant decrease in the number of tests per episode. During the preintervention year, 1.38 troponin tests (95% CI, 1.31-1.45) were ordered per patient on average. This dropped by 0.17 (95% CI, 0.09-0.24) to the postintervention average of 1.21 (95% CI, 1.17-1.25) (Table 2, Figure 4).
ACS Prevalence
To determine whether there was an impact on ACS diagnoses, we looked at the numbers of ACS diagnoses and their prevalence among visits before and after the intervention. During the preintervention year, the average monthly number of diagnoses was 29.7 (95% CI, 26.1-33.2), and prevalence of ACS was 0.56% (95% CI, 0.48%-0.63%) of all episodes. Although the monthly rate was statistically decreasing by 0.022% (95% CI, 0.003-0.41), this has little meaning since the level of correlation (r2 = 0.2522, not displayed) was poor due to the essentially nonexistent correlation in number of visits each month (r2 = 0.0112, not displayed). During the postintervention year, the average number of diagnoses was 32.2 (95% CI, 27.9-36.6), and the prevalence of ACS was 0.62% (95% CI, 0.54-0.65). Neither of these values changed significantly between the pre- and postintervention period. All ICD-9 and ICD-10 diagnosis codes used for the analysis are available upon request from the authors.
Discussion
Our data demonstrate the ability of simple process improvement interventions to decrease unnecessary testing in the workup of ACS, increasing the rate of guideline concordant testing by > 70% at a single military treatment facility (MTF). In particular, with the now widespread use of EHR, the order set presents a high-yield target for process improvement in an easily implemented, durable fashion. We had expected to see some decrease in the efficacy of the intervention at a time of staff turnover in the summer of 2015 because ongoing dedicated teaching sessions were not performed. Despite that, the intervention remained effective without further dedicated teaching sessions. This outcome was certainly attributable to the hardwired interventions made (mainly via order sets), but possibly indicates an institutional memory that can take hold after an initial concerted effort is made.
We reduced the estimated preintervention annual cost of $113,000 by $53,000 (95% CI, $42,000-$64,000). Although on a much smaller scale than the study by Larochelle, our study represents a nearly 50% reduction in the total cost of initial testing for possible ACS and a > 80% reduction in unnecessary CK-MB testing.4 This result was achieved with no statistical change in the prevalence of ACS. The cost reduction does not account for the labor costs to clinically follow-up and address additional unnecessary lab results. The estimated cost of intervention was limited to the time required to educate residents, interns, and nursing staff as well as the implementation of the automated, reflexive laboratory results ordering process.
Unique to our study, we also demonstrated an intervention that satisfied all the major stakeholders in the ordering of these laboratory results. By instituting the reflexive ordering of CK-MB tests for positive troponins, we obtained the support of the facility’s interventional cardiology department, which finds value in that data. Appreciating the time-sensitive nature of an ACS diagnosis, the reflexive ordering minimized the delay in receiving these data while still greatly reducing the number of tests performed. That being said, if the current trend away from CK-MB in favor of exclusively testing troponin continues, removing the reflexive ordering for positive laboratory results protocol would be an easy follow-on intervention.
Limitations
Our study presented several limitations. First, reporting errors due to improper or insufficient medical coding as well as data entry errors may exist within the MDR; therefore, the results of this analysis may be over- or underestimated. Specifically, CPT codes for troponin and CK-MB were available only in 1 of the 2 data sets used for this study, which primarily contains outpatient patient encounters. For this reason, most of the laboratory testing comes from the EMD rather than from inpatient services. However, because we excluded all patients who eventually had an ACS diagnosis (patients who likely had more inpatient time and better indication for repeat troponin), we feel that our intervention was still thoroughly investigated. Second, the number of tests drawn per patient was significantly < 2, the expected minimum number of tests to rule out ACS in patients with appropriate symptoms.
This study was not designed to answer the source of variation from guidelines. Many patients had only 1 test, which we feel represents an opportunity for future study to identify other ways cardiac enzyme testing is being used clinically. These tests might be used for patients without convincing symptoms and signs of coronary syndromes or for patients with other primary problems. Third, by using the ITS analysis, we assumed that the outcome during each intervention period follows a linear pattern. However, changes may follow a nonlinear pattern over a long period. Finally, our intervention was limited to only a single MTF, which may limit generalizability to other facilities across military medicine. However, we feel this study should serve as a guide for other MTFs as well as US Department of Veterans Affairs facilities that could institute similar process improvements.
Conclusion
We made easily implemented and durable process improvement interventions that changed institution-wide ordering practices. These changes dramatically increased the rate of guideline-concordant testing, decreasing cost and furthering the goal of high-value medical care.
In recent years, driven by accelerating health care costs and desire for improved health care value, major specialty group guidelines have incorporated resource utilization and value calculations into their recommendations. High-value care has the characteristics of enhancing outcomes, safety, and patient satisfaction at a reasonable cost. As one example, the American College of Cardiology (ACC) recently published a consensus statement on its clinical practice guidelines with a specific focus on cost and value.1 This guideline acknowledges the difficulty in incorporating value into clinical decision making but stresses a need for increased transparency and consistency to boost value in everyday practice.
Chest pain and related symptoms were listed as the second leading principle reasons for emergency department visits in the US in 2011 with 14% of patients undergoing cardiac enzyme testing.2 The ACC guidelines advocate use of troponin as the preferred laboratory test for the initial evaluation of acute coronary syndrome (ACS). Fractionated creatine kinase (CK-MB) is an acceptable alternative only when a cardiac troponin test is not available.3 Furthermore, troponins should be obtained no more than 3 times for the initial evaluation of a single event, and further trending provides no additional benefit or prognostic information.
A recent study from an academic hospital showed that process improvement interventions focused on eliminating unnecessary cardiac enzyme testing led to a 1-year cost savings of $1.25 million while increasing the rate of ACS diagnosis.4 Common clinical practice at Naval Medical Center Portsmouth (NMCP) in Virginia still routinely includes both troponin as well as a CK panel comprised of CK, CK-MB, and a calculated CK-MB/CK index. Our study focuses on the implementation of quality improvement efforts described by Larochelle and colleagues at NMCP.4 The study aimed to determine the impact of implementing interventions designed to improve the ordering practices and reduce the cost of cardiac enzyme testing.
Methods
The primary focus of the intervention was on ordering practices of the emergency medicine department (EMD), internal medicine (IM) inpatient services, and cardiology inpatient services. Specific interventions were: (1) removal of the CK panel from the chest pain order set in the EMD electronic health record (EHR); (2) removal of the CK panel from the inpatient cardiology order set; (3) education of staff on the changes in CK panel utility via direct communication during IM academic seminars; (4) education of nursing staff ordering laboratory results on behalf of physicians on the cardiology service at the morning and evening huddles; and (5) addition of “max of 3 tests indicated” comment to the inpatient EHR ordering page of the troponin test
Data Source
The process improvement interventions were considered exempt from institutional review board (IRB) approval; however, we obtained expedited IRB approval with waiver of consent for the research aspect of the project. We obtained clinical administrative data from the Military Health System Data Repository (MDR). We identified all adult patients aged ≥ 18 years who had a troponin test, CK-MB, or both drawn at NMCP on the following services: the EMD, IM, and cardiology. A troponin or CK-MB test was defined using Current Procedural Terminology (CPT) codes and unique Logical Observation Identifiers Names and Codes (LOINC).
Measures
The study was divided into 3 periods: the preintervention period from August 1, 2013 to July 31, 2014; the intervention period from August 1, 2014 to January 31, 2015; and the postintervention period February 1, 2015 to January 31, 2016.
The primary outcomes measured were the frequency of guideline concordance and total costs for tests ordered per month using the Centers for Medicare and Medicaid Services (CMS) clinical laboratory fee schedule of $13.40 for troponin and $16.17 for CK-MB.5Concordance was defined as ≤ 3 troponin tests and no CK-MB tests ordered during 1 encounter for a patient without an ACS diagnosis in the preceding 7 days. Due to faster cellular release kinetics of CK-MB compared with that of troponin, this test has utility in evaluating new or worsening chest pain in the setting of a recent myocardial infarction (MI). Therefore, we excluded any patient who had a MI within the preceding 7 days of an order for either CK-MB or troponin tests. Additionally, the number of tests, both CK-MB and troponin, ordered per patient encounter (hereafter referred to as an episode) were measured. Finally, we measured the monthly prevalence of ACS diagnosis and percentage of visits having that diagnosis.
Data Analysis
Descriptive statistics were used to calculate population demographics of age group, sex, beneficiary category, sponsor service, and clinical setting. Monthly data were grouped into the preintervention and postintervention periods. The analysis was performed using t tests to compare mean values and CIs before and after the intervention. Simple linear regression with attention to correlation was used to create best fit lines with confidence bands before and after the intervention. Interrupted time series (ITS) regression was used to describe all data points throughout the study. Consistency between these various methods was verified. Mean values and CIs were reported from the t tests. Statistical significance was reported when appropriate. Equations and confidence predictions on the simple linear regressions were produced and reported. These were used to identify values at the start, midpoint, and end of the pre- and postintervention periods.
Results
There were a total of 6,281 patients in the study population. More patients were seen during the postintervention period than in the preintervention period. The mean age of patients was slightly higher during the preintervention period (Table 1).
Guideline Concordance
To determine whether ordering practices for cardiac enzyme testing improved, we assessed the changes in the frequency of guideline concordance during the pre- and postintervention period. On average during the preintervention year, the percentage of tests ordered that met guideline concordance was 10.1% (95% CI, 7.4%-12.9%), increasing by 0.80% (95% CI, 0.17%-1.42%) each month.
Costs
We assessed changes in total dollars spent on cardiac enzyme testing during the pre- and postintervention periods. During the preintervention year, $9,400 (95% CI, $8,700-$10,100) was spent on average each month, which did not change significantly throughout the period. During the postintervention year, the cost was stable at $5,000 (95% CI, $4,600-$5,300) on average each month, a reduction of $4,400 (95% CI, $3,700-$5,100) (Figure 2).
CK-MB and Troponin Tests per Patient
To further assess ordering practices for cardiac enzyme testing, we compared the changes in the monthly number of tests and the average number of CK-MB and troponin tests ordered per episode pre- and postintervention. On average during the preintervention year, 297 tests (95% CI, 278-315) were run per month, with an average of 1.21 CK tests (95% CI, 1.15-1.27) per episode (Table 2, Figure 3).
The changes in troponin testing were not as dramatic. The counts of tests each month remained similar, with a preintervention year average of 341 (95% CI, 306-377) and postintervention year average of 310 (95% CI, 287-332), which were not statistically different. However, there was a statistically significant decrease in the number of tests per episode. During the preintervention year, 1.38 troponin tests (95% CI, 1.31-1.45) were ordered per patient on average. This dropped by 0.17 (95% CI, 0.09-0.24) to the postintervention average of 1.21 (95% CI, 1.17-1.25) (Table 2, Figure 4).
ACS Prevalence
To determine whether there was an impact on ACS diagnoses, we examined the number of ACS diagnoses and their prevalence among visits before and after the intervention. During the preintervention year, the average monthly number of diagnoses was 29.7 (95% CI, 26.1-33.2), and the prevalence of ACS was 0.56% (95% CI, 0.48%-0.63%) of all episodes. Although the monthly rate was decreasing by a statistically significant 0.022% (95% CI, 0.003-0.41), this finding has little meaning because the correlation was poor (r2 = 0.2522, not displayed), reflecting the essentially nonexistent correlation in the number of visits each month (r2 = 0.0112, not displayed). During the postintervention year, the average number of diagnoses was 32.2 (95% CI, 27.9-36.6), and the prevalence of ACS was 0.62% (95% CI, 0.54%-0.65%). Neither of these values changed significantly between the pre- and postintervention periods. All ICD-9 and ICD-10 diagnosis codes used for the analysis are available upon request from the authors.
Discussion
Our data demonstrate the ability of simple process improvement interventions to decrease unnecessary testing in the workup of ACS, increasing the rate of guideline-concordant testing by > 70% at a single military treatment facility (MTF). In particular, with the now widespread use of the EHR, the order set presents a high-yield target for process improvement in an easily implemented, durable fashion. We had expected to see some decrease in the efficacy of the intervention at the time of staff turnover in the summer of 2015, because ongoing dedicated teaching sessions were not performed. Despite that, the intervention remained effective without further dedicated teaching sessions. This durability is largely attributable to the hardwired interventions (mainly the order sets) but may also indicate an institutional memory that can take hold after an initial concerted effort.
We reduced the estimated preintervention annual cost of $113,000 by $53,000 (95% CI, $42,000-$64,000). Although on a much smaller scale than the study by Larochelle and colleagues, ours achieved a nearly 50% reduction in the total cost of initial testing for possible ACS and a > 80% reduction in unnecessary CK-MB testing.4 This result was achieved with no statistical change in the prevalence of ACS. The cost reduction does not account for the labor costs of clinically following up on and addressing additional unnecessary laboratory results. The estimated cost of the intervention was limited to the time required to educate residents, interns, and nursing staff as well as to implement the automated, reflexive laboratory ordering process.
Unique to our study, we also demonstrated an intervention that satisfied all the major stakeholders in the ordering of these laboratory tests. By instituting the reflexive ordering of CK-MB tests for positive troponins, we obtained the support of the facility’s interventional cardiology department, which finds value in those data. Appreciating the time-sensitive nature of an ACS diagnosis, the reflexive ordering minimized the delay in receiving these data while still greatly reducing the number of tests performed. That said, if the current trend away from CK-MB in favor of exclusively testing troponin continues, removing the reflexive ordering protocol for positive results would be an easy follow-on intervention.
Limitations
Our study had several limitations. First, reporting errors due to improper or insufficient medical coding, as well as data entry errors, may exist within the MDR; therefore, the results of this analysis may be over- or underestimated. Specifically, CPT codes for troponin and CK-MB were available in only 1 of the 2 data sets used for this study, which primarily contains outpatient encounters. For this reason, most of the laboratory testing comes from the EMD rather than from inpatient services. However, because we excluded all patients who eventually had an ACS diagnosis (patients who likely had more inpatient time and better indication for repeat troponin), we feel that the effect of our intervention was still adequately captured. Second, the number of tests drawn per patient was significantly < 2, the expected minimum number of tests to rule out ACS in patients with appropriate symptoms.
This study was not designed to identify the source of variation from the guidelines. Many patients had only 1 test, which we feel represents an opportunity for future study to identify other ways cardiac enzyme testing is being used clinically. These tests might be used for patients without convincing symptoms and signs of coronary syndromes or for patients with other primary problems. Third, by using the ITS analysis, we assumed that the outcome during each intervention period follows a linear pattern; however, changes may follow a nonlinear pattern over a long period. Finally, our intervention was limited to a single MTF, which may limit generalizability to other facilities across military medicine. However, we feel this study should serve as a guide for other MTFs as well as US Department of Veterans Affairs facilities that could institute similar process improvements.
Conclusion
We made easily implemented and durable process improvement interventions that changed institution-wide ordering practices. These changes dramatically increased the rate of guideline-concordant testing, decreasing cost and furthering the goal of high-value medical care.
1. Anderson JL, Heidenreich PA, Barnett PG, et al; ACC/AHA Task Force on Performance Measures; ACC/AHA Task Force on Practice Guidelines. ACC/AHA statement on cost/value methodology in clinical practice guidelines and performance measures: a report of the American College of Cardiology/American Heart Association Task Force on Performance Measures and Task Force on Practice Guidelines. Circulation. 2014;129(22):2329-2345.
2. Centers for Disease Control and Prevention, National Center for Health Statistics. National hospital ambulatory medical care survey: 2010 emergency department summary tables. https://www.cdc.gov/nchs/data/ahcd/nhamcs_emergency/2010_ed_web_tables.pdf. Accessed March 15, 2019.
3. Morrow DA, Cannon CP, Jesse RL, et al; National Academy of Clinical Biochemistry. National Academy of Clinical Biochemistry Laboratory Medicine Practice Guidelines: Clinical characteristics and utilization of biochemical markers in acute coronary syndromes. Circulation. 2007;115(13):e356-e375.
4. Larochelle MR, Knight AM, Pantle H, Riedel S, Trost JC. Reducing excess cardiac biomarker testing at an academic medical center. J Gen Intern Med. 2014;29(11):1468-1474.
5. Centers for Medicare and Medicaid Services. 2016 clinical laboratory fee schedule. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/ClinicalLabFeeSched/Clinical-Laboratory-Fee-Schedule-Files-Items/16CLAB.html?DLPage=1&DLEntries=10&DLSort=2&DLSortDir=descending. Accessed March 15, 2019.
Clinical Pharmacist Credentialing and Privileging: A Process for Ensuring High-Quality Patient Care
The Red Lake Indian Health Service (IHS) health care facility is in north-central Minnesota within the Red Lake Nation. The facility provides primary care, emergency, urgent care, pharmacy, inpatient, optometry, dental, radiology, laboratory, physical therapy, and behavioral health services to about 10,000 Red Lake Band of Chippewa Indian patients. The Red Lake pharmacy provides inpatient and outpatient medication services and pharmacist-managed clinical patient care.
In 2013, the Red Lake IHS medical staff endorsed the implementation of comprehensive clinical pharmacy services to increase health care access and optimize clinical outcomes for patients. During the evolution of pharmacy-based, patient-centric care, the clinical programs offered by the Red Lake IHS pharmacy expanded from 1 anticoagulation clinic to multiple advanced-practice clinical pharmacy services, including pharmacy primary care, medication-assisted therapy, naloxone, hepatitis C, and behavioral health medication management clinics.
The immense clinical growth of the pharmacy department demonstrated a need to assess and monitor pharmacist competency to ensure the delivery of quality patient care. Essential quality improvement processes were lacking. To fill these quality improvement gaps, a robust pharmacist credentialing and privileging program was implemented in 2015.
Patient Care
As efforts within health care establishments across the US focus on the delivery of efficient, high-quality, affordable health care, pharmacists have become increasingly instrumental in providing patient care within expanded clinical roles.1-8 Many clinical pharmacy models have evolved into interdisciplinary approaches to care.9 Within these models, abiding by state and federal laws, pharmacists practice under the indirect supervision of licensed independent practitioners (LIPs), such as physicians, nurse practitioners, and physician assistants.8 Under collaborative practice agreements (CPAs), patients are initially diagnosed by LIPs, then referred to clinical pharmacists for therapeutic management.5,7
Clinical pharmacist functions encompass comprehensive medication management (ie, prescribing, monitoring, and adjustment of medications), nonpharmacologic guidance, and coordination of care. Interdisciplinary collaboration allows pharmacists opportunities to provide direct patient care or consultations by telecommunication in many different clinical environments, including disease management, primary care, or specialty care. Pharmacists may manage chronic or acute illnesses associated with endocrine, cardiovascular, respiratory, gastrointestinal, or other systems.
Pharmacists may also provide comprehensive medication review services, such as medication therapy management (MTM), transitions of care, or chronic care management. Examples of specialized areas include psychiatric, opioid use disorder, palliative care, infectious disease, chronic pain, or oncology services. For hospitalized patients, pharmacists may monitor pharmacokinetics and adjust dosing, transition patients from IV to oral medications, or complete medication reconciliation.10 Within these clinical roles, pharmacists assist in providing patient care during shortages of other health care providers (HCPs), improve patient outcomes, decrease health care-associated costs by preventing emergency department and hospital admissions or readmissions, increase access to patient care, and increase revenue through pharmacist-managed clinics and services.11
Pharmacist Credentialing
With the advancement of modern clinical pharmacy practice, many pharmacists have undertaken responsibilities to fulfill the complex duties of clinical care and diverse patient situations, but with few or no requirements to prove initial or ongoing clinical competency.2 Traditionally, pharmacist credentialing is limited to a onetime or periodic review of education and licensure, with little to no involvement in privileging and ongoing monitoring of clinical proficiency.10 These quality assurance gaps can be addressed through credentialing and privileging processes. Credentialing and privileging are systematic, evidence-based processes that provide validation to HCPs, employers, and patients that pharmacists are qualified to practice clinically.2,9 According to the Council on Credentialing in Pharmacy, clinical pharmacists should be held accountable for demonstrating competency and providing quality care through credentialing and privileging, as required for other HCPs.2,12
Credentialing and recredentialing are primary source verification processes. These processes ensure that there are no license restrictions or revocations; certifications are current; mandatory courses, certificates, and continuing education are complete; training and orientation are satisfactory; and any disciplinary action, malpractice claims, or history of impairment is reported. Privileging is the review of credentials and the evaluation of clinical training and competence by the Clinical Director and Medical Executive Committee to determine whether a clinical pharmacist is competent to practice within requested privileges.11
Credentialing and privileging processes are designed not only to initially confirm that a pharmacist is competent to practice clinically, but also to monitor ongoing performance.2,13 Participation in professional practice evaluations, which include peer reviews, ongoing professional practice evaluations, and focused professional practice evaluations, is required for all credentialed and privileged practitioners. These evaluations are used to identify, assess, and correct unsatisfactory trends. Individual practices, documentation, and processes are evaluated against existing department standards (eg, CPAs, policies, processes).11,13 The results of individual professional practice evaluations are reviewed with practitioners on a regular basis, and performance improvement plans are implemented as needed.
Since 2015, 17 pharmacists at the Red Lake IHS health care facility have been granted membership to the medical staff as credentialed and privileged practitioners. In a retrospective review of professional practice evaluations by the Red Lake IHS pharmacy clinical coordinator, 971 outpatient clinical peer reviews, including the evaluation of 21,526 peer-review elements, were completed by pharmacists from fiscal year 2015 through 2018. Peer-review elements assessed
Conclusion
Pharmacists have become increasingly instrumental in providing effective, cost-efficient, and accessible clinical services by continuing to move toward expanded and evolving roles within comprehensive, patient-centered clinical pharmacy practice settings.5,6 The multifaceted clinical responsibilities associated with health care delivery necessitate assessment and monitoring of pharmacist performance. Credentialing and privileging is an established and trusted systematic process that assures HCPs, employers, and patients that pharmacists are qualified and competent to practice clinically.2,4,12 Implementation of professional practice evaluations suggests improved staff compliance with visit documentation, patient care standards, and clinic processes required by CPAs, policies, and department standards to ensure the delivery of safe, high-quality patient care.
1. Giberson S, Yoder S, Lee MP. Improving patient and health system outcomes through advanced pharmacy practice. https://www.accp.com/docs/positions/misc/Improving_Patient_and_Health_System_Outcomes.pdf. Published December 2011. Accessed March 15, 2019.
2. Rouse MJ, Vlasses PH, Webb CE; Council on Credentialing in Pharmacy. Credentialing and privileging of pharmacists: a resource paper from the Council on Credentialing in Pharmacy. Am J Health Syst Pharm. 2014;71(21):e109-e118.
3. Berwick DM, Nolan TW, Whittington J. The triple aim: care, health, and cost. Health Aff (Millwood). 2008;27(3):759-769.
4. Blair MM, Carmichael J, Young E, Thrasher K; Qualified Provider Model Ad Hoc Committee. Pharmacist privileging in a health system: report of the Qualified Provider Model Ad Hoc Committee. Am J Health Syst Pharm. 2007;64(22):2373-2381.
5. Claxton KI, Wojtal P. Design and implementation of a credentialing and privileging model for ambulatory care pharmacists. Am J Health Syst Pharm. 2006;63(17):1627-1632.
6. Jordan TA, Hennenfent JA, Lewin JJ III, Nesbit TW, Weber R. Elevating pharmacists’ scope of practice through a health-system clinical privileging process. Am J Health Syst Pharm. 2016;73(18):1395-1405.
7. Centers for Disease Control and Prevention. Collaborative practice agreements and pharmacists’ patient care services: a resource for doctors, nurses, physician assistants, and other providers. https://www.cdc.gov/dhdsp/pubs/docs/Translational_Tools_Providers.pdf. Published October 2013. Accessed March 18, 2019.
8. Council on Credentialing in Pharmacy, Albanese NP, Rouse MJ. Scope of contemporary pharmacy practice: roles, responsibilities, and functions of practitioners and pharmacy technicians. J Am Pharm Assoc (2003). 2010;50(2):e35-e69.
9. Philip B, Weber R. Enhancing pharmacy practice models through pharmacists’ privileging. Hosp Pharm. 2013;48(2):160-165.
10. Galt KA. Credentialing and privileging of pharmacists. Am J Health Syst Pharm. 2004;61(7):661-670.
11. Smith ML, Gemelas MF; US Public Health Service; Indian Health Service. Indian Health Service medical staff credentialing and privileging guide. https://www.ihs.gov/riskmanagement/includes/themes/newihstheme/display_objects/documents/IHS-Medical-Staff-Credentialing-and-Privileging-Guide.pdf. Published September 2005. Accessed March 15, 2019.
12. US Department of Health and Human Services, Indian Health Service. Indian health manual: medical credentials and privileges review process. https://www.ihs.gov/ihm/pc/part-3/p3c1. Accessed March 15, 2019.
13. Holley SL, Ketel C. Ongoing professional practice evaluation and focused professional practice evaluation: an overview for advanced practice clinicians. J Midwifery Womens Health. 2014;59(4):452-459.
Use of GBCA in MRIs for High-Risk Patients
To the Editor:
We read with interest the case report of nephrogenic systemic fibrosis (NSF) by Chuang, Kaneshiro, and Betancourt in the June 2018 issue of Federal Practitioner.1 It was reported that a 61-year-old Hispanic male patient with a history of IV heroin abuse, end-stage renal disease (ESRD) secondary to membranous glomerulonephritis on hemodialysis, and chronic hepatitis C infection received 15 mL of gadoversetamide, a linear gadolinium-based contrast agent (GBCA), during magnetic resonance imaging (MRI) of the brain. Hemodialysis was performed 18 hours after the contrast administration.
Eight weeks after his initial presentation, the patient developed pyoderma gangrenosum on his right forearm, which was treated with high-dose steroids. He then developed thickening and induration of the skin of both forearms with a peau d’orange appearance. NSF was confirmed by a skin biopsy. The patient developed contractures of his upper and lower extremities and ultimately became wheelchair bound.
This case is very concerning, since no NSF cases in patients receiving GBCA have been published since 2009. Unfortunately, the authors give no information on when this particular case occurred. Thus, it is unclear whether the case was observed before or after the switch to macrocyclic agents in patients with reduced renal function. The reported patient with ESRD was on hemodialysis and received 15 mL of gadoversetamide during MRI of the brain. In 2007, the European Society of Urogenital Radiology (ESUR) published guidelines designating the linear GBCAs gadodiamide, gadoversetamide, and gadopentetate dimeglumine as high-risk agents that may not be used in patients with eGFR < 30 mL/min/1.73 m2.2,3
Consequently, in 2007 the European Medicines Agency contraindicated these linear GBCAs in patients with chronic kidney disease grades 4 and 5. Also in 2007, the US Food and Drug Administration (FDA) requested a revision of the prescribing information for all 5 GBCAs approved in the US.4 As more informative data accumulated, in 2010 the FDA again used this class labeling approach to describe more explicitly the differences in NSF risk among the agents.4 FDA regulation and contraindication of the use of low-stability GBCAs in patients with advanced renal impairment, together with robust local policies on the safe use of these agents, have resulted in a marked reduction in the prevalence of NSF in the US. This case report needs to clarify why a high-risk linear agent was administered to a patient with ESRD.
In 2006, Grobner and Marckmann and colleagues reported their observations of a previously unrecognized link between exposure to gadodiamide and the development of NSF.5,6 It soon became clear that NSF is a delayed adverse contrast reaction that may cause severe disability and even death. Advanced renal disease and high-risk linear GBCAs are the main factors in the pathogenesis of NSF; the dose of the agent also may play a role. NSF can occur from hours to years after exposure to a GBCA. Not all patients with severe kidney disease exposed to high-risk agents developed NSF; thus, additional factors were proposed to play a role in the pathogenesis of NSF, among them erythropoietin, metabolic acidosis, anion gap, iron, increased phosphate, zinc loss, proinflammatory conditions/inflammation, and angiotensin-converting enzyme (ACE) inhibitors.7 Although there is little proof for these assumptions, special care must be taken, as shown by this reported patient with multiple inflammatory disorders.
- Gertraud Heinz, MD, MBA; Aart van der Molen, MD; and Giles Roditi, MD; on behalf of the ESUR Contrast Media Safety Committee
Author affiliations: Gertraud Heinz is a former President of the ESUR and Head of the Department of Radiology, Diagnostics and Intervention, University Hospital St. Pölten, Karl Landsteiner University of Health Sciences.
Correspondence: Gertraud Heinz (gertraud.heinz@stpoelten.lknoe.at)
Disclosures: The authors report no conflict of interest with regard to this article.
References
1. Chuang K, Kaneshiro C, Betancourt J. Nephrogenic systemic fibrosis in a patient with multiple inflammatory disorders. Fed Pract. 2018;35(6):40-43.
2. Thomsen HS; European Society of Urogenital Radiology (ESUR). ESUR guideline: gadolinium based contrast media and nephrogenic systemic fibrosis. Eur Radiol. 2007;17(10):2692-2696.
3. Thomsen HS, Morcos SK, Almén T, et al; ESUR Contrast Medium Safety Committee. Nephrogenic systemic fibrosis and gadolinium-based contrast media: updated ESUR Contrast Media Safety Committee guidelines. Eur Radiol. 2013;23(2):307-318.
4. Yang L, Krefting I, Gorovets A, et al. Nephrogenic systemic fibrosis and class labeling of gadolinium-based agents by the Food and Drug Administration. Radiology. 2012;265(1):248-253.
5. Grobner T. Gadolinium—a specific trigger for the development of nephrogenic fibrosing dermopathy and nephrogenic systemic fibrosis? Nephrol Dial Transplant. 2006;21(4):1104-1108.
6. Marckmann P, Skov L, Rossen K, et al. Nephrogenic systemic fibrosis: suspected causative role of gadodiamide used for contrast-enhanced magnetic resonance imaging. J Am Soc Nephrol. 2006;17(9):2359-2362.
7. Thomsen HS, Bennett CL. Six years after. Acta Radiol. 2012;53(8):827-829.
To the Editor:
With great interest, I read the case report by Chuang, Kaneshiro, and Betancourt.1 Patients with nephrogenic systemic fibrosis (NSF) are of special interest because the disease is still unclear as mentioned by the authors. Although new cases may occur,2 this case raises some concerns that I would like to address.
First, it would be of great interest to know the date when the patient received the high-risk gadolinium-based contrast agent (GBCA) gadoversetamide. Unfortunately, the authors did not mention the date of the injection of the GBCA that probably caused NSF. Owing to the obvious association between the administration of particular GBCAs and NSF, in 2006 the US Food and Drug Administration (FDA) warned physicians not to inject these contrast agents in patients with compromised kidney function.3 Moreover, in 2007 the American College of Radiology (ACR) published guidelines for the safe use of GBCAs in patients with renal failure.4 Also, the European Medicines Agency (EMA) required companies to provide warnings in product inserts about the risk of NSF in patients with severe kidney injury.5
Second, the clinical illustration of the case is inadequate. In the manuscript, we read that the patient acquired NSF-characteristic lesions such as peau d’orange skin changes and contractures of his extremities, but unfortunately, Chuang, Kaneshiro, and Betancourt did not provide figures that show them. On the other hand, Figure 1 shows an uncharacteristic dermal induration around an inflammatory, ulcerated skin lesion (pyoderma gangrenosum).1 Such clinical signs are well known and occur perilesionally in various conditions independent of NSF.6-8
Third, the histologic features described as the presence of fibrotic tissue in the deep dermis (Figure 2) and dermal fibrosis with thick collagen deposition (Figure 3)1 do not confirm the existence of NSF.
Taken together, the case presented by Chuang, Kaneshiro, and Betancourt contains some unclear aspects; therefore, it is questionable whether the published case describes a patient with NSF. As currently presented, the diagnosis of NSF seems to be an overestimation.
NSF is still a poorly understood disorder. Therefore, precisely documented new cases could be of clinical value by providing useful information; even single cases could shed some light on the pathologic mechanisms of this entity. On the other hand, we should not mix the existing cohort of published NSF cases with other scleroderma-like diseases, because this will lead to confusion. Moreover, such a practice could hinder the discovery of the pathophysiology of NSF.
- Ingrid Böhm, MD
Author affiliations: Ingrid Böhm is a Physician in the Department of Diagnostics, Interventional and Pediatric Radiology at the University Hospital of Bern, Inselspital, University of Bern in Bern, Switzerland.
Correspondence: Ingrid Böhm (ingrid.boehm@insel.ch)
Disclosures: The author reports no conflict of interest with regard to this article.
References
1. Chuang K, Kaneshiro C, Betancourt J. Nephrogenic systemic fibrosis in a patient with multiple inflammatory disorders. Fed Pract. 2018;35(6):40-43.
2. Larson KN, Gagnon AL, Darling MD, Patterson JW, Cropley TG. Nephrogenic systemic fibrosis manifesting a decade after exposure to gadolinium. JAMA Dermatol. 2015;151(10):1117-1120.
3. US Food and Drug Administration. A Public Health Advisory. Gadolinium-containing contrast agents for magnetic resonance imaging (MRI). http://wayback.archive-it.org/7993/20170112033022/http://www.fda.gov/Drugs/DrugSafety/PostmarketDrugSafetyInformationforPatientsandProviders/ucm053112.htm. Published June 8, 2006. Accessed March 15, 2019.
4. Kanal E, Barkovich AJ, Bell C, et al; ACR Blue Ribbon Panel on MR Safety. ACR guidance document for safe MR practices: 2007. AJR Am J Roentgenol. 2007;188(6):1447-1474.
5. European Medicines Agency. Public statement: Vasovist and nephrogenic systemic fibrosis (NSF). https://www.ema.europa.eu/en/news/public-statement-vasovist-nephrogenic-systemic-fibrosis-nsf. Published February 7, 2007. Accessed March 15, 2019.
6. Luke JC. The etiology and modern treatment of varicose ulcer. Can Med Assoc J. 1940;43(3):217-221.
7. Paulsen E, Bygum A. Keratin gel as an adjuvant in the treatment of recalcitrant pyoderma gangrenosum ulcers: a case report. Acta Derm Venereol. 2019;99(2):234-235.
8. Boehm I, Bauer R. Low-dose methotrexate controls a severe form of polyarteritis nodosa. Arch Dermatol. 2000; 136(2):167-169.
Response:
We thank Drs. Heinz, van der Molen, and Roditi for their valuable response. The following is the opinion of the authors and is not representative of the views or policies of our institution. The patient in this case received a gadolinium-based contrast agent (GBCA) in 2015 and was diagnosed with nephrogenic systemic fibrosis (NSF) 8 weeks later. We agree with the correspondents that linear GBCAs should not be used in patients with eGFR < 30 mL/min/1.73 m2. Unfortunately, a small number of cases of NSF developing after GBCA exposure have continued to be reported in the literature since 2009.1-3 Our intention in publishing this case was to provide ongoing education to the medical community regarding this serious condition and to help prevent future cases.
We thank Dr. Böhm for her important inquiry. The patient received a histopathologic diagnosis of NSF. The report from the patient’s left dorsal forearm skin punch biopsy was read by our pathologist as “fibrosis and inflammation consistent with nephrogenic systemic fibrosis,” a diagnosis agreed upon by our colleagues in the dermatology and rheumatology departments based on the rapidity of his symptom onset and progression. While we acknowledge that this patient had other inflammatory disorders of the skin that may have coexisted with the diagnosis, after weighing the preponderance of clinical evidence in support of the biopsy results, we believe that this represents a case of NSF, which is associated with high morbidity and mortality. Thankfully, the patient in this case engaged extensively in physical and occupational therapy and is still alive nearly 4 years later. We would like to thank all the letter writers for their correspondence.
Author Affiliations: Kelley Chuang and Casey Kaneshiro are Hospitalists and Jaime Betancourt is a Pulmonologist, all in the Department of Medicine at the VA Greater Los Angeles Healthcare System in California.
Correspondence: Kelley Chuang (kelleychuang@mednet.ucla.edu)
Disclosures: The authors report no conflict of interest with regard to this article.
References
1. Aggarwal A, Froehlich AA, Essah P, Brinster N, High WA, Downs RW. Complications of nephrogenic systemic fibrosis following repeated exposure to gadolinium in a man with hypothyroidism: a case report. J Med Case Rep. 2011;5:566.
2. Fuah KW, Lim CT. Erythema nodosum masking nephrogenic systemic fibrosis as initial skin manifestation. BMC Nephrol. 2017;18(1):249.
3. Koratala A, Bhatti V. Nephrogenic systemic fibrosis. Clin Case Rep. 2017;5(7):1184-1185.
To the Editor:
We read with interest the case report of nephrogenic systemic fibrosis (NSF) by Chuang, Kaneshiro, and Betancourt in the June 2018 issue of Federal Practitioner.1 It was reported that a 61-year-old Hispanic male patient with a history of IV heroin abuse with end-stage renal disease (ESRD) secondary to membranous glomerulonephritis on hemodialysis and chronic hepatitis C infection received 15 mL gadoversetamide, a linear gadolinium-based contrast agent (GBCA) during magnetic resonance imaging (MRI) of the brain. Hemodialysis was performed 18 hours after the contrast administration.
Eight weeks after his initial presentation, the patient developed pyoderma gangrenosum on his right forearm, which was treated with high-dose steroids. He then developed thickening and induration of his bilateral forearm skin with peau d’orange appearance. NSF was confirmed by a skin biopsy. The patient developed contractures of his upper and lower extremities and was finally wheelchair bound.
This case is very concerning since no NSF cases in patients receiving GBCA have been published since 2009. Unfortunately, the authors give no information on the occurrence of this particular case. Thus, it is unclear whether this case was observed before or after the switch to macrocyclic agents in patients with reduced renal function. The reported patient with ESRD was on hemodialysis and received 15 mL gadoversetamide during MRI of the brain. In 2007 the ESUR (European Society of Urogenital Radiology) published guidelines indicating linear GBCA (gadodiamide, gadoversetamide, gadopentetate dimeglumine) as high-risk agents that may not be used in patients with eGFR < 30 mL/min/1.73 m2.2,3
Consequently in 2007, the European Medicines Agency contraindicated these linear GBCA in patients with chronic kidney disease grades 4 and 5. Also in 2007 the US Food and Drug Administration (FDA) requested a revision of the prescribing information for all 5 GBCA approved in the US.4 In response to accumulating more informative data, in 2010 the FDA again used this class labeling approach to more explicitly describe differences in NSF risks among the agents.4 FDA regulation and contraindication of the use of low-stability GBCA in patients with advanced renal impairment and robust local policies on the safe use of these agents have resulted in marked reduction in the prevalence of NSF in the US. This case report needs to clarify why a high-risk linear agent was administered to a patient with ESRD.
In 2006 Grobner and Marckmann and colleagues reported their observations of a previously unrecognized link between exposure to gadodiamide and the development of NSF.5,6 It soon became clear that NSF is a delayed adverse contrast reaction that may cause severe disability and even death. Advanced renal disease and high-risk linear GBCA are the main factors in the pathogenesis of NSF. Additionally, the dose of the agent may play a role. NSF can occur from hours to years after exposure to GBCA. Not all patients with severe kidney disease exposed to high-risk agents developed NSF. Thus, additional factors were proposed to play a role in the pathogenesis of NSF. Among those factors were erythropoietin, metabolic acidosis, anion gap, iron, increased phosphate, zinc loss, proinflammatory conditions/inflammation and angiotensin-converting enzyme (ACE) inhibitors.7 Although there is little proof with these assumptions, special care must be taken as shown by this reported patient with multiple inflammatory disorders.
- Gertraud Heinz, MD, MBA; Aart van der Molen, MD; and Giles Roditi, MD; on behalf of the ESUR Contrast Media Safety Committee
Author affiliations: Gertraud Heinz is former President ESUR and Head of the Department of Radiology, Diagnostics and Intervention University Hospital St. Pölten Karl Landsteiner University of Health Sciences.
Correspondence: Gertraud Heinz (gertraud.heinz@stpoelten .lknoe.at)
Disclosures: The authors report no conflict of interest with regard to this article.
References
1. Chuang K, Kaneshiro C, Betancourt J. Nephrogenic systemic fibrosis in a patient with multiple inflammatory disorders. Fed Pract. 2018;35(6):40-43.
2. Thomsen HS; European Society of Urogenital Radiology (ESUR). ESUR guideline: gadolinium based contrast media and nephrogenic systemic fibrosis. Eur Radiol. 2007;17(10):2692-2696.
3. Thomsen HS, Morcos SK, Almén T, et al; ESUR Contrast Medium Safety Committee. Nephrogenic systemic fibrosis and gadolinium-based contrast media: updated ESUR Contrast Media Safety Committee guidelines. Eur Radiol. 2013;23(2):307-318
4. Yang L, Krefting I, Gorovets A, et al. Nephrogenic systemic fibrosis and class labeling of gadolinium-based agents by the Food and Drug Administration. Radiology. 2012;265(1):248-253.
5. Grobner T. Gadolinium—a specific trigger for the development of nephrogenic fibrosing dermopathy and nephrogenic systemic fibrosis? Nephrol Dial Transplant. 2006;21(4):1104-1108.
6. Marckmann P, Skov L, Rossen K, et al. Nephrogenic systemic fibrosis: suspected causative role of gadodiamide used for contrast-enhanced magnetic resonance imaging. J Am Soc Nephrol. 2006;17(9):2359-2362.
7. Thomsen HS, Bennett CL. Six years after. Acta Radiol. 2012;53(8):827-829.
To the Editor:
With great interest, I read the case report by Chuang, Kaneshiro, and Betancourt.1 Patients with nephrogenic systemic fibrosis (NSF) are of special interest because the disease is still unclear as mentioned by the authors. Although new cases may occur,2 this case raises some concerns that I would like to address.
First, it would be of great interest to know the date when the patient received the high-risk gadolinium-based contrast agent (GBCA) gadoversetamide. Unfortunately, the authors did not mention the date of the injection of the GBCA that probably caused NSF. Due to the obvious association between the applications of special GBCAs in 2006, the US Food and Drug Administration (FDA) warned physicians not to inject these contrast agents in patients with compromised kidney function.3 Moreover, in 2007 the American College of Radiology (ACR) published guidelines for the safe use of GBCAs in patients with renal failure.4 Also, the European Medicines Agency (EMA) demanded that companies provide warning in product inserts about the acquisition of NSF in patients with severe kidney injury.5
Second, the clinical illustration of the case is inadequate. In the manuscript, we read that the patient developed NSF-characteristic findings such as peau d’orange skin lesions and contractures of his extremities, but unfortunately, Chuang, Kaneshiro, and Betancourt did not provide figures that show them. On the other hand, Figure 1 shows an uncharacteristic dermal induration around an inflammatory and ulcerated skin lesion (pyoderma gangrenosum).1 Such clinical signs are well known and occur perilesionally in a variety of conditions independent of NSF.6-8
Third, the histological features described as the presence of fibrotic tissue in the deep dermis in Figure 2 and dermal fibrosis with thick collagen deposition in Figure 3 do not confirm the diagnosis of NSF.1
Taken together, the case presented by Chuang, Kaneshiro, and Betancourt contains some unclear aspects; therefore, it is questionable whether the published case describes a patient with NSF. As currently presented, the diagnosis of NSF seems to be an overstatement.
NSF is still a poorly understood disorder. Precisely documented new cases could therefore be of clinical value and provide useful information; even single cases could shed some light on the pathologic mechanisms of this entity. On the other hand, we should not mix the existing cohort of published NSF cases with other scleroderma-like diseases, because doing so will lead to confusion. Moreover, such a practice could hinder the discovery of the pathophysiology of NSF.
- Ingrid Böhm, MD
Author affiliations: Ingrid Böhm is a Physician in the Department of Diagnostics, Interventional and Pediatric Radiology at the University Hospital of Bern, Inselspital, University of Bern in Bern, Switzerland.
Correspondence: Ingrid Böhm (ingrid.boehm@insel.ch)
Disclosures: The author reports no conflict of interest with regard to this article.
References
1. Chuang K, Kaneshiro C, Betancourt J. Nephrogenic systemic fibrosis in a patient with multiple inflammatory disorders. Fed Pract. 2018;35(6):40-43.
2. Larson KN, Gagnon AL, Darling MD, Patterson JW, Cropley TG. Nephrogenic systemic fibrosis manifesting a decade after exposure to gadolinium. JAMA Dermatol. 2015;151(10):1117-1120.
3. US Food and Drug Administration. A Public Health Advisory: gadolinium-containing contrast agents for magnetic resonance imaging (MRI). http://wayback.archive-it.org/7993/20170112033022/http://www.fda.gov/Drugs/DrugSafety/PostmarketDrugSafetyInformationforPatientsandProviders/ucm053112.htm. Published June 8, 2006. Accessed March 15, 2019.
4. Kanal E, Barkovich AJ, Bell C, et al; ACR Blue Ribbon Panel on MR Safety. ACR guidance document for safe MR practices: 2007. AJR Am J Roentgenol. 2007;188(6):1447-1474.
5. European Medicines Agency. Public statement: Vasovist and nephrogenic systemic fibrosis (NSF). https://www.ema.europa.eu/en/news/public-statement-vasovist-nephrogenic-systemic-fibrosis-nsf. Published February 7, 2007. Accessed March 15, 2019.
6. Luke JC. The etiology and modern treatment of varicose ulcer. Can Med Assoc J. 1940;43(3):217-221.
7. Paulsen E, Bygum A. Keratin gel as an adjuvant in the treatment of recalcitrant pyoderma gangrenosum ulcers: a case report. Acta Derm Venereol. 2019;99(2):234-235.
8. Boehm I, Bauer R. Low-dose methotrexate controls a severe form of polyarteritis nodosa. Arch Dermatol. 2000; 136(2):167-169.
Response:
We thank Drs. Heinz, van der Molen, and Roditi for their valuable response. The following is the opinion of the authors and is not representative of the views or policies of our institution. The patient in this case received a gadolinium-based contrast agent (GBCA) in 2015 and was diagnosed with nephrogenic systemic fibrosis (NSF) 8 weeks later. We agree with the correspondents that linear GBCAs should not be used in patients with an eGFR < 30 mL/min/1.73 m2. Unfortunately, a few cases of NSF developing after GBCA exposure have continued to be reported in the literature since 2009.1-3 Our intention in publishing this case was to provide ongoing education to the medical community regarding this serious condition and to help prevent future cases.
We thank Dr. Böhm for her important inquiry. The patient received a histopathologic diagnosis of NSF. The report from the patient’s left dorsal forearm skin punch biopsy was read by our pathologist as “fibrosis and inflammation consistent with nephrogenic systemic fibrosis,” a diagnosis agreed upon by our colleagues in the dermatology and rheumatology departments based on the rapidity of his symptom onset and progression. While we acknowledge that this patient had other inflammatory disorders of the skin that may have coexisted with the diagnosis, after weighing the preponderance of clinical evidence in support of the biopsy results, we believe that this represents a case of NSF, which is associated with high morbidity and mortality. Thankfully, the patient in this case engaged extensively in physical and occupational therapy and is still alive nearly 4 years later. We would like to thank all the letter writers for their correspondence.
Author Affiliations: Kelley Chuang and Casey Kaneshiro are Hospitalists and Jaime Betancourt is a Pulmonologist, all in the Department of Medicine at the VA Greater Los Angeles Healthcare System in California.
Correspondence: Kelley Chuang (kelleychuang@mednet.ucla.edu)
Disclosures: The authors report no conflict of interest with regard to this article.
References
1. Aggarwal A, Froehlich AA, Essah P, Brinster N, High WA, Downs RW. Complications of nephrogenic systemic fibrosis following repeated exposure to gadolinium in a man with hypothyroidism: a case report. J Med Case Rep. 2011;5:566.
2. Fuah KW, Lim CT. Erythema nodosum masking nephrogenic systemic fibrosis as initial skin manifestation. BMC Nephrol. 2017;18(1):249.
3. Koratala A, Bhatti V. Nephrogenic systemic fibrosis. Clin Case Rep. 2017;5(7):1184-1185.
Revering Furry Valor
National K9 Veterans Day celebrates the loyalty, bravery, and sacrifice of canine warriors. On March 13, 1942, canines officially became members of the Armed Services, with the Army’s founding of its New War Dog Program, more popularly known as the K9 Corps. The dogs underwent basic training and then entered more specialized preparation just as human soldiers did.2 There had been unofficial dogs of war who served courageously and selflessly in almost all of our armed conflicts.3 Indeed, the title of this column is taken from a wonderful article of the same name narrating the heroism of dogs in the 2 world wars.4
The dedication of canines to those who serve is not confined to combat or even active duty. Thousands of military and veteran men and women have benefited immensely from their relationship with service and emotional support dogs.
Before I continue, let me state 2 important limitations of this column. First, I am a dog person. Of course, veterans have formed healing and caring relationships with many types of companions. Equine therapy is increasingly recognized as a powerful means of helping veterans reduce distress and find purpose.5 Nevertheless, for this column, I will focus exclusively on dogs. Second, there are many worthy organizations, projects, and programs that pair veterans with therapeutic dogs inside and outside the VA. I am in no way an expert and will invariably neglect many of these positive initiatives in this brief review.
The long, proud history of canines in the military and the many moving stories of men and women in and out of uniform for whom dogs have been life changing, if not life-saving, have created 2 ethical dilemmas for the VA that I examine here. Both dilemmas pivot on the terms of official recognition of service dogs, the benefits attached to that recognition, and who can qualify for them in the VA.
Under VA regulation and VHA policy, a service animal can only be a dog that is individually trained to do work or perform tasks to assist a person with a disability; dogs whose sole function is to provide emotional support, well-being, comfort, or companionship are not considered service animals.6
Prior to the widespread implementation of VHA Directive 1188, some VA medical centers had, pardon the pun, “gone to the dogs,” in the sense that, depending on the facility, emotional support companions were found in almost every area of hospitals and clinics. Their presence enabled many patients to feel comfortable enough to seek medical and mental health care, as the canine companion gave them a sense of security and calm. But some dogs had not received the extensive training that enables a service dog to follow commands and handle the stimulation of a large, busy hospital with all its sights, sounds, and smells. Infectious disease, police, and public health authorities raised legitimate concerns about the public health and safety risks posed by the increasing number of dogs on VA grounds that were not formally certified as service dogs. In response to those concerns, in August 2015 VHA issued a uniform policy that restricted access to VA property to service dogs.7 This was, as with most health policy, a necessary, albeit utilitarian, decision that the common good outweighed the good of individual veterans. Unfortunately, some veterans experienced the decision as a form of psychological rejection, and others no longer felt able, mentally or physically, to master the stresses of seeking health care without a canine companion.
A valid question to ask is why the most vulnerable of these veterans, for instance those with severe mental health conditions, could not have service dogs that accompany them into at least most areas of the medical center. Part of the reason is cost: some training organizations estimate it may cost as much as $27,000 to train a service dog.8 Though many wonderful volunteer and not-for-profit organizations train mostly shelter dogs and their veteran handlers—a double rescue—the lengthy process and expense mean that many veterans wait years for a companion.
Congressional representatives, ethicists, veterans advocates, and canine therapy groups claim that this was unjust discrimination against those suffering from equally, if not more, disabling mental health conditions.9 For many years, the VA has done a very good deed: for those who qualify for a service dog, VA pays for veterinary care and the equipment needed to handle the dog, but not boarding, grooming, food, and other miscellaneous expenses.10 But until 2016, the veterans approved for service dogs mainly had sensory or physical disabilities.
A partial breakthrough emerged when the Center for Compassionate Care Innovation launched the Mental Health Mobility Service Dogs Program, which expanded veterinary health benefits to veterans with a “substantial mobility limitation,” such as veterans whose hypervigilance and hyperarousal are so severe that they cannot attend medical appointments.11
VA experts argue that at this time there is insufficient evidence to fund service dogs as even adjunctive PTSD therapy for the hundreds of veterans who might potentially qualify. It becomes an ethical question of prudent stewardship of public funds and trust. There is certainly plenty of compelling anecdotal testimony that companion canines are a high-benefit, relatively low-risk form of complementary and integrative therapy for the spectrum of trauma disorders that afflict many of the men and women who served in our conflicts. Demonstrating those positive effects scientifically may be more difficult than it seems, although early evidence is promising, and the VA is intensively researching the question.12 For some veterans and their legislators, the VA has not gone far enough, fast enough in mainstreaming therapy dogs; they are calling for the VA to expand veterans’ benefits to include mental health service dogs and to define what benefits would be covered.
National K9 Veterans Day is an important step toward giving dogs of war the homage they have earned, as are increasing efforts to ensure care for military canines throughout their life cycle. But as the seventeenth-century poet John Milton wrote when he reflected on his own worth despite his blindness, “They also serve who only stand and wait.”13 The institutions charged to care for those the battle has most burdened are still trying to discover how to properly and proportionately revere that kind of furry valor.
1. Schweitzer A. Civilization and Ethics. Naish JP, trans. London, England: A. & C. Black; 1923.
2. Bergeron AW Jr. War dogs: the birth of the K-9 Corps. https://www.army.mil/article/7463/war_dogs_the_birth_of_the_k_9_corps. Published February 14, 2008. Accessed March 22, 2019.
3. Nye L. A brief history of dogs in warfare. https://www.military.com/undertheradar/2017/03/brief-history-dogs-warfare. Published March 20, 2017. Accessed March 24, 2019.
4. Liao S. Furry valor: The tactical dogs of WW I and II. Vet Herit. 2016;39(1):24-29.
5. Romaniuk M, Evans J, Kidd C. Evaluation of an equine-assisted therapy program for veterans who identify as ‘wounded, injured, or ill’ and their partners. PLoS One. 2018;13(9):e0203943.
6. US Department of Veterans Affairs. Frequently asked questions: service animals on VA property. https://www.blogs.va.gov/VAntage/wp-content/uploads/2015/08/FAQs_RegulationsAboutAnimalsonVAProperty.pdf. Accessed March 24, 2019.
7. US Department of Veterans Affairs, Veterans Health Administration. VHA Directive 1188: animals on Veterans Health Administration (VHA) property. https://www.boise.va.gov/docs/Service_Animal_Policy.pdf. Published August 26, 2015.
8. Brulliard K. For military veterans suffering from PTSD, are service dogs good therapy? Washington Post. March 27, 2018.
9. Weinmeyer R. Service dogs for veterans with post-traumatic stress disorder. AMA J Ethics. 2015;17(6):547-552.
10. US Department of Veterans Affairs, Veterans Health Administration, Office of Patient Care Services. Guide and service dogs. https://www.prosthetics.va.gov/serviceandguidedogs.asp. Updated August 18, 2016. Accessed March 24, 2019.
11. US Department of Veterans Affairs. VA pilots program to expand veterinary benefits for mental health mobility service dogs. https://www.blogs.va.gov/VAntage/33379/va-pilots-program-to-expand-veterinary-health-benefit-for-mental-health-mobility-service-dogs. Accessed March 24, 2019.
12. Yarborough BJH, Stumbo SP, Yarborough MT, Owen-Smith A, Green CA. Benefits and challenges of using service dogs for veterans with posttraumatic stress disorder. Psychiatr Rehabil J. 2018;41(2):118-124.
Managing Eating Disorders on a General Pediatrics Unit: A Centralized Video Monitoring Pilot
Hospitalizations for nutritional rehabilitation of patients with restrictive eating disorders are increasing.1 Among primary mental health admissions at free-standing children’s hospitals, eating disorders represent 5.5% of hospitalizations and are associated with the longest length of stay (LOS; mean 14.3 days) and costliest care (mean $46,130).2 Admission is necessary to ensure initial weight restoration and monitoring for symptoms of refeeding syndrome, including electrolyte shifts and vital sign abnormalities.3-5
Supervision is generally considered an essential element of caring for hospitalized patients with eating disorders, who may experience difficulty adhering to nutritional treatment, perform excessive movement or exercise, or demonstrate purging or self-harming behaviors. Supervision is presumed to prevent counterproductive behaviors, facilitating weight gain and earlier discharge to psychiatric treatment. Best practices for patient supervision to address these challenges have not been established but often include mealtime or continuous one-to-one supervision by nursing assistants (NAs) or other staff.6,7 While meal supervision has been shown to decrease medical LOS, it is costly, reduces staff availability for the care of other patients, and can be a barrier to caring for patients with eating disorders in many institutions.8
Although not previously used in patients with eating disorders, centralized video monitoring (CVM) may provide an additional mode of supervision. CVM is an emerging technology consisting of real-time video streaming, without video recording, enabling tracking of patient movement, redirection of behaviors, and communication with unit nurses when necessary. CVM has been used in multiple patient safety initiatives to reduce falls, address staffing shortages, reduce costs,9,10 supervise patients at risk for self-harm or elopement, and prevent controlled medication diversion.10,11
We sought to pilot a novel use of CVM to replace our institution’s standard practice of continuous one-to-one NA supervision of patients admitted for medical stabilization of an eating disorder. Our objective was to evaluate the supervision cost and feasibility of CVM, using LOS and days to weight gain as balancing measures.
METHODS
Setting and Participants
This retrospective cohort study included patients 12 to 18 years old admitted to the pediatric hospital medicine service on a general unit of an academic quaternary care children’s hospital for medical stabilization of an eating disorder between September 2013 and March 2017. Patients were identified from administrative data based on a primary or secondary diagnosis of anorexia nervosa, eating disorder not otherwise specified, or other specified eating disorder (ICD-9 codes 3071 and 20759; ICD-10 codes F50.00, F50.01, F50.89, and F50.9).12,13 This research study was considered exempt by the University of Wisconsin School of Medicine and Public Health’s Institutional Review Board.
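In practice, this identification step is a simple filter over admission-level administrative data: restrict to the age range and study window, then keep admissions with an eating disorder code in the primary or secondary diagnosis field. The sketch below illustrates the idea in Python/pandas; the column names and the code set shown are illustrative assumptions, not the study’s actual data dictionary.

```python
import pandas as pd

# Illustrative ICD-9/ICD-10 eating disorder codes; the study used the specific
# codes listed in the Methods. These values are stand-ins for the sketch.
EATING_DISORDER_CODES = {"307.1", "F50.00", "F50.01", "F50.89", "F50.9"}


def identify_cohort(admissions: pd.DataFrame) -> pd.DataFrame:
    """Return admissions of 12- to 18-year-olds in the study window whose
    primary or secondary diagnosis is an eating disorder code.

    Assumes columns: admit_date (datetime), age_years, primary_dx, secondary_dx.
    """
    in_window = admissions["admit_date"].between(
        pd.Timestamp("2013-09-01"), pd.Timestamp("2017-03-31")
    )
    in_age_range = admissions["age_years"].between(12, 18)
    has_dx = (
        admissions["primary_dx"].isin(EATING_DISORDER_CODES)
        | admissions["secondary_dx"].isin(EATING_DISORDER_CODES)
    )
    return admissions[in_window & in_age_range & has_dx]
```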
Supervision Interventions
A standard medical stabilization protocol was used for patients admitted with an eating disorder throughout the study period (Appendix). All patients received continuous one-to-one NA supervision until they reached the target calorie intake and demonstrated the ability to follow the nutritional meal protocol. Beginning July 2015, patients received continuous CVM supervision unless they expressed suicidal ideation (SI), which triggered one-to-one NA supervision until they no longer endorsed suicidality.
Centralized Video Monitoring Implementation
Institutional CVM technology was AvaSys TeleSitter Solution (AvaSure, Inc). Our institution purchased CVM devices for use in adult settings, and one was assigned for pediatric CVM. Mobile CVM video carts were deployed to patient rooms and generated live video streams, without recorded capture, which were supervised by CVM technicians. These technicians were NAs hired and trained specifically for this role; worked four-, eight-, and 12-hour shifts; and observed up to eight camera feeds on a single monitor in a centralized room. Patients and family members could refuse CVM, which would trigger one-to-one NA supervision. Patients were not observed by CVM while in the restroom; staff were notified by either the patient or technician, and one-to-one supervision was provided. CVM had two-way audio communication, which allowed technicians to redirect patients verbally. Technicians could contact nursing staff directly by phone when additional intervention was needed.
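Taken together with the assignment rules in the previous subsection, the supervision mode in effect at any given moment follows a simple decision rule. The following is a descriptive sketch of that rule as reported, not part of the study’s actual workflow; the function and variable names are illustrative.

```python
from datetime import date

CVM_START = date(2015, 7, 1)  # CVM became the default mode in July 2015


def supervision_mode(admit_date: date,
                     suicidal_ideation: bool,
                     cvm_refused: bool,
                     in_restroom: bool) -> str:
    """Return the supervision mode per the protocol described above.

    One-to-one NA supervision applies before CVM was introduced, whenever the
    patient endorses suicidal ideation, when the patient or family refuses CVM,
    or temporarily while the patient uses the restroom; otherwise CVM applies.
    """
    if admit_date < CVM_START:
        return "one-to-one NA"
    if suicidal_ideation or cvm_refused or in_restroom:
        return "one-to-one NA"
    return "CVM"


# Example: an admission after the switch, no SI, CVM accepted.
print(supervision_mode(date(2016, 2, 1), False, False, False))  # -> "CVM"
```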
Supervision Costs
NA supervision costs were estimated at $19/hour, based upon institutional human resources data on average NA salaries at that time. No additional mealtime supervision cost was included, because continuous in-person supervision was already occurring.
CVM supervision costs were defined as the sum of the device cost, CVM technician costs, and two hours of one-to-one NA mealtime supervision per day. The CVM device cost was estimated at $2.10/hour, assuming a 10-year machine life expectancy (single-unit cost of $82,893 in 2015; 3,944 hours of use in fiscal year 2018). CVM technician costs were $19/hour, based upon institutional human resources data on average CVM technician salaries at that time. Because technicians monitored an average of six patients simultaneously during this study, one-sixth of a CVM technician’s salary (ie, $3.17/hour) was used for each hour of CVM monitoring. Patients with mixed (NA and CVM) supervision were analyzed with those having CVM supervision; these patients’ costs were the sum of their NA supervision costs and their CVM supervision costs.
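The following is a minimal sketch of this per-admission cost model, using the hourly rates stated above (NA at $19/hour; CVM as $2.10/hour for the device plus one-sixth of a technician’s $19/hour wage plus 2 hours/day of NA mealtime supervision). The example admission at the bottom is hypothetical and is not drawn from the study data.

```python
# Sketch of the per-admission supervision cost model described above.
NA_RATE = 19.00                     # $/hour, one-to-one nursing assistant
CVM_DEVICE_RATE = 2.10              # $/hour, device cost amortized over ~10 years
CVM_TECH_RATE = 19.00 / 6           # $/hour, 1/6 of a technician watching ~6 feeds
MEAL_SUPERVISION_HOURS_PER_DAY = 2  # in-person NA coverage during meals


def na_cost(hours_supervised: float) -> float:
    """Cost of continuous one-to-one NA supervision."""
    return hours_supervised * NA_RATE


def cvm_cost(hours_monitored: float, days: float) -> float:
    """Cost of CVM supervision: device + technician share + NA meal coverage."""
    device = hours_monitored * CVM_DEVICE_RATE
    technician = hours_monitored * CVM_TECH_RATE
    meals = days * MEAL_SUPERVISION_HOURS_PER_DAY * NA_RATE
    return device + technician + meals


if __name__ == "__main__":
    # Hypothetical 10-day admission supervised around the clock.
    days = 10
    hours = days * 24
    print(f"NA-only supervision:   ${na_cost(hours):,.2f}")
    print(f"CVM-based supervision: ${cvm_cost(hours, days):,.2f}")
```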
Data Collection
Descriptive variables including age, gender, race/ethnicity, insurance, and LOS were collected from administrative data. The duration and type of supervision for all patients were collected from daily staffing logs. The eating disorder protocol standardized the process of obtaining daily weights (Appendix). Days to weight gain following admission were defined as the total number of days from admission to the first day of weight gain that was followed by another day of weight gain or by maintenance of the same weight.
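The days-to-weight-gain definition is effectively a small algorithm over the series of daily weights: find the first day with a gain that is followed by a further gain or by maintenance of the same weight, and count the days from admission to that day. Below is a minimal sketch, assuming one weight per hospital day with the admission weight first (the data shown are invented).

```python
from typing import Optional, Sequence


def days_to_weight_gain(daily_weights_kg: Sequence[float]) -> Optional[int]:
    """Days from admission to the first weight gain that is sustained,
    ie, followed by another gain or by maintenance of the same weight.

    daily_weights_kg[0] is the admission weight; index i is hospital day i.
    Returns None if no sustained gain occurs during the admission.
    """
    for day in range(1, len(daily_weights_kg) - 1):
        gained_today = daily_weights_kg[day] > daily_weights_kg[day - 1]
        held_or_gained_next = daily_weights_kg[day + 1] >= daily_weights_kg[day]
        if gained_today and held_or_gained_next:
            return day
    return None


# Illustrative example: the gain on day 3 is sustained on day 4.
print(days_to_weight_gain([41.0, 40.8, 40.8, 41.1, 41.1, 41.4]))  # -> 3
```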
Data Analysis
Patient and hospitalization characteristics were summarized. A sample size of at least 14 in each group was estimated as necessary to detect a 50% reduction in supervision cost between the groups using alpha = 0.05, a power of 80%, a mean cost of $4,400 in the NA group, and a standard deviation of $1,600. Wilcoxon rank-sum tests were used to assess differences in median supervision cost between NA and CVM use. Differences in mean LOS and days to weight gain between NA and CVM use were assessed with t-tests because these data were normally distributed.
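For illustration, the comparisons described map onto standard SciPy calls: a Wilcoxon rank-sum test for the skewed supervision-cost data and two-sample t-tests for the approximately normally distributed LOS and days-to-weight-gain outcomes. The arrays below are simulated placeholders, not study data, and this sketch is not the authors’ analysis code; a corresponding sample-size calculation could be done with statsmodels’ TTestIndPower, though the exact inputs behind the authors’ estimate of 14 per group are not detailed here.

```python
import numpy as np
from scipy import stats

# Placeholder arrays; the real study compared 23 NA vs 14 CVM admissions.
rng = np.random.default_rng(0)
cost_na = rng.normal(4400, 1600, size=23)   # supervision cost, $/admission
cost_cvm = rng.normal(2200, 1600, size=14)
los_na = rng.normal(11.7, 4.0, size=23)     # length of stay, days
los_cvm = rng.normal(9.8, 4.0, size=14)

# Skewed cost data: compare groups with a Wilcoxon rank-sum test.
cost_stat, cost_p = stats.ranksums(cost_na, cost_cvm)

# Approximately normal outcomes: compare means with a two-sample t-test.
los_stat, los_p = stats.ttest_ind(los_na, los_cvm)

print(f"Median cost NA vs CVM: {np.median(cost_na):,.0f} vs {np.median(cost_cvm):,.0f}"
      f" (rank-sum p = {cost_p:.3f})")
print(f"Mean LOS NA vs CVM: {los_na.mean():.1f} vs {los_cvm.mean():.1f}"
      f" (t-test p = {los_p:.2f})")
```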
RESULTS
Patient Characteristics and Supervision Costs
The study included 37 consecutive admissions (NA = 23; CVM = 14) representing 35 unique patients. Patients were female, primarily non-Hispanic White, and privately insured (Table 1). Median supervision cost was significantly lower with CVM, at $1,166/admission, than with NA supervision, at $4,104/admission (P < .001, Table 2).
Balancing Measures, Acceptability, and Feasibility
Mean LOS was 11.7 days for NA and 9.8 days for CVM (P = .27; Table 2). The mean number of days to weight gain was 3.1 and 3.6 days, respectively (P = .28). No patients converted from CVM to NA supervision. One patient with SI converted to CVM after the SI resolved, and two patients required ongoing NA supervision due to continued SI. There were no reported refusals, technology failures, or unplanned discontinuations of CVM. One patient/family reported excessive CVM redirection of behavior.
DISCUSSION
This is the first description of CVM use in adolescent patients or in patients with eating disorders. Our results suggest that CVM is feasible and less costly in this population than one-to-one NA supervision, without statistically significant differences in LOS or time to weight gain. Patients who received CVM along with any NA supervision (other than mealtime supervision alone) were analyzed in the CVM group; therefore, this study may underestimate the cost savings from CVM supervision. This innovative use of CVM may represent an opportunity for hospitals to repurpose monitoring technology for more efficient supervision of patients with eating disorders.
This pediatric pilot study adds to the growing body of literature in adult patients suggesting CVM supervision may be a feasible inpatient cost-reduction strategy.9,10 One single-center study demonstrated that the use of CVM with adult inpatients led to fewer unsafe behaviors, eg, patient removal of intravenous catheters and oxygen therapy. Personnel savings exceeded the original investment cost of the monitor within one fiscal quarter.9 Results of another study suggest that CVM use with hospitalized adults who required supervision to prevent falls was associated with improved patient and family satisfaction.14 In the absence of a gold standard for supervision of patients hospitalized with eating disorders, CVM technology is a tool that may balance cost, care quality, and patient experience. Given the upfront investment in CVM units, this technology may be most appropriate for institutions already using CVM for other inpatient indications.
Although our institutional cost of CVM use was similar to that reported by other institutions,11,15 the single-center design of this pilot study limits the generalizability of our findings. Unadjusted results of this observational study may be confounded by indication bias. As this was a pilot study, it was powered to detect a clinically significant difference in cost between NA and CVM supervision. While statistically significant differences were not seen in LOS or weight gain, this pilot study was not powered to detect potential differences or to adjust for all potential confounders (eg, other mental health conditions or comorbidities, eating disorder type, previous hospitalizations). Future studies should include these considerations in estimating sample sizes. The ability to conduct a robust cost-effectiveness analysis was also limited by cost data availability and reliance on staffing assumptions to calculate supervision costs. However, these findings will be important for valid effect size estimates for future interventional studies that rigorously evaluate CVM effectiveness and safety. Patients and families were not formally surveyed about their experiences with CVM, and the patient and family experience is another important outcome to consider in future studies.
CONCLUSION
The results of this pilot study suggest that supervision costs for patients admitted for medical stabilization of eating disorders were significantly lower with CVM than with one-to-one NA supervision, without a statistically significant difference in hospitalization LOS or time to weight gain. These findings are particularly important as hospitals seek opportunities to reduce costs while providing safe and effective care. Future efforts should focus on evaluating clinical outcomes and patient experiences with this technology and on strategies to maximize efficiency to offset the initial device cost.
Disclosures
The authors have no financial relationships relevant to this article to disclose. The authors have no conflicts of interest relevant to this article to disclose.
1. Zhao Y, Encinosa W. An update on hospitalizations for eating disorders, 1999 to 2009: statistical brief #120. In: Healthcare Cost and Utilization Project (HCUP) Statistical Briefs. Rockville, MD: Agency for Healthcare Research and Quality (US); 2006.
2. Bardach NS, Coker TR, Zima BT, et al. Common and costly hospitalizations for pediatric mental health disorders. Pediatrics. 2014;133(4):602-609. doi: 10.1542/peds.2013-3165.
3. Society for Adolescent Health and Medicine; Golden NH, et al. Position paper of the Society for Adolescent Health and Medicine: medical management of restrictive eating disorders in adolescents and young adults. J Adolesc Health. 2015;56(1):121-125. doi: 10.1016/j.jadohealth.2014.10.259.
4. Katzman DK. Medical complications in adolescents with anorexia nervosa: a review of the literature. Int J Eat Disord. 2005;37(S1):S52-S59; discussion S87-S89. doi: 10.1002/eat.20118.
5. Strandjord SE, Sieke EH, Richmond M, Khadilkar A, Rome ES. Medical stabilization of adolescents with nutritional insufficiency: a clinical care path. Eat Weight Disord. 2016;21(3):403-410. doi: 10.1007/s40519-015-0245-5.
6. Kells M, Davidson K, Hitchko L, O’Neil K, Schubert-Bob P, McCabe M. Examining supervised meals in patients with restrictive eating disorders. Appl Nurs Res. 2013;26(2):76-79. doi: 10.1016/j.apnr.2012.06.003.
7. Leclerc A, Turrini T, Sherwood K, Katzman DK. Evaluation of a nutrition rehabilitation protocol in hospitalized adolescents with restrictive eating disorders. J Adolesc Health. 2013;53(5):585-589. doi: 10.1016/j.jadohealth.2013.06.001.
8. Kells M, Schubert-Bob P, Nagle K, et al. Meal supervision during medical hospitalization for eating disorders. Clin Nurs Res. 2017;26(4):525-537. doi: 10.1177/1054773816637598.
9. Jeffers S, Searcey P, Boyle K, et al. Centralized video monitoring for patient safety: a Denver Health Lean journey. Nurs Econ. 2013;31(6):298-306.
10. Sand-Jecklin K, Johnson JR, Tylka S. Protecting patient safety: can video monitoring prevent falls in high-risk patient populations? J Nurs Care Qual. 2016;31(2):131-138. doi: 10.1097/NCQ.0000000000000163.
11. Burtson PL, Vento L. Sitter reduction through mobile video monitoring: a nurse-driven sitter protocol and administrative oversight. J Nurs Adm. 2015;45(7-8):363-369. doi: 10.1097/NNA.0000000000000216.
12. Centers for Disease Control and Prevention. ICD-9-CM guidelines, 9th ed. https://www.cdc.gov/nchs/data/icd/icd9cm_guidelines_2011.pdf. Accessed April 11, 2018.
13. Centers for Disease Control and Prevention. ICD-9-CM code conversion table. https://www.cdc.gov/nchs/data/icd/icd-9-cm_fy14_cnvtbl_final.pdf. Accessed April 11, 2018.
14. Cournan M, Fusco-Gessick B, Wright L. Improving patient safety through video monitoring. Rehabil Nurs. 2016. doi: 10.1002/rnj.308.
15. Rochefort CM, Ward L, Ritchie JA, Girard N, Tamblyn RM. Patient and nurse staffing characteristics associated with high sitter use costs. J Adv Nurs. 2012;68(8):1758-1767. doi: 10.1111/j.1365-2648.2011.05864.x.
© 2019 Society of Hospital Medicine
Interhospital Transfer: Transfer Processes and Patient Outcomes
The transfer of patients between acute care hospitals (interhospital transfer [IHT]) occurs regularly among patients with a variety of diagnoses, in theory, to gain access to unique specialty services and/or a higher level of care, among other reasons.1,2
However, the practice of IHT is variable and nonstandardized,3,4 and existing data largely suggest that transferred patients experience worse outcomes, including longer length of stay, higher hospitalization costs, longer ICU time, and greater mortality, even with rigorous adjustment for confounding by indication.5,6 Though there are many possible reasons for these findings, existing literature suggests that there may be aspects of the transfer process itself that contribute to these outcomes.2,6,7
Understanding which aspects of the transfer process contribute to poor patient outcomes is a key first step toward the development of targeted quality improvement initiatives to improve this process of care. In this study, we aim to examine the association between select characteristics of the transfer process, including the timing of transfer and workload of the admitting physician team, and clinical outcomes among patients undergoing IHT.
METHODS
Data and Study Population
We performed a retrospective analysis of patients aged ≥18 years who were transferred to Brigham and Women’s Hospital (BWH), a 777-bed tertiary care hospital, from another acute care hospital between January 2005 and September 2013. Dates of inclusion were purposefully chosen prior to BWH implementation of a new electronic health record system to avoid potential information bias. As at most academic medical centers, night coverage at BWH differs by service and includes a combination of long-call admitting teams and night float coverage. On weekends, many services are less well staffed, and some procedures may only be available if needed emergently. Some services have caps on the daily number of admissions or total patient census, but none have caps on the number of discharges per day. Patients were excluded from analysis if they left BWH against medical advice, were transferred from closely affiliated hospitals with shared personnel and electronic health records (Brigham and Women’s Faulkner Hospital, Dana Farber Cancer Institute), were transferred from inpatient psychiatric or inpatient hospice facilities, or were transferred to obstetrics or nursery services. Data were obtained from administrative sources and the Research Patient Data Repository (RPDR), a centralized clinical data repository that gathers data from various hospital legacy systems and stores them in one data warehouse.8 Our study was approved by the Partners Institutional Review Board (IRB) with a waiver of patient consent.
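To make the cohort definition concrete, the sketch below expresses the inclusion and exclusion criteria above as a sequence of filters on an administrative admissions table. This is a minimal illustration in Python/pandas, not the authors’ actual RPDR extraction; the DataFrame and column names (admissions, age_years, transfer_from_acute_care, and so on) are hypothetical.

    import pandas as pd

    def build_iht_cohort(admissions: pd.DataFrame) -> pd.DataFrame:
        """Apply the study's inclusion/exclusion criteria to an admissions table.

        Assumes one row per hospitalization, with illustrative columns:
        age_years, transfer_from_acute_care (bool), admit_date (datetime),
        left_ama, affiliated_sending_hospital, from_psych_or_hospice,
        obstetrics_or_nursery_service (all bool).
        """
        df = admissions.copy()
        # Inclusion: adults transferred from another acute care hospital, Jan 2005-Sep 2013
        df = df[(df["age_years"] >= 18) & df["transfer_from_acute_care"]]
        df = df[(df["admit_date"] >= "2005-01-01") & (df["admit_date"] <= "2013-09-30")]
        # Exclusions listed in the Methods
        df = df[~df["left_ama"]]
        df = df[~df["affiliated_sending_hospital"]]      # eg, BWFH, Dana Farber
        df = df[~df["from_psych_or_hospice"]]
        df = df[~df["obstetrics_or_nursery_service"]]
        return df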
Transfer Process Characteristics
Predictors included select characteristics of the transfer process: (1) day of week of transfer, dichotomized into Friday through Sunday (“weekend”) versus Monday through Thursday (“weekday”);9 Friday was included with “weekend” given the suggestion of an increased volume of transfers in advance of the weekend; (2) time of arrival of the transferred patient, categorized as “daytime,” “evening,” or “nighttime”; (3) time delay between transfer acceptance and patient arrival, categorized into intervals ranging from 0-12 hours to more than 48 hours; and (4) admitting team busyness on the day of patient arrival, defined as the total number of admissions and discharges performed by the admitting team on that day.10
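As an illustration of how these predictors could be derived from arrival timestamps and daily team logs, the sketch below continues in Python/pandas with hypothetical column names (arrival_dt, accept_dt, team_admissions, team_discharges). The hour cutoffs separating daytime, evening, and nighttime are placeholders, not the study’s definitions.

    import pandas as pd

    def add_transfer_predictors(df: pd.DataFrame) -> pd.DataFrame:
        """Derive the four transfer-process predictors (illustrative only)."""
        out = df.copy()

        # (1) Weekend (Friday-Sunday) versus weekday (Monday-Thursday) transfer
        out["weekend_transfer"] = out["arrival_dt"].dt.dayofweek.isin([4, 5, 6])

        # (2) Time-of-day category; cutoffs below are placeholders
        def period(hour: int) -> str:
            if 7 <= hour < 17:
                return "daytime"
            if 17 <= hour < 22:
                return "evening"
            return "nighttime"
        out["arrival_period"] = out["arrival_dt"].dt.hour.map(period)

        # (3) Delay from transfer acceptance to arrival, in hour bands
        delay_h = (out["arrival_dt"] - out["accept_dt"]).dt.total_seconds() / 3600
        out["delay_category"] = pd.cut(
            delay_h,
            bins=[0, 12, 24, 48, float("inf")],
            labels=["0-12h", "12-24h", "24-48h", ">48h"],
            include_lowest=True,
        )

        # (4) Admitting team busyness: admissions plus discharges on the arrival day
        out["team_busyness"] = out["team_admissions"] + out["team_discharges"]
        return out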
Outcomes
Outcomes included transfer to the intensive care unit (ICU) within 48 hours of arrival and 30-day mortality from date of index admission.5,6
Patient Characteristics
Covariates for adjustment included: patient age, sex, race, Elixhauser comorbidity score,11 Diagnosis-Related Group (DRG)-weight, insurance status, year of admission, number of preadmission medications, and service of admission.
Statistical Analyses
We used descriptive statistics to display baseline characteristics and performed a series of univariable and multivariable logistic regression models to estimate the adjusted odds of each outcome associated with each transfer process characteristic, adjusting for all covariates (PROC LOGISTIC, SAS Institute, Cary, North Carolina). For analyses of ICU transfer within 48 hours of arrival, all patients initially admitted to the ICU at the time of transfer were excluded.
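The models themselves were fit in SAS; as a rough analogue, the following sketch fits one such multivariable logistic model in Python with statsmodels, assuming an analytic DataFrame (cohort) that contains the outcome, the predictor of interest, and the covariates listed under Patient Characteristics (all column names are illustrative).

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def fit_mortality_model(cohort: pd.DataFrame):
        """Adjusted odds of 30-day mortality by arrival period (illustrative)."""
        formula = (
            "mortality_30d ~ C(arrival_period, Treatment(reference='daytime'))"
            " + age + C(sex) + C(race) + elixhauser_score + drg_weight"
            " + C(insurance) + C(admit_year) + n_preadmission_meds + C(service)"
        )
        fit = smf.logit(formula, data=cohort).fit()
        odds_ratios = np.exp(fit.params)      # adjusted odds ratios
        ci_95 = np.exp(fit.conf_int())        # 95% CIs on the odds ratio scale
        return odds_ratios, ci_95

The ICU-transfer model would be analogous, fit on the subset of patients not initially admitted to the ICU.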
In secondary analyses, we used a combined day-of-week and time-of-day variable (ie, Monday day, Monday evening, Monday night, Tuesday day, and so on, with Monday day as the reference group) to obtain a more detailed evaluation of the association between timing of transfer and patient outcomes. We also performed stratified analyses evaluating the association of each transfer process characteristic with the adjusted odds of 30-day mortality, stratified by service of admission (ie, at the time of transfer to BWH), adjusting for all covariates. For all analyses, two-sided P values < .05 were considered significant.
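For the secondary analyses, the combined day/time predictor and the service-stratified models might be set up as follows (again a sketch under the same assumed column names, with Monday daytime as the reference level).

    import pandas as pd

    def add_day_time_category(cohort: pd.DataFrame) -> pd.DataFrame:
        """Create the 21-level combined day-of-week/time-of-day variable."""
        out = cohort.copy()
        day = out["arrival_dt"].dt.day_name().str[:3]            # Mon, Tue, ...
        out["day_time"] = day + " " + out["arrival_period"]      # eg, "Mon daytime"
        return out

    def stratified_mortality_models(cohort: pd.DataFrame, model_fn):
        """Fit the adjusted model separately within each admitting service."""
        return {service: model_fn(subset) for service, subset in cohort.groupby("service")}

In the regression formula, the reference level would then be set with C(day_time, Treatment(reference='Mon daytime')).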
RESULTS
Overall, 24,352 patients met our inclusion criteria and underwent IHT, of whom 2,174 (8.9%) died within 30 days. Of the 22,910 transferred patients originally admitted to a non-ICU service, 5,464 (23.8%) underwent ICU transfer within 48 hours of arrival. Cohort characteristics are shown in Table 1.
Multivariable regression analyses demonstrated no significant association between weekend (versus weekday) transfer or increased time delay between patient acceptance and arrival (>48 hours) and adjusted odds of ICU transfer within 48 hours or 30-day mortality. However, they did demonstrate that nighttime (versus daytime) transfer was associated with greater adjusted odds of both ICU transfer and 30-day mortality. Increased admitting team busyness was associated with lower adjusted odds of ICU transfer but was not significantly associated with adjusted odds of 30-day mortality (Table 2). As expected, decreased time delay between patient acceptance and arrival (0-12 hours) was associated with increased adjusted odds of both ICU transfer (adjusted OR 2.68; 95% CI 2.29, 3.15) and 30-day mortality (adjusted OR 1.25; 95% CI 1.03, 1.53) compared with 12-24 hours (results not shown). Time delay >48 hours was not associated with either outcome.
Regression analyses with the combined day/time variable demonstrated that compared with Monday daytime transfer, Sunday night transfer was significantly associated with increased adjusted odds of 30-day mortality, and Friday night transfer was associated with a trend toward increased 30-day mortality (adjusted OR [aOR] 1.88; 95% CI 1.25, 2.82, and aOR 1.43; 95% CI 0.99, 2.06, respectively). We also found that all nighttime transfers (ie, Monday through Sunday night) were associated with increased adjusted odds of ICU transfer within 48 hours (as compared with Monday daytime transfer). Other days/time analyses were not significant.
Univariable and multivariable analyses stratified by service were performed (Appendix). Multivariable stratified analyses demonstrated that weekend transfer, nighttime transfer, and increased admitting team busyness were associated with increased adjusted odds of 30-day mortality among cardiothoracic (CT) and gastrointestinal (GI) surgical service patients. Increased admitting team busyness was also associated with increased mortality among ICU service patients but was associated with decreased mortality among cardiology service patients. An increased time delay between patient acceptance and arrival was associated with decreased mortality among CT and GI surgical service patients (Figure; Appendix). Other adjusted stratified outcomes were not significant.
DISCUSSION
In this study of 24,352 patients undergoing IHT, we found no significant association between weekend transfer or increased time delay between transfer acceptance and arrival and patient outcomes in the cohort as a whole; but we found that nighttime transfer is associated with increased adjusted odds of both ICU transfer within 48 hours and 30-day mortality. Our analyses combining day-of-week and time-of-day demonstrate that Sunday night transfer is particularly associated with increased adjusted odds of 30-day mortality (as compared with Monday daytime transfer), and show a trend toward increased mortality with Friday night transfers. These detailed analyses otherwise reinforce that nighttime transfer across all nights of the week is associated with increased adjusted odds of ICU transfer within 48 hours. We also found that increased admitting team busyness on the day of patient transfer is associated with decreased odds of ICU transfer, though this may solely be reflective of higher turnover services (ie, cardiology) caring for lower acuity patients, as suggested by secondary analyses stratified by service. In addition, secondary analyses demonstrated differential associations between weekend transfers, nighttime transfers, and increased team busyness on the odds of 30-day mortality based on service of transfer. These analyses showed that patients transferred to higher acuity services requiring procedural care, including CT surgery, GI surgery, and Medical ICU, do worse under all three circumstances as compared with patients transferred to other services. Secondary analyses also demonstrated that increased time delay between patient acceptance and arrival is inversely associated with 30-day mortality among CT and GI surgery service patients, likely reflecting lower acuity patients (ie, less sick patients are less rapidly transferred).
There are several possible explanations for these findings. Patients transferred to surgical services at night may reflect a more urgent need for surgery and include a sicker cohort of patients, possibly explaining these findings. Alternatively, or in addition, both weekend and nighttime hospital admission expose patients to similar potential risks, ie, limited resources available during off-peak hours. Our findings could, therefore, reflect the possibility that patients transferred to higher acuity services in need of procedural care are most vulnerable to off-peak timing of transfer. Similar data looking at patients admitted through the emergency room (ER) find the strongest effect of off-peak admissions on patients in need of procedures, including GI hemorrhage,12 atrial fibrillation,13 and acute myocardial infarction (AMI),14 arguably because of the limited availability of necessary interventions. Patients undergoing IHT are a sicker cohort than those admitted through the ER and, therefore, may be even more vulnerable to these issues.3,5 This is supported by our findings that Sunday night transfers (and the trend toward Friday night transfers) are associated with greater mortality compared with Monday daytime transfers, when at-the-ready resources and/or specialty personnel may be less available (Sunday night) and delays until receipt of necessary procedures may be longer (Friday night). Though we did not observe similar results among cardiology service transfers, as may be expected based on existing literature,13,14 this subset of patients includes more heterogeneous diagnoses (ie, not solely those that require acute intervention) and exhibited a low level of acuity (low Elixhauser score and DRG-weight, data not shown).
We also found that increased admitting team busyness on the day of patient transfer is associated with increased odds of 30-day mortality among CT surgery, GI surgery, and ICU service transfers. As above, there are several possible explanations for this finding. It is possible that among these services, only the sickest/neediest patients are accepted for transfer when teams are busiest, explaining our findings. Though this explanation is possible, the measure of team “busyness” includes patient discharge, thereby increasing, not decreasing, availability for incoming patients, making this explanation less likely. Alternatively, it is possible that this finding reflects reverse causation, ie, that teams have less ability to discharge/admit new patients when caring for particularly sick/unstable patient transfers, though this assumes that transferred patients arrive earlier in the day (eg, in time to influence discharge decisions), which infrequently occurs (Table 1). Lastly, it is possible that this subset of patients is more vulnerable to the workload of the team caring for them at the time of their arrival. With high patient turnover (admissions/discharges), the time allocated to each patient’s care may be diminished (ie, “work compression,” trying to do the same amount of work in less time), and may result in decreased time to care for the transferred patient. This has been shown to influence patient outcomes at the time of patient discharge.10
In trying to understand why we observed an inverse relationship between admitting team busyness and odds of ICU transfer within 48 hours, we believe this finding is largely driven by cardiology service transfers, which comprise the highest volume of transferred patients in our cohort (Table 1), and are low acuity patients. Within this population of patients, admitting team busyness is likely a surrogate variable for high turnover/low acuity. This idea is supported by our findings that admitting team busyness is associated with decreased adjusted odds of 30-day mortality in this group (and only in this group).
Similarly, our observed inverse relationship between increased time delay and 30-day mortality among CT and GI surgical service patients is also likely reflective of lower acuity patients. We anticipated that decreased time delay (0-12 hours) would be reflective of greater patient acuity (supported by our findings that decreased time delay is associated with increased odds of ICU transfer and 30-day mortality). However, our findings also suggest that increased time delay (>48 hours) is similarly representative of lower patient acuity and therefore an imperfect measure of discontinuity and/or harmful delays in care during IHT (see limitations below).
Our study is subject to several limitations. This is a single site study; given known variation in transfer practices between hospitals,3 it is possible that our findings are not generalizable. However, given similar existing data on patients admitted through the ER, it is likely our findings may be reflective of IHT to similar tertiary referral hospitals. Second, although we adjusted for patient characteristics, there remains the possibility of unmeasured confounding and other bias that account for our results, as discussed. Third, although the definition of “busyness” used in this study was chosen based on prior data demonstrating an effect on patient outcomes,10 we did not include other measures of busyness that may influence outcomes of transferred patients such as overall team census or hospital busyness. However, the workload associated with a high volume of patient admissions and discharges is arguably a greater reflection of “work compression” for the admitting team compared with overall team census, which may reflect a more static workload with less impact on the care of a newly transferred patient. Also, although hospital census may influence the ability to transfer (ie, lower volume of transferred patients during times of high hospital census), this likely has less of an impact on the direct care of transferred patients than the admitting team’s workload. It is more likely that it would serve as a confounder (eg, sicker patients are accepted for transfer despite high hospital census, while lower risk patients are not).
Nevertheless, future studies should further evaluate the association of other measures of busyness/workload with outcomes of transferred patients. Lastly, though we anticipated that time delay between transfer acceptance and arrival would be correlated with patient acuity, we hypothesized that longer delay might affect patient continuity and communication and thereby impact patient outcomes. However, our results demonstrate that our measurement of this variable was unsuccessful in disentangling patient acuity from our intended evaluation of these vulnerable aspects of IHT. It is likely that a more detailed evaluation is required to more fully explore potential challenges that may occur with greater time delays (eg, suboptimal communication regarding changes in clinical status during this time period, delays in treatment). Similarly, though our study evaluates the association between nighttime and weekend transfer (and the interaction between these) and patient outcomes, we did not evaluate other intermediate outcomes that may be more affected by the timing of transfer, such as diagnostic errors or delays in procedural care, which warrant further investigation. We do not directly examine the underlying reasons that explain our observed associations, and thus more research is needed to identify these as well as to design and evaluate solutions.
Collectively, our findings suggest that high acuity patients in need of procedural care experience worse outcomes during off-peak times of transfer, and during times of high care-team workload. Though further research is needed to identify underlying reasons to explain our findings, both the timing of patient transfer (when modifiable) and workload of the team caring for the patient on arrival may serve as potential targets for interventions to improve the quality and safety of IHT for patients at greatest risk.
Disclosures
Dr. Mueller and Ms. Fiskio have nothing to disclose. Dr. Schnipper is the recipient of grant funding from Mallinckrodt Pharmaceuticals to conduct an investigator-initiated study of predictors and impact of opioid-related adverse drug events.
1. Iwashyna TJ. The incomplete infrastructure for interhospital patient transfer. Crit Care Med. 2012;40(8):2470-2478. https://doi.org/10.1097/CCM.0b013e318254516f.
2. Mueller SK, Shannon E, Dalal A, Schnipper JL, Dykes P. Patient and physician experience with interhospital transfer: a qualitative study. J Patient Saf. 2018. https://doi.org/10.1097/PTS.0000000000000501
3. Mueller SK, Zheng J, Orav EJ, Schnipper JL. Rates, predictors and variability of interhospital transfers: a national evaluation. J Hosp Med. 2017;12(6):435-442. https://doi.org/10.12788/jhm.2747.
4. Bosk EA, Veinot T, Iwashyna TJ. Which patients and where: a qualitative study of patient transfers from community hospitals. Med Care. 2011;49(6):592-598. https://doi.org/10.1097/MLR.0b013e31820fb71b.
5. Sokol-Hessner L, White AA, Davis KF, Herzig SJ, Hohmann SF. Interhospital transfer patients discharged by academic hospitalists and general internists: characteristics and outcomes. J Hosp Med. 2016;11(4):245-50. https://doi.org/10.1002/jhm.2515.
6. Mueller S, Zheng J, Orav EJP, Schnipper JL. Inter-hospital transfer and patient outcomes: a retrospective cohort study. BMJ Qual Saf. 2018. https://doi.org/10.1136/bmjqs-2018-008087.
7. Mueller SK, Schnipper JL. Physician perspectives on interhospital transfers. J Patient Saf. 2016. https://doi.org/10.1097/PTS.0000000000000312.
8. Research Patient Data Registry (RPDR). http://rc.partners.org/rpdr. Accessed April 20, 2018.
9. Bell CM, Redelmeier DA. Mortality among patients admitted to hospitals on weekends as compared with weekdays. N Engl J Med. 2001;345(9):663-668. https://doi.org/10.1056/NEJMsa003376
10. Mueller SK, Donze J, Schnipper JL. Intern workload and discontinuity of care on 30-day readmission. Am J Med. 2013;126(1):81-88. https://doi.org/10.1016/j.amjmed.2012.09.003.
11. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27.
12. Ananthakrishnan AN, McGinley EL, Saeian K. Outcomes of weekend admissions for upper gastrointestinal hemorrhage: a nationwide analysis. Clin Gastroenterol Hepatol. 2009;7(3):296-302e1. https://doi.org/10.1016/j.cgh.2008.08.013.
13. Deshmukh A, Pant S, Kumar G, Bursac Z, Paydak H, Mehta JL. Comparison of outcomes of weekend versus weekday admissions for atrial fibrillation. Am J Cardiol. 2012;110(2):208-211. https://doi.org/10.1016/j.amjcard.2012.03.011.
14. Clarke MS, Wills RA, Bowman RV, et al. Exploratory study of the ‘weekend effect’ for acute medical admissions to public hospitals in Queensland, Australia. Intern Med J. 2010;40(11):777-783. https://doi.org/10.1111/j.1445-5994.2009.02067.x.
The transfer of patients between acute care hospitals (interhospital transfer [IHT]) occurs regularly among patients with a variety of diagnoses, in theory, to gain access to unique specialty services and/or a higher level of care, among other reasons.1,2
However, the practice of IHT is variable and nonstandardized,3,4 and existing data largely suggests that transferred patients experience worse outcomes, including longer length of stay, higher hospitalization costs, longer ICU time, and greater mortality, even with rigorous adjustment for confounding by indication.5,6 Though there are many possible reasons for these findings, existing literature suggests that there may be aspects of the transfer process itself which contribute to these outcomes.2,6,7
Understanding which aspects of the transfer process contribute to poor patient outcomes is a key first step toward the development of targeted quality improvement initiatives to improve this process of care. In this study, we aim to examine the association between select characteristics of the transfer process, including the timing of transfer and workload of the admitting physician team, and clinical outcomes among patients undergoing IHT.
METHODS
Data and Study Population
We performed a retrospective analysis of patients ≥age 18 years who transferred to Brigham and Women’s Hospital (BWH), a 777-bed tertiary care hospital, from another acute care hospital between January 2005, and September 2013. Dates of inclusion were purposefully chosen prior to BWH implementation of a new electronic health records system to avoid potential information bias. As at most academic medical centers, night coverage at BWH differs by service and includes a combination of long-call admitting teams and night float coverage. On weekends, many services are less well staffed, and some procedures may only be available if needed emergently. Some services have caps on the daily number of admissions or total patient census, but none have caps on the number of discharges per day. Patients were excluded from analysis if they left BWH against medical advice, were transferred from closely affiliated hospitals with shared personnel and electronic health records (Brigham and Women’s Faulkner Hospital, Dana Farber Cancer Institute), transferred from inpatient psychiatric or inpatient hospice facilities, or transferred to obstetrics or nursery services. Data were obtained from administrative sources and the research patient data repository (RPDR), a centralized clinical data repository that gathers data from various hospital legacy systems and stores them in one data warehouse.8 Our study was approved by the Partners Institutional Review Board (IRB) with a waiver of patient consent.
Transfer Process Characteristics
Predictors included select characteristics of the transfer process, including (1) Day of week of transfer, dichotomized into Friday through Sunday (“weekend”), versus Monday through Thursday (“weekday”);9 Friday was included with “weekend” given the suggestion of increased volume of transfers in advance of the weekend; (2) Time of arrival of the transferred patient, categorized into “daytime” (7
Outcomes
Outcomes included transfer to the intensive care unit (ICU) within 48 hours of arrival and 30-day mortality from date of index admission.5,6
Patient Characteristics
Covariates for adjustment included: patient age, sex, race, Elixhauser comorbidity score,11 Diagnosis-Related Group (DRG)-weight, insurance status, year of admission, number of preadmission medications, and service of admission.
Statistical Analyses
We used descriptive statistics to display baseline characteristics and performed a series of univariable and multivariable logistic regression models to obtain the adjusted odds of each transfer process characteristic on each outcome, adjusting for all covariates (proc logistic, SAS Statistical Software, Cary, North Carolina). For analyses of ICU transfer within 48 hours of arrival, all patients initially admitted to the ICU at time of transfer were excluded.
In the secondary analyses, we used a combined day-of-week and time-of-day variable (ie, Monday day, Monday evening, Monday night, Tuesday day, and so on, with Monday day as the reference group) to obtain a more detailed evaluation of timing of transfer on patient outcomes. We also performed stratified analyses to evaluate each transfer process characteristic on adjusted odds of 30-day mortality stratified by service of admission (ie, at the time of transfer to BWH), adjusting for all covariates. For all analyses, two-sided P values < .05 were considered significant.
RESULTS
Overall, 24,352 patients met our inclusion criteria and underwent IHT, of whom 2,174 (8.9%) died within 30 days. Of the 22,910 transferred patients originally admitted to a non-ICU service, 5,464 (23.8%) underwent ICU transfer within 48 hours of arrival. Cohort characteristics are shown in Table 1.
Multivariable regression analyses demonstrated no significant association between weekend (versus weekday) transfer or increased time delay between patient acceptance and arrival (>48 hours) and adjusted odds of ICU transfer within 48 hours or 30-day mortality. However, they did demonstrate that nighttime (versus daytime) transfer was associated with greater adjusted odds of both ICU transfer and 30-day mortality. Increased admitting team busyness was associated with lower adjusted odds of ICU transfer but was not significantly associated with adjusted odds of 30-day mortality (Table 2). As expected, decreased time delay between patient acceptance and arrival (0-12 hours) was associated with increased adjusted odds of both ICU transfer (adjusted OR 2.68; 95% CI 2.29, 3.15) and 30-day mortality (adjusted OR 1.25; 95% CI 1.03, 1.53) compared with 12-24 hours (results not shown). Time delay >48 hours was not associated with either outcome.
Regression analyses with the combined day/time variable demonstrated that compared with Monday daytime transfer, Sunday night transfer was significantly associated with increased adjusted odds of 30-day mortality, and Friday night transfer was associated with a trend toward increased 30-day mortality (adjusted OR [aOR] 1.88; 95% CI 1.25, 2.82, and aOR 1.43; 95% CI 0.99, 2.06, respectively). We also found that all nighttime transfers (ie, Monday through Sunday night) were associated with increased adjusted odds of ICU transfer within 48 hours (as compared with Monday daytime transfer). Other days/time analyses were not significant.
Univariable and multivariable analyses stratified by service were performed (Appendix). Multivariable stratified analyses demonstrated that weekend transfer, nighttime transfer, and increased admitting team busyness were associated with increased adjusted odds of 30-day mortality among cardiothoracic (CT) and gastrointestinal (GI) surgical service patients. Increased admitting team busyness was also associated with increased mortality among ICU service patients but was associated with decreased mortality among cardiology service patients. An increased time delay between patient acceptance and arrival was associated with decreased mortality among CT and GI surgical service patients (Figure; Appendix). Other adjusted stratified outcomes were not significant.
DISCUSSION
In this study of 24,352 patients undergoing IHT, we found no significant association between weekend transfer or increased time delay between transfer acceptance and arrival and patient outcomes in the cohort as a whole; but we found that nighttime transfer is associated with increased adjusted odds of both ICU transfer within 48 hours and 30-day mortality. Our analyses combining day-of-week and time-of-day demonstrate that Sunday night transfer is particularly associated with increased adjusted odds of 30-day mortality (as compared with Monday daytime transfer), and show a trend toward increased mortality with Friday night transfers. These detailed analyses otherwise reinforce that nighttime transfer across all nights of the week is associated with increased adjusted odds of ICU transfer within 48 hours. We also found that increased admitting team busyness on the day of patient transfer is associated with decreased odds of ICU transfer, though this may solely be reflective of higher turnover services (ie, cardiology) caring for lower acuity patients, as suggested by secondary analyses stratified by service. In addition, secondary analyses demonstrated differential associations between weekend transfers, nighttime transfers, and increased team busyness on the odds of 30-day mortality based on service of transfer. These analyses showed that patients transferred to higher acuity services requiring procedural care, including CT surgery, GI surgery, and Medical ICU, do worse under all three circumstances as compared with patients transferred to other services. Secondary analyses also demonstrated that increased time delay between patient acceptance and arrival is inversely associated with 30-day mortality among CT and GI surgery service patients, likely reflecting lower acuity patients (ie, less sick patients are less rapidly transferred).
There are several possible explanations for these findings. Patients transferred to surgical services at night may reflect a more urgent need for surgery and a sicker cohort of patients, which could explain these findings. Alternatively, or in addition, both weekend and nighttime hospital admission expose patients to similar potential risks, ie, limited resources available during off-peak hours. Our findings could, therefore, reflect the possibility that patients transferred to higher acuity services in need of procedural care are most vulnerable to off-peak timing of transfer. Similar data on patients admitted through the emergency room (ER) find the strongest effect of off-peak admission on patients in need of procedures, including those with GI hemorrhage,12 atrial fibrillation,13 and acute myocardial infarction (AMI),14 arguably because of the limited availability of necessary interventions. Patients undergoing IHT are a sicker cohort than those admitted through the ER and, therefore, may be even more vulnerable to these issues.3,5 This is supported by our findings that Sunday night transfers (and the trend toward Friday night transfers) are associated with greater mortality compared with Monday daytime transfers, when at-the-ready resources and/or specialty personnel may be less available (Sunday night) and delays until receipt of necessary procedures may be longer (Friday night). Though we did not observe similar results among cardiology service transfers, as might be expected based on existing literature,13,14 this subset of patients includes more heterogeneous diagnoses (ie, not solely those that require acute intervention) and exhibited a low level of acuity (low Elixhauser score and DRG weight; data not shown).
We also found that increased admitting team busyness on the day of patient transfer is associated with increased odds of 30-day mortality among CT surgery, GI surgery, and ICU service transfers. As above, there are several possible explanations for this finding. It is possible that among these services, only the sickest/neediest patients are accepted for transfer when teams are busiest. However, the measure of team “busyness” includes patient discharges, which increase, rather than decrease, availability for incoming patients, making this explanation less likely. Alternatively, this finding may reflect reverse causation, ie, teams have less ability to discharge or admit new patients when caring for particularly sick or unstable transferred patients, though this assumes that transferred patients arrive earlier in the day (eg, in time to influence discharge decisions), which infrequently occurs (Table 1). Lastly, it is possible that this subset of patients is more vulnerable to the workload of the team caring for them at the time of their arrival. With high patient turnover (admissions/discharges), the time allocated to each patient’s care may be diminished (ie, “work compression,” trying to do the same amount of work in less time), leaving less time to care for the transferred patient. This has been shown to influence patient outcomes at the time of discharge.10
In trying to understand why we observed an inverse relationship between admitting team busyness and odds of ICU transfer within 48 hours, we believe this finding is largely driven by cardiology service transfers, which comprise the highest volume of transferred patients in our cohort (Table 1), and are low acuity patients. Within this population of patients, admitting team busyness is likely a surrogate variable for high turnover/low acuity. This idea is supported by our findings that admitting team busyness is associated with decreased adjusted odds of 30-day mortality in this group (and only in this group).
Similarly, our observed inverse relationship between increased time delay and 30-day mortality among CT and GI surgical service patients is also likely reflective of lower acuity patients. We anticipated that decreased time delay (0-12 hours) would be reflective of greater patient acuity (supported by our findings that decreased time delay is associated with increased odds of ICU transfer and 30-day mortality). However, our findings also suggest that increased time delay (>48 hours) is similarly representative of lower patient acuity and therefore an imperfect measure of discontinuity and/or harmful delays in care during IHT (see limitations below).
Our study is subject to several limitations. First, this is a single-site study; given known variation in transfer practices between hospitals,3 it is possible that our findings are not generalizable. However, given similar existing data on patients admitted through the ER, our findings are likely reflective of IHT to similar tertiary referral hospitals. Second, although we adjusted for patient characteristics, there remains the possibility of unmeasured confounding and other biases that may account for our results, as discussed. Third, although the definition of “busyness” used in this study was chosen based on prior data demonstrating an effect on patient outcomes,10 we did not include other measures of busyness that may influence outcomes of transferred patients, such as overall team census or hospital busyness. However, the workload associated with a high volume of patient admissions and discharges is arguably a greater reflection of “work compression” for the admitting team than overall team census, which may reflect a more static workload with less impact on the care of a newly transferred patient. Also, although hospital census may influence the ability to transfer (ie, a lower volume of transferred patients during times of high hospital census), this likely has less of an impact on the direct care of transferred patients than the admitting team’s workload. It is more likely that hospital census would serve as a confounder (eg, sicker patients are accepted for transfer despite high hospital census, while lower risk patients are not).
Nevertheless, future studies should further evaluate the association between other measures of busyness/workload and outcomes of transferred patients. Lastly, though we anticipated that the time delay between transfer acceptance and arrival would be correlated with patient acuity, we hypothesized that a longer delay might affect patient continuity and communication and thereby impact patient outcomes. However, our results demonstrate that our measurement of this variable was unable to disentangle patient acuity from our intended evaluation of these vulnerable aspects of IHT. A more detailed evaluation is likely required to explore more fully the potential challenges that may occur with greater time delays (eg, suboptimal communication regarding changes in clinical status during this period, delays in treatment). Similarly, though our study evaluates the association of nighttime and weekend transfer (and their interaction) with patient outcomes, we did not evaluate other intermediate outcomes that may be more affected by the timing of transfer, such as diagnostic errors or delays in procedural care, which warrant further investigation. We also did not directly examine the underlying reasons for our observed associations, and thus more research is needed to identify these as well as to design and evaluate solutions.
Collectively, our findings suggest that high acuity patients in need of procedural care experience worse outcomes during off-peak times of transfer, and during times of high care-team workload. Though further research is needed to identify underlying reasons to explain our findings, both the timing of patient transfer (when modifiable) and workload of the team caring for the patient on arrival may serve as potential targets for interventions to improve the quality and safety of IHT for patients at greatest risk.
Disclosures
Dr. Mueller and Dr. Schnipper have nothing to disclose. Ms. Fiskio has nothing to disclose. Dr. Schnipper is the recipient of grant funding from Mallinckrodt Pharmaceuticals to conduct an investigator-initiated study of predictors and impact of opioid-related adverse drug events.
1. Iwashyna TJ. The incomplete infrastructure for interhospital patient transfer. Crit Care Med. 2012;40(8):2470-2478. https://doi.org/10.1097/CCM.0b013e318254516f.
2. Mueller SK, Shannon E, Dalal A, Schnipper JL, Dykes P. Patient and physician experience with interhospital transfer: a qualitative study. J Patient Saf. 2018. https://doi.org/10.1097/PTS.0000000000000501.
3. Mueller SK, Zheng J, Orav EJ, Schnipper JL. Rates, predictors and variability of interhospital transfers: a national evaluation. J Hosp Med. 2017;12(6):435-442. https://doi.org/10.12788/jhm.2747.
4. Bosk EA, Veinot T, Iwashyna TJ. Which patients and where: a qualitative study of patient transfers from community hospitals. Med Care. 2011;49(6):592-598. https://doi.org/10.1097/MLR.0b013e31820fb71b.
5. Sokol-Hessner L, White AA, Davis KF, Herzig SJ, Hohmann SF. Interhospital transfer patients discharged by academic hospitalists and general internists: characteristics and outcomes. J Hosp Med. 2016;11(4):245-50. https://doi.org/10.1002/jhm.2515.
6. Mueller S, Zheng J, Orav EJP, Schnipper JL. Inter-hospital transfer and patient outcomes: a retrospective cohort study. BMJ Qual Saf. 2018. https://doi.org/10.1136/bmjqs-2018-008087.
7. Mueller SK, Schnipper JL. Physician perspectives on interhospital transfers. J Patient Saf. 2016. https://doi.org/10.1097/PTS.0000000000000312.
8. Research Patient Data Registry (RPDR). http://rc.partners.org/rpdr. Accessed April 20, 2018.
9. Bell CM, Redelmeier DA. Mortality among patients admitted to hospitals on weekends as compared with weekdays. N Engl J Med. 2001;345(9):663-668. https://doi.org/10.1056/NEJMsa003376.
10. Mueller SK, Donze J, Schnipper JL. Intern workload and discontinuity of care on 30-day readmission. Am J Med. 2013;126(1):81-88. https://doi.org/10.1016/j.amjmed.2012.09.003.
11. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27.
12. Ananthakrishnan AN, McGinley EL, Saeian K. Outcomes of weekend admissions for upper gastrointestinal hemorrhage: a nationwide analysis. Clin Gastroenterol Hepatol. 2009;7(3):296-302e1. https://doi.org/10.1016/j.cgh.2008.08.013.
13. Deshmukh A, Pant S, Kumar G, Bursac Z, Paydak H, Mehta JL. Comparison of outcomes of weekend versus weekday admissions for atrial fibrillation. Am J Cardiol. 2012;110(2):208-211. https://doi.org/10.1016/j.amjcard.2012.03.011.
14. Clarke MS, Wills RA, Bowman RV, et al. Exploratory study of the ‘weekend effect’ for acute medical admissions to public hospitals in Queensland, Australia. Intern Med J. 2010;40(11):777-783. https://doi.org/10.1111/j.1445-5994.2009.02067.x.
© 2019 Society of Hospital Medicine
Critical Errors in Inhaler Technique among Children Hospitalized with Asthma
Many studies have shown that improved control can be achieved for most children with asthma if inhaled medications are taken correctly and adequately.1-3 Drug delivery studies have shown that bioavailability of medication with a pressurized metered-dose inhaler (MDI) improves from 34% to 83% with the addition of spacer devices. This difference is largely due to the decrease in oropharyngeal deposition,1,4,5 and therefore, the use of a spacer with proper technique has been recommended in all pediatric patients.1,6
Poor inhaler technique is common among children.1,7 Previous studies of children with asthma have evaluated inhaler technique, primarily in the outpatient and community settings, and reported variable rates of error (from 45% to >90%).8,9 No studies have evaluated inhaler technique among children hospitalized with asthma. As these children represent a particularly high-risk group for morbidity and mortality,10,11 the objectives of this study were to assess errors in inhaler technique in hospitalized asthmatic children and to identify risk factors for improper use.
METHODS
As part of a larger interventional study, we conducted a prospective cross-sectional study at a tertiary urban children’s hospital. We enrolled a convenience sample of children aged 2-16 years admitted to the inpatient ward with an asthma exacerbation Monday-Friday from 8 AM to 6 PM. Participants were required to have a diagnosis of asthma (either an established diagnosis by their primary care provider or meeting National Heart, Lung, and Blood Institute [NHLBI] criteria1), have a consenting adult available, and speak English. Patients were excluded if they had a codiagnosis of an additional respiratory disease (eg, pneumonia), cardiac disease, or sickle cell anemia. The Institutional Review Board approved this study.
We asked caregivers, or children >10 years old who use their inhaler independently, to demonstrate their typical home inhaler technique using a spacer with mask (SM), a spacer with mouthpiece (SMP), or no spacer (per their usual home practice). Inhaler technique was scored using a previously validated asthma checklist (Table 1).12 Certain steps in the checklist were identified as critical: (Step 1) removing the cap, (Step 3) attaching the inhaler to a spacer, (Step 7) taking six breaths (SM), and (Step 9) holding the breath for five seconds (SMP). Only caregivers were also asked to complete questionnaires assessing their health literacy (Brief Health Literacy Screen [BHLS]), confidence (Parent Asthma Management Self-Efficacy scale [PAMSE]), and barriers to managing their child’s asthma (Barriers to Asthma Care). Demographic and medical history information was extracted from the medical chart.
Inhaler technique was evaluated in two ways by comparing: (1) patients who missed at least one critical step with those who missed none and (2) patients with an asthma checklist score <7 versus ≥7. Although inhaler technique has been measured in many different ways in past studies, these two markers (75% of steps and critical errors) were the most common.8
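As a concrete illustration of how a single demonstration could be classified under these two markers, the short sketch below uses invented step names and a simplified checklist structure; it is not the validated instrument itself.

```python
# A minimal sketch, assuming simplified step names; the real checklist has 10 (SM) or
# 12 (SMP) scored steps and the critical steps described above.
from dataclasses import dataclass, field

CRITICAL_STEPS = {
    "SM": {"remove_cap", "attach_spacer", "take_six_breaths"},
    "SMP": {"remove_cap", "attach_spacer", "hold_breath_5_seconds"},
}

@dataclass
class Demonstration:
    device: str                                    # "SM" or "SMP"
    steps_done: set = field(default_factory=set)   # checklist steps performed correctly

    def checklist_score(self) -> int:
        return len(self.steps_done)

def classify(demo: Demonstration) -> dict:
    missed_critical = CRITICAL_STEPS[demo.device] - demo.steps_done
    return {
        "missed_critical_step": bool(missed_critical),   # marker 1
        "score_below_7": demo.checklist_score() < 7,     # marker 2
    }

# Example: a mouthpiece user who skipped the breath-hold and completed 6 steps overall
demo = Demonstration("SMP", {"remove_cap", "attach_spacer", "shake_inhaler",
                             "breathe_out_fully", "press_canister", "slow_inhale"})
print(classify(demo))  # {'missed_critical_step': True, 'score_below_7': True}
```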
We assessed a number of variables to evaluate their association with improper inhaler technique. For categorical variables, the association with each outcome was evaluated using relative risks (RRs). Bivariate P-values were calculated using chi-square or Fisher’s exact tests, as appropriate. Continuous variables were assessed for associations with each outcome using two-sample t-tests. Odds ratios (ORs) and 95% confidence intervals (CIs) were calculated using logistic regression analyses. Using a model entry criterion of P < .10 on univariate tests, variables were entered into a multivariable logistic regression model for each outcome. Full models with all eligible covariates and reduced models selected via a manual backward selection process were evaluated. Two-sided P-values <.05 were considered statistically significant.
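For example, one bivariate screening step might look like the following in Python with scipy; the cell counts are invented purely to show the calculation, and a factor with P < .10 here would then be carried into the multivariable logistic model.

```python
# A hedged sketch of one bivariate comparison: a 2x2 table of a candidate factor
# (rows: exposed/unexposed) against missing a critical step. Counts are made up.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

#                 missed critical   no critical miss
table = np.array([[15, 5],      # exposed (eg, no spacer used)
                  [32, 61]])    # unexposed (spacer used)

risk_exposed = table[0, 0] / table[0].sum()
risk_unexposed = table[1, 0] / table[1].sum()
relative_risk = risk_exposed / risk_unexposed

# Chi-square test, falling back to Fisher's exact test when expected counts are small
chi2, p_chi2, dof, expected = chi2_contingency(table, correction=False)
p_value = fisher_exact(table)[1] if (expected < 5).any() else p_chi2

print(f"RR = {relative_risk:.2f}, P = {p_value:.3f}")
```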
RESULTS
Participants
From October 2016 to June 2017, 380 patients were assessed for participation; 215 were excluded for not having a parent available (59%), not speaking English (27%), or not having an asthma diagnosis (eg, viral wheezing; 14%), and an additional 52 (14%) declined to participate. Therefore, a total of 113 participants were enrolled, with demonstrations provided by 100 caregivers and 13 children. The mean age of the patients overall was 6.6 ± 3.4 years, and over half (55%) of the participants had uncontrolled asthma (NHLBI criteria1).
Errors in Inhaler Technique
The mean asthma checklist score was 6.7 (maximum score of 10 for SM and 12 for SMP). A third (35%) scored <7 on the asthma checklist, and 42% of participants missed at least one critical step. Overall, children who missed a critical step were significantly older (7.8 [6.7-8.9] vs 5.8 [5.1-6.5] years; P = .002). More participants missed a critical step with the SMP than with the SM (75% [51%-90%] vs 36% [27%-46%]; P = .003), and SMP use was the most prominent factor for missing a critical step in the adjusted regression analysis (OR 6.95 [1.71-28.23]; P = .007). The most commonly missed step for SM was breathing normally for 30 seconds; for SMP, the most commonly missed steps were breathing out fully and breathing away from the spacer (Table 1). Twenty participants (18%) did not use a spacer device; these patients were older than those who did use a spacer (mean age 8.5 [6.7-10.4] vs 6.2 [5.6-6.9] years; P = .005); however, no other significant differences were identified.
Demographic, Medical History, and Socioeconomic Characteristics
Overall, race, ethnicity, and insurance status did not vary significantly based on asthma checklist score ≥7 or missing a critical step. Patients in the SM group who had received inpatient asthma education during a previous admission, had a history of pediatric intensive care unit (PICU) admission, and had been prescribed a daily controller were less likely to miss a critical step (Table 2). Parental education level varied, with 33% having a high school degree or less, but was not associated with asthma checklist score or missing critical steps. Parental BHLS and parental confidence (PAMSE) were not significantly associated with inhaler proficiency. However, transportation-related barriers were more common in patients with checklist scores <7 and more missed critical steps (OR 1.62 [1.06-2.46]; P = .02).
DISCUSSION
Nearly half of the participants in this study missed at least one critical step in inhaler use. In addition, 18% did not use a spacer when demonstrating their inhaler technique. Despite robust evidence that asthma education can improve both asthma skills and clinical outcomes,13 our study demonstrates that a large gap remains in proper inhaler technique among asthmatic patients presenting for inpatient care. Specifically, in the mouthpiece group, steps related to breathing technique were the most commonly missed. Our results also show that inhaler technique errors were most prominent in the adolescent population, possibly coinciding with the process of transitioning to a mouthpiece and more independence in medication administration. Adolescents may therefore be a high-impact population on which to focus inpatient asthma education. Additionally, we found that a previous PICU admission and previous inpatient asthma education were associated with missing fewer critical steps in inhaler technique. This finding is consistent with that of another study that evaluated inhaler technique in the emergency department and found that previous hospitalization for asthma was inversely related to improper inhaler use (RR 0.55, 95% CI 0.36-0.84).14 This supports the idea that, when provided, inpatient education can improve inhaler administration skills.
Previous studies conducted in the outpatient setting have demonstrated variable rates of inhaler skill, from 0% to approximately 89% of children performing all steps of inhalation correctly.8 This wide range may be related to variations in the number and definition of critical steps between studies. In our study, we highlighted removing the cap, attaching a spacer, and adequate breathing technique as critical steps because failure to complete them would significantly reduce lung deposition of medication. While past studies have evaluated MDI use with these devices, our study is the first to report differences in technique problems between SM and SMP use. As asthma educational interventions are developed and/or implemented, it is important to stress that different steps in inhaler technique are being missed by those using a mask versus a mouthpiece.
The limitations of this study include that it was at a single center with a primarily urban and English-speaking population; however, this study population reflects the racial diversity of pediatric asthma patients. Further studies may explore the reproducibility of these findings at multiple centers and with non-English-speaking families. This study included younger patients than in some previous publications investigating asthma; however, all patients met the criteria for asthma diagnosis and this age range is reflective of patients presenting for inpatient asthma care. Furthermore, because of our daytime research hours, 59% of patients were excluded because a primary caregiver was not available. It is possible that these families have decreased access to inpatient asthma educators as well and may be another target group for future studies. Finally, a large proportion of parents had a college education or greater in our sample. However, there was no association within our analysis between parental education level and inhaler proficiency.
The findings from this study indicate that continued efforts are needed to ensure that inhaler technique is adequate for all families, regardless of their educational status or socioeconomic background, especially for adolescents and in the setting of poor asthma control. Furthermore, our findings support that inhaler technique education may be beneficial in the inpatient setting and that acute care settings can provide a valuable “teachable moment.”14,15
CONCLUSION
Errors in inhaler technique are prevalent in pediatric inpatients with asthma, primarily those using a mouthpiece device. Educational efforts in both inpatient and outpatient settings have the potential to improve drug delivery and therefore asthma control. Inpatient hospitalization may serve as a platform for further studies to investigate innovative educational interventions.
Acknowledgments
The authors thank Tina Carter for her assistance in the recruitment and data collection and Ashley Hull and Susannah Butters for training the study staff on the use of the asthma checklist.
Disclosures
Dr. Gupta receives research grant support from the National Institutes of Health and the United Healthcare Group. Dr. Gupta serves as a consultant for DBV Technology, Aimmune Therapeutics, Kaleo & BEFORE Brands. Dr. Gupta has received lecture fees/honorariums from the Allergy Asthma Network & the American College of Asthma, Allergy & Immunology. Dr. Press reports research support from the Chicago Center for Diabetes Translation Research Pilot and Feasibility Grant, the Bucksbaum Institute for Clinical Excellence Pilot Grant Program, the Academy of Distinguished Medical Educators, the Development of Novel Hospital-initiated Care Bundle in Adults Hospitalized for Acute Asthma: the 41st Multicenter Airway Research Collaboration (MARC-41) Study, UCM’s Innovation Grant Program, the University of Chicago-Chapin Hall Join Research Fund, the NIH/NHLBI Loan Repayment Program, 1 K23 HL118151 01, NIH NLBHI R03 (RFA-HL-18-025), the George and Carol Abramson Pilot Awards, the COPD Foundation Green Shoots Grant, the University of Chicago Women’s Board Grant, NIH NHLBI UG1 (RFA-HL-17-009), and the CTSA Pilot Award, outside the submitted work. These disclosures have been reported to Dr. Press’ institutional IRB board. Additionally, a management plan is on file that details how to address conflicts such as these which are sources of research support but do not directly support the work at hand. The remaining authors have no conflicts of interest relevant to the article to disclose.
Funding
This study was funded by internal grants from Ann and Robert H. Lurie Children’s Hospital of Chicago. Dr. Press was funded by a K23HL118151.
1. Expert Panel Report 3: guidelines for the diagnosis and management of asthma: full report. Washington, DC: US Department of Health and Human Services, National Institutes of Health, National Heart, Lung, and Blood Institute; 2007.
2. Hekking PP, Wener RR, Amelink M, Zwinderman AH, Bouvy ML, Bel EH. The prevalence of severe refractory asthma. J Allergy Clin Immunol. 2015;135(4):896-902. doi: 10.1016/j.jaci.2014.08.042.
3. Peters SP, Ferguson G, Deniz Y, Reisner C. Uncontrolled asthma: a review of the prevalence, disease burden and options for treatment. Respir Med. 2006;100(7):1139-1151. doi: 10.1016/j.rmed.2006.03.031.
4. Dickens GR, Wermeling DP, Matheny CJ, et al. Pharmacokinetics of flunisolide administered via metered dose inhaler with and without a spacer device and following oral administration. Ann Allergy Asthma Immunol. 2000;84(5):528-532. doi: 10.1016/S1081-1206(10)62517-3.
5. Nikander K, Nicholls C, Denyer J, Pritchard J. The evolution of spacers and valved holding chambers. J Aerosol Med Pulm Drug Deliv. 2014;27(1):S4-S23. doi: 10.1089/jamp.2013.1076.
6. Rubin BK, Fink JB. The delivery of inhaled medication to the young child. Pediatr Clin North Am. 2003;50(3):717-731. doi: 10.1016/S0031-3955(03)00049-X.
7. Roland NJ, Bhalla RK, Earis J. The local side effects of inhaled corticosteroids: current understanding and review of the literature. Chest. 2004;126(1):213-219. doi: 10.1378/chest.126.1.213.
8. Gillette C, Rockich-Winston N, Kuhn JA, Flesher S, Shepherd M. Inhaler technique in children with asthma: a systematic review. Acad Pediatr. 2016;16(7):605-615. doi: 10.1016/j.acap.2016.04.006.
9. Pappalardo AA, Karavolos K, Martin MA. What really happens in the home: the medication environment of urban, minority youth. J Allergy Clin Immunol Pract. 2017;5(3):764-770. doi: 10.1016/j.jaip.2016.09.046.
10. Crane J, Pearce N, Burgess C, Woodman K, Robson B, Beasley R. Markers of risk of asthma death or readmission in the 12 months following a hospital admission for asthma. Int J Epidemiol. 1992;21(4):737-744. doi: 10.1093/ije/21.4.737.
11. Turner MO, Noertjojo K, Vedal S, Bai T, Crump S, Fitzgerald JM. Risk factors for near-fatal asthma. A case-control study in hospitalized patients with asthma. Am J Respir Crit Care Med. 1998;157(6 Pt 1):1804-1809. doi: 10.1164/ajrccm.157.6.9708092.
12. Press VG, Arora VM, Shah LM, et al. Misuse of respiratory inhalers in hospitalized patients with asthma or COPD. J Gen Intern Med. 2011;26(6):635-642. doi: 10.1007/s11606-010-1624-2.
13. Guevara JP, Wolf FM, Grum CM, Clark NM. Effects of educational interventions for self management of asthma in children and adolescents: systematic review and meta-analysis. BMJ. 2003;326(7402):1308-1309. doi: 10.1136/bmj.326.7402.1308.
14. Scarfone RJ, Capraro GA, Zorc JJ, Zhao H. Demonstrated use of metered-dose inhalers and peak flow meters by children and adolescents with acute asthma exacerbations. Arch Pediatr Adolesc Med. 2002;156(4):378-383. doi: 10.1001/archpedi.156.4.378.
15. Sockrider MM, Abramson S, Brooks E, et al. Delivering tailored asthma family education in a pediatric emergency department setting: a pilot study. Pediatrics. 2006;117(4 Pt 2):S135-144. doi: 10.1542/peds.2005-2000K.
Funding
This study was funded by internal grants from Ann and Robert H. Lurie Children’s Hospital of Chicago. Dr. Press was funded by a K23HL118151.
Many studies have shown that improved control can be achieved for most children with asthma if inhaled medications are taken correctly and adequately.1-3 Drug delivery studies have demonstrated that the bioavailability of medication delivered by pressurized metered-dose inhaler (MDI) improves from 34% to 83% with the addition of a spacer device. This difference is largely due to decreased oropharyngeal deposition,1,4,5 and the use of a spacer with proper technique has therefore been recommended for all pediatric patients.1,6
Poor inhaler technique is common among children.1,7 Previous studies of children with asthma have evaluated inhaler technique, primarily in the outpatient and community settings, and reported variable rates of error (from 45% to >90%).8,9 No studies have evaluated inhaler technique in children hospitalized with asthma. Because these children represent a particularly high-risk group for morbidity and mortality,10,11 the objectives of this study were to assess errors in inhaler technique in hospitalized children with asthma and to identify risk factors for improper use.
METHODS
As part of a larger interventional study, we conducted a prospective cross-sectional study at a tertiary urban children’s hospital. We enrolled a convenience sample of children aged 2-16 years admitted to the inpatient ward with an asthma exacerbation Monday-Friday from 8 AM to 6 PM. Participants were required to have a diagnosis of asthma (an established diagnosis by their primary care provider or meeting the National Heart, Lung, and Blood Institute [NHLBI] criteria1), have a consenting adult available, and speak English. Patients were excluded if they had a codiagnosis of an additional respiratory disease (eg, pneumonia), cardiac disease, or sickle cell anemia. The Institutional Review Board approved this study.
We asked caregivers, or children >10 years old who independently use their inhaler, to demonstrate their typical home inhaler technique using a spacer with mask (SM), spacer with mouthpiece (SMP), or no spacer (per their usual home practice). Inhaler technique was scored using a previously validated asthma checklist (Table 1).12 Certain steps in the checklist were identified as critical: (Step 1) removing the cap, (Step 3) attaching a spacer, (Step 7) taking six breaths (SM), and (Step 9) holding the breath for five seconds (SMP). Caregivers (but not children) were also asked to complete questionnaires assessing their health literacy (Brief Health Literacy Screen [BHLS]), confidence (Parent Asthma Management Self-Efficacy scale [PAMSE]), and barriers to managing their child’s asthma (Barriers to Asthma Care). Demographic and medical history information was extracted from the medical chart.
Inhaler technique was evaluated in two ways, by comparing (1) patients who missed one or more critical steps with those who missed none and (2) patients with an asthma checklist score <7 versus ≥7. Although inhaler technique has been measured in a variety of ways in past studies, these two markers (completion of 75% of steps and critical errors) were the most common.8
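To make these two outcome definitions concrete, the minimal Python sketch below scores a single demonstration against a simplified checklist. The step names, the critical-step flags, and the score_demonstration helper are illustrative placeholders, not the items of the validated checklist itself.

# Illustrative sketch of the two outcome definitions; step names and
# critical-step flags are placeholders, not the validated checklist.
SM_CHECKLIST = {                      # spacer-with-mask steps: name -> is_critical
    "remove_cap": True,
    "shake_inhaler": False,
    "attach_spacer": True,
    "seal_mask_over_nose_and_mouth": False,
    "actuate_one_puff": False,
    "take_six_breaths": True,
    # ...remaining steps omitted for brevity
}

def score_demonstration(steps_performed, checklist):
    """Return (checklist_score, missed_critical_step) for one observed demonstration."""
    score = sum(1 for step in checklist if step in steps_performed)
    missed_critical = any(is_critical and step not in steps_performed
                          for step, is_critical in checklist.items())
    return score, missed_critical

# Outcome 1: missed at least one critical step; Outcome 2: checklist score < 7.
performed = {"remove_cap", "shake_inhaler", "attach_spacer", "actuate_one_puff"}
score, missed_critical = score_demonstration(performed, SM_CHECKLIST)
low_score = score < 7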
We assessed a number of variables to evaluate their association with improper inhaler technique. For categorical variables, the association with each outcome was evaluated using relative risks (RRs). Bivariate P-values were calculated using chi-square or Fisher’s exact tests, as appropriate. Continuous variables were assessed for associations with each outcome using two-sample t-tests. Odds ratios (ORs) and 95% confidence intervals (CIs) were calculated using logistic regression analyses. Using a model entry criterion of P < .10 on univariate tests, variables were entered into a multivariable logistic regression model for each outcome. Full models with all eligible covariates and reduced models selected via a manual backward selection process were evaluated. Two-sided P-values <.05 were considered statistically significant.
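A minimal sketch of this analytic approach in Python (using scipy and statsmodels) is shown below. The data file and column names (missed_critical, age, uses_mouthpiece, daily_controller) are hypothetical placeholders, and the actual study variables and model-selection details may differ.

import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# One row per participant; file and column names are hypothetical placeholders.
df = pd.read_csv("inhaler_technique.csv")

# Categorical predictor vs outcome: chi-square, or Fisher's exact test for sparse 2x2 tables.
table = pd.crosstab(df["uses_mouthpiece"], df["missed_critical"])
chi2, p_value, dof, expected = stats.chi2_contingency(table)
if (expected < 5).any():
    _, p_value = stats.fisher_exact(table)

# Continuous predictor vs outcome: two-sample t-test.
t_stat, p_age = stats.ttest_ind(df.loc[df["missed_critical"] == 1, "age"],
                                df.loc[df["missed_critical"] == 0, "age"])

# Multivariable logistic regression for covariates with bivariate P < .10;
# a reduced model would then be chosen by manual backward selection.
model = smf.logit("missed_critical ~ age + uses_mouthpiece + daily_controller", data=df).fit()
odds_ratios = np.exp(model.params)       # ORs
conf_int = np.exp(model.conf_int())      # 95% CIs on the OR scale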
RESULTS
Participants
From October 2016 to June 2017, 380 patients were assessed for eligibility; 215 were excluded because a parent was not available (59% of exclusions), the family did not speak English (27%), or the child did not have an asthma diagnosis (eg, viral wheezing; 14%), and an additional 52 (14%) declined to participate. Therefore, a total of 113 participants were enrolled, with demonstrations provided by 100 caregivers and 13 children. The mean age of the patients was 6.6 ± 3.4 years, and over half (55%) of the participants had uncontrolled asthma (NHLBI criteria1).
Errors in Inhaler Technique
The mean asthma checklist score was 6.7 (maximum score: 10 for SM and 12 for SMP). About a third of participants (35%) scored <7 on the asthma checklist, and 42% missed at least one critical step. Overall, children who missed a critical step were significantly older (7.8 [6.7-8.9] vs 5.8 [5.1-6.5] years; P = .002). More participants missed a critical step with the SMP than with the SM (75% [51%-90%] vs 36% [27%-46%]; P = .003), and SMP use was the most prominent factor for missing a critical step in the adjusted regression analysis (OR 6.95 [1.71-28.23], P = .007). The most commonly missed step with the SM was breathing normally for 30 seconds; with the SMP, the most commonly missed steps were breathing out fully and breathing out away from the spacer (Table 1). Twenty participants (18%) did not use a spacer device; these patients were older than those who used a spacer (mean age 8.5 [6.7-10.4] vs 6.2 [5.6-6.9] years; P = .005), but no other significant differences were identified.
Demographic, Medical History, and Socioeconomic Characteristics
Overall, race, ethnicity, and insurance status did not differ significantly by asthma checklist score (<7 vs ≥7) or by missing a critical step. In the SM group, patients who had received inpatient asthma education during a previous admission, those with a history of pediatric intensive care unit (PICU) admission, and those who had been prescribed a daily controller were each less likely to miss a critical step (Table 2). Parental education level varied, with 33% of parents having a high school degree or less, but was not associated with asthma checklist score or with missing critical steps. Parental health literacy (BHLS) and parental confidence (PAMSE) were not significantly associated with inhaler proficiency. However, transportation-related barriers were more common among patients with checklist scores <7 and among those who missed more critical steps (OR 1.62 [1.06-2.46]; P = .02).
DISCUSSION
Nearly half of the participants in this study missed at least one critical step in inhaler use. In addition, 18% did not use a spacer when demonstrating their inhaler technique. Despite robust evidence that asthma education can improve both asthma skills and clinical outcomes,13 our study demonstrates that a large gap remains in proper inhaler technique among patients with asthma presenting for inpatient care. Specifically, in the mouthpiece group, steps related to breathing technique were the most commonly missed. Our results also show that inhaler technique errors were most prominent in the adolescent population, possibly coinciding with the transition to a mouthpiece and greater independence in medication administration. Adolescents may therefore be a high-impact population on which to focus inpatient asthma education. Additionally, we found that a previous PICU admission and previous inpatient asthma education were associated with missing fewer critical steps in inhaler technique. This finding is consistent with that of another study, which evaluated inhaler technique in the emergency department and found that previous hospitalization for asthma was inversely related to improper inhaler use (RR 0.55, 95% CI 0.36-0.84).14 This supports the premise that inpatient education, when provided, can improve inhaler administration skills.
Previous studies conducted in the outpatient setting have demonstrated variable rates of inhaler skill, with 0% to approximately 89% of children performing all steps of inhalation correctly.8 This wide range may be related to variations in the number and definition of critical steps between studies. In our study, we designated removing the cap, attaching a spacer, and adequate breathing technique as critical steps because failure to complete them would significantly reduce lung deposition of medication. Although past studies have evaluated MDI and spacer technique, our study is the first to report differences in technique errors between SM and SMP use. As asthma educational interventions are developed and implemented, it is important to recognize that different steps in inhaler technique are missed by those using a mask versus a mouthpiece.
This study has several limitations. It was conducted at a single center with a primarily urban, English-speaking population; however, the study population reflects the racial diversity of pediatric patients with asthma. Further studies may explore the reproducibility of these findings at multiple centers and with non-English-speaking families. This study also included younger patients than some previous asthma studies; however, all patients met the criteria for an asthma diagnosis, and this age range reflects patients presenting for inpatient asthma care. Furthermore, because of our daytime research hours, the most common reason for exclusion (59% of exclusions) was that a primary caregiver was not available. These families may also have decreased access to inpatient asthma educators and may be another target group for future studies. Finally, a large proportion of parents in our sample had a college education or greater; however, our analysis found no association between parental education level and inhaler proficiency.
The findings from this study indicate that continued efforts are needed to ensure that inhaler technique is adequate in all families regardless of educational status or socioeconomic background, especially for adolescents and in the setting of poor asthma control. Furthermore, our findings suggest that inhaler technique education may be beneficial in the inpatient setting and that acute care settings can provide a valuable “teachable moment.”14,15
CONCLUSION
Errors in inhaler technique are prevalent in pediatric inpatients with asthma, primarily those using a mouthpiece device. Educational efforts in both inpatient and outpatient settings have the potential to improve drug delivery and therefore asthma control. Inpatient hospitalization may serve as a platform for further studies to investigate innovative educational interventions.
Acknowledgments
The authors thank Tina Carter for her assistance in the recruitment and data collection and Ashley Hull and Susannah Butters for training the study staff on the use of the asthma checklist.
Disclosures
Dr. Gupta receives research grant support from the National Institutes of Health and the United Healthcare Group. Dr. Gupta serves as a consultant for DBV Technology, Aimmune Therapeutics, Kaleo & BEFORE Brands. Dr. Gupta has received lecture fees/honoraria from the Allergy Asthma Network & the American College of Asthma, Allergy & Immunology. Dr. Press reports research support from the Chicago Center for Diabetes Translation Research Pilot and Feasibility Grant, the Bucksbaum Institute for Clinical Excellence Pilot Grant Program, the Academy of Distinguished Medical Educators, the Development of Novel Hospital-initiated Care Bundle in Adults Hospitalized for Acute Asthma: the 41st Multicenter Airway Research Collaboration (MARC-41) Study, UCM’s Innovation Grant Program, the University of Chicago-Chapin Hall Joint Research Fund, the NIH/NHLBI Loan Repayment Program, 1 K23 HL118151 01, NIH NHLBI R03 (RFA-HL-18-025), the George and Carol Abramson Pilot Awards, the COPD Foundation Green Shoots Grant, the University of Chicago Women’s Board Grant, NIH NHLBI UG1 (RFA-HL-17-009), and the CTSA Pilot Award, outside the submitted work. These disclosures have been reported to Dr. Press’s institutional IRB. Additionally, a management plan is on file that details how to address conflicts such as these, which are sources of research support but do not directly support the work at hand. The remaining authors have no conflicts of interest relevant to the article to disclose.
Funding
This study was funded by internal grants from Ann and Robert H. Lurie Children’s Hospital of Chicago. Dr. Press was funded by grant K23HL118151.
1. Expert Panel Report 3: guidelines for the diagnosis and management of asthma: full report. Washington, DC: US Department of Health and Human Services, National Institutes of Health, National Heart, Lung, and Blood Institute; 2007. PubMed
2. Hekking PP, Wener RR, Amelink M, Zwinderman AH, Bouvy ML, Bel EH. The prevalence of severe refractory asthma. J Allergy Clin Immunol. 2015;135(4):896-902. doi: 10.1016/j.jaci.2014.08.042. PubMed
3. Peters SP, Ferguson G, Deniz Y, Reisner C. Uncontrolled asthma: a review of the prevalence, disease burden and options for treatment. Respir Med. 2006;100(7):1139-1151. doi: 10.1016/j.rmed.2006.03.031. PubMed
4. Dickens GR, Wermeling DP, Matheny CJ, et al. Pharmacokinetics of flunisolide administered via metered dose inhaler with and without a spacer device and following oral administration. Ann Allergy Asthma Immunol. 2000;84(5):528-532. doi: 10.1016/S1081-1206(10)62517-3. PubMed
5. Nikander K, Nicholls C, Denyer J, Pritchard J. The evolution of spacers and valved holding chambers. J Aerosol Med Pulm Drug Deliv. 2014;27(1):S4-S23. doi: 10.1089/jamp.2013.1076. PubMed
6. Rubin BK, Fink JB. The delivery of inhaled medication to the young child. Pediatr Clin North Am. 2003;50(3):717-731. doi:10.1016/S0031-3955(03)00049-X. PubMed
7. Roland NJ, Bhalla RK, Earis J. The local side effects of inhaled corticosteroids: current understanding and review of the literature. Chest. 2004;126(1):213-219. doi: 10.1378/chest.126.1.213. PubMed
8. Gillette C, Rockich-Winston N, Kuhn JA, Flesher S, Shepherd M. Inhaler technique in children with asthma: a systematic review. Acad Pediatr. 2016;16(7):605-615. doi: 10.1016/j.acap.2016.04.006. PubMed
9. Pappalardo AA, Karavolos K, Martin MA. What really happens in the home: the medication environment of urban, minority youth. J Allergy Clin Immunol Pract. 2017;5(3):764-770. doi: 10.1016/j.jaip.2016.09.046. PubMed
10. Crane J, Pearce N, Burgess C, Woodman K, Robson B, Beasley R. Markers of risk of asthma death or readmission in the 12 months following a hospital admission for asthma. Int J Epidemiol. 1992;21(4):737-744. doi: 10.1093/ije/21.4.737. PubMed
11. Turner MO, Noertjojo K, Vedal S, Bai T, Crump S, Fitzgerald JM. Risk factors for near-fatal asthma. A case-control study in hospitalized patients with asthma. Am J Respir Crit Care Med. 1998;157(6 Pt 1):1804-1809. doi: 10.1164/ajrccm.157.6.9708092. PubMed
12. Press VG, Arora VM, Shah LM, et al. Misuse of respiratory inhalers in hospitalized patients with asthma or COPD. J Gen Intern Med. 2011;26(6):635-642. doi: 10.1007/s11606-010-1624-2. PubMed
13. Guevara JP, Wolf FM, Grum CM, Clark NM. Effects of educational interventions for self management of asthma in children and adolescents: systematic review and meta-analysis. BMJ. 2003;326(7402):1308-1309. doi: 10.1136/bmj.326.7402.1308. PubMed
14. Scarfone RJ, Capraro GA, Zorc JJ, Zhao H. Demonstrated use of metered-dose inhalers and peak flow meters by children and adolescents with acute asthma exacerbations. Arch Pediatr Adolesc Med. 2002;156(4):378-383. doi: 10.1001/archpedi.156.4.378. PubMed
15. Sockrider MM, Abramson S, Brooks E, et al. Delivering tailored asthma family education in a pediatric emergency department setting: a pilot study. Pediatrics. 2006;117(4 Pt 2):S135-144. doi: 10.1542/peds.2005-2000K. PubMed
The Current State of Advanced Practice Provider Fellowships in Hospital Medicine: A Survey of Program Directors
Postgraduate training for physician assistants (PAs) and nurse practitioners (NPs) is a rapidly evolving field. It has been estimated that the number of these advanced practice providers (APPs) almost doubled between 2000 and 2016 (from 15.3 to 28.2 per 100 physicians) and is expected to double again by 2030.
Historically, postgraduate APP fellowships have functioned to help bridge the gap in clinical practice experience between physicians and APPs.
First described in 2010 by the Mayo Clinic, APP fellowships in hospital medicine have since grown in number.
METHODS
This was a cross-sectional study of all adult and pediatric APP fellowships in hospital medicine in the United States that were identifiable through May 2018. Multiple methods were used to identify all active fellowships. First, all training programs offering a hospital medicine fellowship in the Accreditation Review Commission on Education for the Physician Assistant (ARC-PA) and Association of Postgraduate PA Programs databases were noted. Second, questionnaires were distributed at the NP/PA forum at the 2018 national Society of Hospital Medicine (SHM) conference to gather information on existing APP fellowships. Third, similar online requests to identify known programs were posted to the SHM web forum Hospital Medicine Exchange (HMX). Fourth, Internet searches were used to discover additional programs. Once those fellowships were identified, surveys were sent to their program directors (PDs). These surveys not only asked the PDs about their fellowship but also asked them to identify additional APP fellowships beyond those we had captured. Once additional programs were identified, a second round of surveys was sent to their PDs. This process was repeated iteratively until no additional fellowships were discovered.
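Conceptually, this snowball-style identification process is a simple closure loop over referrals. The short Python sketch below is purely illustrative; the referral table and program names are hypothetical stand-ins for the PDs' actual survey responses.

# Illustrative sketch of iterative program identification; the referral data are hypothetical.
REFERRALS = {
    "Program A": {"Program C"},   # PD of Program A names Program C
    "Program B": set(),
    "Program C": set(),
}

def survey_program_director(program):
    """Hypothetical stand-in for a PD survey response listing additional programs."""
    return REFERRALS.get(program, set())

known = {"Program A", "Program B"}   # seeds: databases, SHM questionnaire/forum, web searches
to_survey = list(known)
while to_survey:                     # iterate until no new fellowships are named
    for referral in survey_program_director(to_survey.pop()):
        if referral not in known:
            known.add(referral)
            to_survey.append(referral)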
The survey tool was developed and validated internally in the Association of American Medical Colleges (AAMC) Survey Development style18 and was influenced by prior validated surveys of postgraduate medical fellowships.10
A web-based survey platform (Qualtrics) was used to distribute the questionnaire to the PDs via e-mail. Follow-up e-mail reminders were sent to all nonresponders to encourage full participation. Survey completion was voluntary; no financial incentives or gifts were offered. IRB approval was obtained at Johns Hopkins Bayview (IRB number 00181629). Descriptive statistics (proportions, means, and ranges, as appropriate) were calculated for all variables. Stata 13 (StataCorp LP, College Station, Texas) was used for data analysis.
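The descriptive statistics were generated in Stata 13; an equivalent summary could be sketched in Python as below, where the file name and column names are hypothetical placeholders for the actual survey variables.

import pandas as pd

# One row per responding program director; file and column names are hypothetical placeholders.
responses = pd.read_csv("pd_survey_responses.csv")

# Proportions for categorical items (e.g., program duration category).
duration_proportions = responses["duration_category"].value_counts(normalize=True)

# Mean and range for continuous items (e.g., bed count of the main training hospital).
beds = responses["hospital_beds"]
bed_summary = {"mean": beds.mean(), "min": beds.min(), "max": beds.max()}

print(duration_proportions)
print(bed_summary)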
RESULTS
In total, 11 fellowships were identified using our multimethod approach. We found four programs (36%) through existing online databases, two (18%) through the SHM questionnaire and HMX forum, and three (27%) through internet searches; the remaining two (18%) were referred to us by other surveyed PDs. Of the programs surveyed, 10 were adult programs and one was a pediatric program. Surveys were sent to the PDs of the 11 fellowships, and all but one (10/11, 91%) responded. Respondent programs were given alphabetical designations A through J (Table).
Fellowship and Individual Characteristics
Most programs have been in existence for five years or fewer. Eighty percent of the programs are about one year in duration; two outlier programs have fellowship lengths of six months and 18 months. Across programs, the main hospital where training occurs has a mean of 496 beds (range, 213-900). Ninety percent of the hospitals also have physician residency training programs. Sixty percent of programs enroll two to four fellows per year, while 40% enroll five or more. The salary range paid by the programs is $55,000 to >$70,000, and half of the programs pay more than $65,000.
The majority of fellows accepted into APP fellowships in hospital medicine are women. Eighty percent of fellows are 26-30 years old, and 90% of fellows have been out of NP or PA school for one year or less. Both NP and PA applicants are accepted in 80% of fellowships.
Program Rationales
All programs reported that training and retaining applicants is the main driver for developing their fellowship, and 50% of them offer financial incentives for retention upon successful completion of the program. Forty percent of PDs stated that there is an implicit or explicit understanding that successful completion of the fellowship would result in further employment. Over the last five years, 89% (range: 71%-100%) of graduates were asked to remain for a full-time position after program completion.
In addition to training and retention, building an interprofessional team (50%), managing patient volume (30%), and reducing overhead (20%) were also reported as rationales for program development. The majority of programs (80%) have fellows bill for clinical services; five of those eight programs begin billing only after their fellows become more clinically competent.
Curricula
Of the nine adult programs, 67% teach explicitly to SHM core competencies and 33% send their fellows to the SHM NP/PA Boot Camp. Thirty percent of fellowships partner formally with either a physician residency or a local PA program to develop educational content. Six of the nine programs with active physician residencies, including the pediatric fellowship, offer shared educational experiences for the residents and APPs.
There are notable differences in clinical rotations between the programs (Figure 1). No single rotation is universally required, although general hospital internal medicine is required in all adult fellowships. The majority (80%) of programs offer at least one elective. Six programs reported mandatory rotations outside the department of medicine, most commonly neurology or the stroke service (four programs). A single program reported exclusively general medicine rotations, with no subspecialty electives.
There are also differences between programs with respect to educational experiences and learning formats (Figure 2). Each fellowship takes a unique approach to clinical instruction; teaching rounds and lecture attendance are the only experiences required by all programs. Grand rounds are available, but not required, in all programs. Ninety percent of programs offer or require fellow presentations, journal clubs, reading assignments, or scholarly projects. Fellow presentations (70%) and journal club attendance (60%) are required in more than half of the programs; however, reading assignments (30%) and scholarly projects (20%) are rarely required.
Methods of Fellow Assessment
Each program surveyed has a unique method of fellow assessment. Ninety percent of the programs use more than one method to assess their fellows. Faculty reviews are most commonly used and are conducted in all rotations in 80% of fellowships. Both self-assessment exercises and written examinations are used in some rotations by the majority of programs. Capstone projects are required infrequently (30%).
DISCUSSION
We found several commonalities between the fellowships surveyed. Many of the program characteristics, such as years in operation, salary, duration, and lack of accreditation, are quite similar. Most fellowships also have a similar rationale for building their programs and use resources from the SHM to inform their curricula. Fellows, on average, share several demographic characteristics, such as age, gender, and time out of schooling. Conversely, we found wide variability in clinical rotations, the general teaching structure, and methods of fellow evaluation.
There have been several publications detailing successful individual APP fellowships in medical subspecialties, such as rheumatology, psychiatry, and orthopaedics.22-24
It is noteworthy that every program surveyed was created with training and retention in mind, rather than other factors like decreasing overhead or managing patient volume. Training one’s own APPs so that they can learn on the job, come to understand expectations within a group, and witness the culture is extremely valuable. From a patient safety standpoint, it has been documented that physician hospitalists straight out of residency have a higher patient mortality compared with more experienced providers.
Several limitations to this study should be considered. While we used multiple strategies to locate as many fellowships as possible, it is unlikely that we successfully captured all existing programs, and new programs are being developed annually. We also relied on self-reported data from PDs. While we would expect PDs to provide accurate data, we could not externally validate their answers. Additionally, although our survey tool was reviewed extensively and validated internally, it was developed de novo for this study.
CONCLUSION
APP fellowships in hospital medicine have experienced marked growth since the first program was described in 2010. The majority of programs are 12 months long, operate in existing teaching centers, and are intended to further enhance the training and retention of newly graduated PAs and NPs. Despite their similarities, fellowships have striking variability in their methods of teaching and assessing their learners. Best practices have yet to be identified, and further study is required to determine how to standardize curricula across the board.
Acknowledgments
Disclosures
The authors report no conflicts of interest.
Funding
This project was supported by the Johns Hopkins School of Medicine Biostatistics, Epidemiology and Data Management (BEAD) Core. Dr. Wright is the Anne Gaines and G. Thomas Miller Professor of Medicine, which is supported through the Johns Hopkins Center for Innovative Medicine.
1. Auerbach DI, Staiger DO, Buerhaus PI. Growing ranks of advanced practice clinicians — implications for the physician workforce. N Engl J Med. 2018;378(25):2358-2360. doi: 10.1056/nejmp1801869. PubMed
2. Darves B. Midlevels make a rocky entrance into hospital medicine. Today’s Hospitalist. 2007;5(1):28-32.
3. Polansky M. A historical perspective on postgraduate physician assistant education and the association of postgraduate physician assistant programs. J Physician Assist Educ. 2007;18(3):100-108. doi: 10.1097/01367895-200718030-00014.
4. FNP & AGNP Certification Candidate Handbook. The American Academy of Nurse Practitioners National Certification Board, Inc; 2018. https://www.aanpcert.org/resource/documents/AGNP FNP Candidate Handbook.pdf. Accessed December 20, 2018
5. Become a PA: Getting Your Prerequisites and Certification. AAPA. https://www.aapa.org/career-central/become-a-pa/. Accessed December 20, 2018.
6. ACGME Common Program Requirements. ACGME; 2017. https://www.acgme.org/Portals/0/PFAssets/ProgramRequirements/CPRs_2017-07-01.pdf. Accessed December 20, 2018
7. Committee on the Learning Health Care System in America; Institute of Medicine, Smith MD, Smith M, Saunders R, Stuckhardt L, McGinnis JM. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Washington, DC: National Academies Press; 2013. PubMed
8. The Future of Nursing: Leading Change, Advancing Health. Washington, DC: The National Academies Press; 2014. https://www.nap.edu/read/12956/chapter/1. Accessed December 16, 2018.
9. Hussaini SS, Bushardt RL, Gonsalves WC, et al. Accreditation and implications of clinical postgraduate pa training programs. JAAPA. 2016:29:1-7. doi: 10.1097/01.jaa.0000482298.17821.fb. PubMed
10. Polansky M, Garver GJH, Hilton G. Postgraduate clinical education of physician assistants. J Physician Assist Educ. 2012;23(1):39-45. doi: 10.1097/01367895-201223010-00008.
11. Will KK, Budavari AI, Wilkens JA, Mishark K, Hartsell ZC. A hospitalist postgraduate training program for physician assistants. J Hosp Med. 2010;5(2):94-98. doi: 10.1002/jhm.619. PubMed
12. Kartha A, Restuccia JD, Burgess JF, et al. Nurse practitioner and physician assistant scope of practice in 118 acute care hospitals. J Hosp Med. 2014;9(10):615-620. doi: 10.1002/jhm.2231. PubMed
13. Singh S, Fletcher KE, Schapira MM, et al. A comparison of outcomes of general medical inpatient care provided by a hospitalist-physician assistant model vs a traditional resident-based model. J Hosp Med. 2011;6(3):122-130. doi: 10.1002/jhm.826. PubMed
14. Hussaini SS, Bushardt RL, Gonsalves WC, et al. Accreditation and implications of clinical postgraduate PA training programs. JAAPA. 2016;29(5):1-7. doi: 10.1097/01.jaa.0000482298.17821.fb. PubMed
15. Postgraduate Programs. ARC-PA. http://www.arc-pa.org/accreditation/postgraduate-programs. Accessed September 13, 2018.
16. National Nurse Practitioner Residency & Fellowship Training Consortium: Mission. https://www.nppostgradtraining.com/About-Us/Mission. Accessed September 27, 2018.
17. NP/PA Boot Camp. State of Hospital Medicine | Society of Hospital Medicine. http://www.hospitalmedicine.org/events/nppa-boot-camp. Accessed September 13, 2018.
18. Gehlbach H, Artino AR Jr, Durning SJ. AM last page: survey development guidance for medical education researchers. Acad Med. 2010;85(5):925. doi: 10.1097/ACM.0b013e3181dd3e88. PubMed
19. Kraus C, Carlisle T, Carney D. Emergency Medicine Physician Assistant (EMPA) post-graduate training programs: program characteristics and training curricula. West J Emerg Med. 2018;19(5):803-807. doi: 10.5811/westjem.2018.6.37892.
20. Shah NH, Rhim HJH, Maniscalco J, Wilson K, Rassbach C. The current state of pediatric hospital medicine fellowships: A survey of program directors. J Hosp Med. 2016;11(5):324-328. doi: 10.1002/jhm.2571. PubMed
21. Thompson BM, Searle NS, Gruppen LD, Hatem CJ, Nelson E. A national survey of medical education fellowships. Med Educ Online. 2011;16(1):5642. doi: 10.3402/meo.v16i0.5642. PubMed
22. Hooker R. A physician assistant rheumatology fellowship. JAAPA. 2013;26(6):49-52. doi: 10.1097/01.jaa.0000430346.04435.e4 PubMed
23. Keizer T, Trangle M. the benefits of a physician assistant and/or nurse practitioner psychiatric postgraduate training program. Acad Psychiatry. 2015;39(6):691-694. doi: 10.1007/s40596-015-0331-z. PubMed
24. Miller A, Weiss J, Hill V, Lindaman K, Emory C. Implementation of a postgraduate orthopaedic physician assistant fellowship for improved specialty training. JBJS Journal of Orthopaedics for Physician Assistants. 2017:1. doi: 10.2106/jbjs.jopa.17.00021.
25. Sharma P, Brooks M, Roomiany P, Verma L, Criscione-Schreiber L. physician assistant student training for the inpatient setting. J Physician Assist Educ. 2017;28(4):189-195. doi: 10.1097/jpa.0000000000000174. PubMed
26. Goodwin JS, Salameh H, Zhou J, Singh S, Kuo Y-F, Nattinger AB. Association of hospitalist years of experience with mortality in the hospitalized medicare population. JAMA Intern Med. 2018;178(2):196. doi: 10.1001/jamainternmed.2017.7049. PubMed
27. Barnes H. Exploring the factors that influence nurse practitioner role transition. J Nurse Pract. 2015;11(2):178-183. doi: 10.1016/j.nurpra.2014.11.004. PubMed
28. Will K, Williams J, Hilton G, Wilson L, Geyer H. Perceived efficacy and utility of postgraduate physician assistant training programs. JAAPA. 2016;29(3):46-48. doi: 10.1097/01.jaa.0000480569.39885.c8. PubMed
29. Torok H, Lackner C, Landis R, Wright S. Learning needs of physician assistants working in hospital medicine. J Hosp Med. 2011;7(3):190-194. doi: 10.1002/jhm.1001. PubMed
30. ten Cate O. Competency-based postgraduate medical education: past, present and future. GMS J Med Educ. 2017;34(5). doi: 10.3205/zma001146. PubMed
31. Exploring the ACGME Core Competencies (Part 1 of 7). NEJM Knowledge. https://knowledgeplus.nejm.org/blog/exploring-acgme-core-competencies/. Accessed October 24, 2018.
32. Core Competencies. Core Competencies | Society of Hospital Medicine. http://www.hospitalmedicine.org/professional-development/core-competencies/. Accessed October 24, 2018.
Postgraduate training for physician assistants (PAs) and nurse practitioners (NPs) is a rapidly evolving field. It has been estimated that the number of these advanced practice providers (APPs) almost doubled between 2000 and 2016 (from 15.3 to 28.2 per 100 physicians) and is expected to double again by 2030.
Historically, postgraduate APP fellowships have functioned to help bridge the gap in clinical practice experience between physicians and APPs.
First described in 2010 by the Mayo Clinic,
METHODS
This was a cross-sectional study of all APP adult and pediatric fellowships in hospital medicine, in the United States, that were identifiable through May 2018. Multiple methods were used to identify all active fellowships. First, all training programs offering a Hospital Medicine Fellowship in the ARC-PA and Association of Postgraduate PA Programs databases were noted. Second, questionnaires were given out at the NP/PA forum at the national SHM conference in 2018 to gather information on existing APP fellowships. Third, similar online requests to identify known programs were posted to the SHM web forum Hospital Medicine Exchange (HMX). Fourth, Internet searches were used to discover additional programs. Once those fellowships were identified, surveys were sent to their program directors (PDs). These surveys not only asked the PDs about their fellowship but also asked them to identify additional APP fellowships beyond those that we had captured. Once additional programs were identified, a second round of surveys was sent to their PDs. This was performed in an iterative fashion until no additional fellowships were discovered.
The survey tool was developed and validated internally in the AAMC Survey Development style18 and was influenced by prior validated surveys of postgraduate medical fellowships.10,
A web-based survey format (Qualtrics) was used to distribute the questionnaire e-mail to the PDs. Follow up e-mail reminders were sent to all nonresponders to encourage full participation. Survey completion was voluntary; no financial incentives or gifts were offered. IRB approval was obtained at Johns Hopkins Bayview (IRB number 00181629). Descriptive statistics (proportions, means, and ranges as appropriate) were calculated for all variables. Stata 13 (StataCorp. 2013. Stata Statistical Software: Release 13. College Station, Texas. StataCorp LP) was used for data analysis.
RESULTS
In total, 11 fellowships were identified using our multimethod approach. We found four (36%) programs by utilizing existing online databases, two (18%) through the SHM questionnaire and HMX forum, three (27%) through internet searches, and the remaining two (18%) were referred to us by the other PDs who were surveyed. Of the programs surveyed, 10 were adult programs and one was a pediatric program. Surveys were sent to the PDs of the 11 fellowships, and all but one of them (10/11, 91%) responded. Respondent programs were given alphabetical designations A through J (Table).
Fellowship and Individual Characteristics
Most programs have been in existence for five years or fewer. Eighty percent of the programs are about one year in duration; two outlier programs have fellowship lengths of six months and 18 months. The main hospital where training occurs has a mean of 496 beds (range 213 to 900). Ninety percent of the hospitals also have physician residency training programs. Sixty percent of programs enroll two to four fellows per year while 40% enroll five or more. The salary range paid by the programs is $55,000 to >$70,000, and half the programs pay more than $65,000.
The majority of fellows accepted into APP fellowships in hospital medicine are women. Eighty percent of fellows are 26-30 years old, and 90% of fellows have been out of NP or PA school for one year or less. Both NP and PA applicants are accepted in 80% of fellowships.
Program Rationales
All programs reported that training and retaining applicants is the main driver for developing their fellowship, and 50% of them offer financial incentives for retention upon successful completion of the program. Forty percent of PDs stated that there is an implicit or explicit understanding that successful completion of the fellowship would result in further employment. Over the last five years, 89% (range: 71%-100%) of graduates were asked to remain for a full-time position after program completion.
In addition to training and retention, building an interprofessional team (50%), managing patient volume (30%), and reducing overhead (20%) were also reported as rationales for program development. The majority of programs (80%) have fellows bill for clinical services, and five of those eight programs do so after their fellows become more clinically competent.
Curricula
Of the nine adult programs, 67% teach explicitly to SHM core competencies and 33% send their fellows to the SHM NP/PA Boot Camp. Thirty percent of fellowships partner formally with either a physician residency or a local PA program to develop educational content. Six of the nine programs with active physician residencies, including the pediatric fellowship, offer shared educational experiences for the residents and APPs.
There are notable differences in clinical rotations between the programs (Figure 1). No single rotation is universally required, although general hospital internal medicine is required in all adult fellowships. The majority (80%) of programs offer at least one elective. Six programs reported mandatory rotations outside the department of medicine, most commonly neurology or the stroke service (four programs). Only one program reported only general medicine rotations, with no subspecialty electives.
There are also differences between programs with respect to educational experiences and learning formats (Figure 2). Each fellowship takes a unique approach to clinical instruction; teaching rounds and lecture attendance are the only experiences that are mandatory across the board. Grand rounds are available, but not required, in all programs. Ninety percent of programs offer or require fellow presentations, journal clubs, reading assignments, or scholarly projects. Fellow presentations (70%) and journal club attendance (60%) are required in more than half the programs; however, reading assignments (30%) and scholarly projects (20%) are rarely required.
Methods of Fellow Assessment
Each program surveyed has a unique method of fellow assessment. Ninety percent of the programs use more than one method to assess their fellows. Faculty reviews are most commonly used and are conducted in all rotations in 80% of fellowships. Both self-assessment exercises and written examinations are used in some rotations by the majority of programs. Capstone projects are required infrequently (30%).
DISCUSSION
We found several commonalities between the fellowships surveyed. Many of the program characteristics, such as years in operation, salary, duration, and lack of accreditation, are quite similar. Most fellowships also have a similar rationale for building their programs and use resources from the SHM to inform their curricula. Fellows, on average, share several demographic characteristics, such as age, gender, and time out of schooling. Conversely, we found wide variability in clinical rotations, the general teaching structure, and methods of fellow evaluation.
There have been several publications detailing successful individual APP fellowships in medical subspecialties,
It is noteworthy that every program surveyed was created with training and retention in mind, rather than other factors like decreasing overhead or managing patient volume. Training one’s own APPs so that they can learn on the job, come to understand expectations within a group, and witness the culture is extremely valuable. From a patient safety standpoint, it has been documented that physician hospitalists straight out of residency have a higher patient mortality compared with more experienced providers.
Several limitations to this study should be considered. While we used multiple strategies to locate as many fellowships as possible, it is unlikely that we successfully captured all existing programs, and new programs are being developed annually. We also relied on self-reported data from PDs. While we would expect PDs to provide accurate data, we could not externally validate their answers. Additionally, although our survey tool was reviewed extensively and validated internally, it was developed de novo for this study.
CONCLUSION
APP fellowships in hospital medicine have experienced marked growth since the first program was described in 2010. The majority of programs are 12 months long, operate in existing teaching centers, and are intended to further enhance the training and retention of newly graduated PAs and NPs. Despite their similarities, fellowships have striking variability in their methods of teaching and assessing their learners. Best practices have yet to be identified, and further study is required to determine how to standardize curricula across the board.
Acknowledgments
Disclosures
The authors report no conflicts of interest.
Funding
This project was supported by the Johns Hopkins School of Medicine Biostatistics, Epidemiology and Data Management (BEAD) Core. Dr. Wright is the Anne Gaines and G. Thomas Miller Professor of Medicine, which is supported through the Johns Hopkins’ Center for Innovative Medicine.
Postgraduate training for physician assistants (PAs) and nurse practitioners (NPs) is a rapidly evolving field. It has been estimated that the number of these advanced practice providers (APPs) almost doubled between 2000 and 2016 (from 15.3 to 28.2 per 100 physicians) and is expected to double again by 2030.
Historically, postgraduate APP fellowships have functioned to help bridge the gap in clinical practice experience between physicians and APPs.
First described in 2010 by the Mayo Clinic,
METHODS
This was a cross-sectional study of all APP adult and pediatric fellowships in hospital medicine, in the United States, that were identifiable through May 2018. Multiple methods were used to identify all active fellowships. First, all training programs offering a Hospital Medicine Fellowship in the ARC-PA and Association of Postgraduate PA Programs databases were noted. Second, questionnaires were given out at the NP/PA forum at the national SHM conference in 2018 to gather information on existing APP fellowships. Third, similar online requests to identify known programs were posted to the SHM web forum Hospital Medicine Exchange (HMX). Fourth, Internet searches were used to discover additional programs. Once those fellowships were identified, surveys were sent to their program directors (PDs). These surveys not only asked the PDs about their fellowship but also asked them to identify additional APP fellowships beyond those that we had captured. Once additional programs were identified, a second round of surveys was sent to their PDs. This was performed in an iterative fashion until no additional fellowships were discovered.
The survey tool was developed and validated internally in the AAMC Survey Development style18 and was influenced by prior validated surveys of postgraduate medical fellowships.10,
A web-based survey format (Qualtrics) was used to distribute the questionnaire e-mail to the PDs. Follow up e-mail reminders were sent to all nonresponders to encourage full participation. Survey completion was voluntary; no financial incentives or gifts were offered. IRB approval was obtained at Johns Hopkins Bayview (IRB number 00181629). Descriptive statistics (proportions, means, and ranges as appropriate) were calculated for all variables. Stata 13 (StataCorp. 2013. Stata Statistical Software: Release 13. College Station, Texas. StataCorp LP) was used for data analysis.
RESULTS
In total, 11 fellowships were identified using our multimethod approach. We found four (36%) programs by utilizing existing online databases, two (18%) through the SHM questionnaire and HMX forum, three (27%) through internet searches, and the remaining two (18%) were referred to us by the other PDs who were surveyed. Of the programs surveyed, 10 were adult programs and one was a pediatric program. Surveys were sent to the PDs of the 11 fellowships, and all but one of them (10/11, 91%) responded. Respondent programs were given alphabetical designations A through J (Table).
Fellowship and Individual Characteristics
Most programs have been in existence for five years or fewer. Eighty percent of the programs are about one year in duration; two outlier programs have fellowship lengths of six months and 18 months. The main hospital where training occurs has a mean of 496 beds (range 213 to 900). Ninety percent of the hospitals also have physician residency training programs. Sixty percent of programs enroll two to four fellows per year while 40% enroll five or more. The salary range paid by the programs is $55,000 to >$70,000, and half the programs pay more than $65,000.
The majority of fellows accepted into APP fellowships in hospital medicine are women. Eighty percent of fellows are 26-30 years old, and 90% of fellows have been out of NP or PA school for one year or less. Both NP and PA applicants are accepted in 80% of fellowships.
Program Rationales
All programs reported that training and retaining applicants is the main driver for developing their fellowship, and 50% of them offer financial incentives for retention upon successful completion of the program. Forty percent of PDs stated that there is an implicit or explicit understanding that successful completion of the fellowship would result in further employment. Over the last five years, 89% (range: 71%-100%) of graduates were asked to remain for a full-time position after program completion.
In addition to training and retention, building an interprofessional team (50%), managing patient volume (30%), and reducing overhead (20%) were also reported as rationales for program development. The majority of programs (80%) have fellows bill for clinical services, and five of those eight programs do so after their fellows become more clinically competent.
Curricula
Of the nine adult programs, 67% teach explicitly to SHM core competencies and 33% send their fellows to the SHM NP/PA Boot Camp. Thirty percent of fellowships partner formally with either a physician residency or a local PA program to develop educational content. Six of the nine programs with active physician residencies, including the pediatric fellowship, offer shared educational experiences for the residents and APPs.
There are notable differences in clinical rotations between the programs (Figure 1). No single rotation is universally required, although general hospital internal medicine is required in all adult fellowships. The majority (80%) of programs offer at least one elective. Six programs reported mandatory rotations outside the department of medicine, most commonly neurology or the stroke service (four programs). Only one program reported only general medicine rotations, with no subspecialty electives.
There are also differences between programs with respect to educational experiences and learning formats (Figure 2). Each fellowship takes a unique approach to clinical instruction; teaching rounds and lecture attendance are the only experiences that are mandatory across the board. Grand rounds are available, but not required, in all programs. Ninety percent of programs offer or require fellow presentations, journal clubs, reading assignments, or scholarly projects. Fellow presentations (70%) and journal club attendance (60%) are required in more than half the programs; however, reading assignments (30%) and scholarly projects (20%) are rarely required.
Methods of Fellow Assessment
Each program surveyed has a unique method of fellow assessment. Ninety percent of the programs use more than one method to assess their fellows. Faculty reviews are most commonly used and are conducted in all rotations in 80% of fellowships. Both self-assessment exercises and written examinations are used in some rotations by the majority of programs. Capstone projects are required infrequently (30%).
DISCUSSION
We found several commonalities between the fellowships surveyed. Many of the program characteristics, such as years in operation, salary, duration, and lack of accreditation, are quite similar. Most fellowships also have a similar rationale for building their programs and use resources from the SHM to inform their curricula. Fellows, on average, share several demographic characteristics, such as age, gender, and time out of schooling. Conversely, we found wide variability in clinical rotations, the general teaching structure, and methods of fellow evaluation.
There have been several publications detailing successful individual APP fellowships in medical subspecialties,
It is noteworthy that every program surveyed was created with training and retention in mind, rather than other factors like decreasing overhead or managing patient volume. Training one’s own APPs so that they can learn on the job, come to understand expectations within a group, and witness the culture is extremely valuable. From a patient safety standpoint, it has been documented that physician hospitalists straight out of residency have a higher patient mortality compared with more experienced providers.
Several limitations to this study should be considered. While we used multiple strategies to locate as many fellowships as possible, it is unlikely that we successfully captured all existing programs, and new programs are being developed annually. We also relied on self-reported data from PDs. While we would expect PDs to provide accurate data, we could not externally validate their answers. Additionally, although our survey tool was reviewed extensively and validated internally, it was developed de novo for this study.
CONCLUSION
APP fellowships in hospital medicine have experienced marked growth since the first program was described in 2010. The majority of programs are 12 months long, operate in existing teaching centers, and are intended to enhance the training and retention of newly graduated PAs and NPs. Despite their similarities, fellowships vary strikingly in how they teach and assess their learners. Best practices have yet to be identified, and further study is required to determine how curricula should be standardized across programs.
Acknowledgments
Disclosures
The authors report no conflicts of interest.
Funding
This project was supported by the Johns Hopkins School of Medicine Biostatistics, Epidemiology and Data Management (BEAD) Core. Dr. Wright is the Anne Gaines and G. Thomas Miller Professor of Medicine, which is supported through the Johns Hopkins’ Center for Innovative Medicine.
1. Auerbach DI, Staiger DO, Buerhaus PI. Growing ranks of advanced practice clinicians — implications for the physician workforce. N Engl J Med. 2018;378(25):2358-2360. doi: 10.1056/nejmp1801869. PubMed
2. Darves B. Midlevels make a rocky entrance into hospital medicine. Todays Hospitalist. 2007;5(1):28-32.
3. Polansky M. A historical perspective on postgraduate physician assistant education and the association of postgraduate physician assistant programs. J Physician Assist Educ. 2007;18(3):100-108. doi: 10.1097/01367895-200718030-00014.
4. FNP & AGNP Certification Candidate Handbook. The American Academy of Nurse Practitioners National Certification Board, Inc; 2018. https://www.aanpcert.org/resource/documents/AGNP FNP Candidate Handbook.pdf. Accessed December 20, 2018
5. Become a PA: Getting Your Prerequisites and Certification. AAPA. https://www.aapa.org/career-central/become-a-pa/. Accessed December 20, 2018.
6. ACGME Common Program Requirements. ACGME; 2017. https://www.acgme.org/Portals/0/PFAssets/ProgramRequirements/CPRs_2017-07-01.pdf. Accessed December 20, 2018
7. Committee on the Learning Health Care System in America; Institute of Medicine, Smith MD, Smith M, Saunders R, Stuckhardt L, McGinnis JM. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Washington, DC: National Academies Press; 2013. PubMed
8. The Future of Nursing: Leading Change, Advancing Health. Washington, DC: The National Academies Press; 2014. https://www.nap.edu/read/12956/chapter/1. Accessed December 16, 2018.
9. Hussaini SS, Bushardt RL, Gonsalves WC, et al. Accreditation and implications of clinical postgraduate PA training programs. JAAPA. 2016;29(5):1-7. doi: 10.1097/01.jaa.0000482298.17821.fb. PubMed
10. Polansky M, Garver GJH, Hilton G. Postgraduate clinical education of physician assistants. J Physician Assist Educ. 2012;23(1):39-45. doi: 10.1097/01367895-201223010-00008.
11. Will KK, Budavari AI, Wilkens JA, Mishark K, Hartsell ZC. A hospitalist postgraduate training program for physician assistants. J Hosp Med. 2010;5(2):94-98. doi: 10.1002/jhm.619. PubMed
12. Kartha A, Restuccia JD, Burgess JF, et al. Nurse practitioner and physician assistant scope of practice in 118 acute care hospitals. J Hosp Med. 2014;9(10):615-620. doi: 10.1002/jhm.2231. PubMed
13. Singh S, Fletcher KE, Schapira MM, et al. A comparison of outcomes of general medical inpatient care provided by a hospitalist-physician assistant model vs a traditional resident-based model. J Hosp Med. 2011;6(3):122-130. doi: 10.1002/jhm.826. PubMed
14. Hussaini SS, Bushardt RL, Gonsalves WC, et al. Accreditation and implications of clinical postgraduate PA training programs. JAAPA. 2016;29(5):1-7. doi: 10.1097/01.jaa.0000482298.17821.fb. PubMed
15. Postgraduate Programs. ARC-PA. http://www.arc-pa.org/accreditation/postgraduate-programs. Accessed September 13, 2018.
16. National Nurse Practitioner Residency & Fellowship Training Consortium: Mission. https://www.nppostgradtraining.com/About-Us/Mission. Accessed September 27, 2018.
17. NP/PA Boot Camp. State of Hospital Medicine | Society of Hospital Medicine. http://www.hospitalmedicine.org/events/nppa-boot-camp. Accessed September 13, 2018.
18. Gehlbach H, Artino AR Jr, Durning SJ. AM last page: survey development guidance for medical education researchers. Acad Med. 2010;85(5):925. doi: 10.1097/ACM.0b013e3181dd3e88. PubMed
19. Kraus C, Carlisle T, Carney D. Emergency Medicine Physician Assistant (EMPA) post-graduate training programs: program characteristics and training curricula. West J Emerg Med. 2018;19(5):803-807. doi: 10.5811/westjem.2018.6.37892.
20. Shah NH, Rhim HJH, Maniscalco J, Wilson K, Rassbach C. The current state of pediatric hospital medicine fellowships: A survey of program directors. J Hosp Med. 2016;11(5):324-328. doi: 10.1002/jhm.2571. PubMed
21. Thompson BM, Searle NS, Gruppen LD, Hatem CJ, Nelson E. A national survey of medical education fellowships. Med Educ Online. 2011;16(1):5642. doi: 10.3402/meo.v16i0.5642. PubMed
22. Hooker R. A physician assistant rheumatology fellowship. JAAPA. 2013;26(6):49-52. doi: 10.1097/01.jaa.0000430346.04435.e4 PubMed
23. Keizer T, Trangle M. The benefits of a physician assistant and/or nurse practitioner psychiatric postgraduate training program. Acad Psychiatry. 2015;39(6):691-694. doi: 10.1007/s40596-015-0331-z. PubMed
24. Miller A, Weiss J, Hill V, Lindaman K, Emory C. Implementation of a postgraduate orthopaedic physician assistant fellowship for improved specialty training. JBJS Journal of Orthopaedics for Physician Assistants. 2017:1. doi: 10.2106/jbjs.jopa.17.00021.
25. Sharma P, Brooks M, Roomiany P, Verma L, Criscione-Schreiber L. Physician assistant student training for the inpatient setting. J Physician Assist Educ. 2017;28(4):189-195. doi: 10.1097/jpa.0000000000000174. PubMed
26. Goodwin JS, Salameh H, Zhou J, Singh S, Kuo Y-F, Nattinger AB. Association of hospitalist years of experience with mortality in the hospitalized medicare population. JAMA Intern Med. 2018;178(2):196. doi: 10.1001/jamainternmed.2017.7049. PubMed
27. Barnes H. Exploring the factors that influence nurse practitioner role transition. J Nurse Pract. 2015;11(2):178-183. doi: 10.1016/j.nurpra.2014.11.004. PubMed
28. Will K, Williams J, Hilton G, Wilson L, Geyer H. Perceived efficacy and utility of postgraduate physician assistant training programs. JAAPA. 2016;29(3):46-48. doi: 10.1097/01.jaa.0000480569.39885.c8. PubMed
29. Torok H, Lackner C, Landis R, Wright S. Learning needs of physician assistants working in hospital medicine. J Hosp Med. 2011;7(3):190-194. doi: 10.1002/jhm.1001. PubMed
30. ten Cate O. Competency-based postgraduate medical education: past, present and future. GMS J Med Educ. 2017;34(5). doi: 10.3205/zma001146. PubMed
31. Exploring the ACGME Core Competencies (Part 1 of 7). NEJM Knowledge. https://knowledgeplus.nejm.org/blog/exploring-acgme-core-competencies/. Accessed October 24, 2018.
32. Core Competencies. Core Competencies | Society of Hospital Medicine. http://www.hospitalmedicine.org/professional-development/core-competencies/. Accessed October 24, 2018.
© 2019 Society of Hospital Medicine
Modifiable Factors Associated with Quality of Bowel Preparation Among Hospitalized Patients Undergoing Colonoscopy
Inadequate bowel preparation (IBP) at the time of inpatient colonoscopy is common and associated with increased length of stay and cost of care.1 The factors that contribute to IBP can be categorized into those that are modifiable and those that are nonmodifiable. While many factors have been associated with IBP, studies have been limited by small sample sizes or have combined inpatient and outpatient populations, thus limiting generalizability.1-5 Moreover, most factors associated with IBP, such as socioeconomic status, male gender, increased age, and comorbidities, are nonmodifiable. No studies have explicitly focused on modifiable risk factors, such as medication use or colonoscopy timing, or assessed the potential impact of modifying these factors.
In a large, multihospital system, we examined the frequency of IBP among inpatients undergoing colonoscopy and the factors associated with it. We also attempted to identify modifiable risk factors and to estimate the potential impact of optimizing them.
METHODS
Potential Predictors of IBP
Demographic data such as patient age, gender, ethnicity, body mass index (BMI), and insurance/payor status were obtained from the electronic health record (EHR). International Classification of Diseases, Ninth and Tenth Revisions, Clinical Modification (ICD-9/10-CM) codes were used to identify patient comorbidities, including diabetes, coronary artery disease, heart failure, cirrhosis, gastroparesis, hypothyroidism, inflammatory bowel disease, constipation, stroke, dementia, dysphagia, and nausea/vomiting. Use of opioid medications within three days before colonoscopy was extracted from the medication administration record. These variables were chosen because they are biologically plausible modifiers of bowel preparation or had previously been assessed in the literature.1-6 The name and volume of bowel preparation (classified as 4 L [GoLytely®] or <4 L [MoviPrep®]), the time of day when the colonoscopy was performed, consumption of a solid diet on the day prior to colonoscopy, the type of sedation used (conscious sedation or general anesthesia), and the total colonoscopy time (defined as the time from scope insertion to removal) were also recorded. Hospitalization-related variables, including the number of hospitalizations in the year before the current hospitalization, the year in which the colonoscopy was performed, and the number of days from admission to colonoscopy, were also obtained from the EHR.
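As a concrete illustration only (not the authors' code), the opioid-exposure flag described above could be derived from a medication administration extract roughly as follows; the table layout and column names (patient_id, med_class, admin_time, colonoscopy_time) are assumptions for this sketch.

import pandas as pd

def flag_recent_opioid(meds: pd.DataFrame, procedures: pd.DataFrame) -> pd.DataFrame:
    # Keep only opioid administrations, then pair them with each colonoscopy.
    opioids = meds.loc[meds["med_class"] == "opioid", ["patient_id", "admin_time"]]
    merged = procedures.merge(opioids, on="patient_id", how="left")
    # Flag administrations that fall in the 72 hours before scope insertion.
    in_window = (
        (merged["admin_time"] >= merged["colonoscopy_time"] - pd.Timedelta(days=3))
        & (merged["admin_time"] < merged["colonoscopy_time"])
    )
    merged["opioid_within_3d"] = in_window
    # Collapse to one row per colonoscopy: True if any qualifying administration exists.
    return merged.groupby(["patient_id", "colonoscopy_time"], as_index=False)["opioid_within_3d"].any()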
Outcome Measures
An internally validated natural language algorithm, implemented in Structured Query Language (SQL), was used to search colonoscopy reports and identify the adequacy of bowel preparation. ProVation® software allows the gastroenterologist to describe bowel preparation using a set of standard terms in a drop-down menu. In addition to the Aronchick scale (which rates bowel preparation on a five-point scale: “excellent,” “good,” “fair,” “poor,” and “inadequate”), the provider may select terms such as “adequate,” “adequate to detect polyps >5 mm,” or “unsatisfactory.”7 Mirroring prior literature, bowel preparation quality was dichotomized as “adequate” or “inadequate”: “good” and “excellent” on the Aronchick scale, as well as the term “adequate” in any form, were categorized as adequate; “fair,” “poor,” and “inadequate” on the Aronchick scale, as well as the term “unsatisfactory,” were classified as inadequate. Hospital length of stay (LOS) was evaluated as a secondary outcome measure.
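A minimal sketch of this dichotomization rule is shown below. The actual algorithm was implemented in SQL against ProVation reports; the function name and the free-text matching details here are illustrative assumptions.

# Terms follow the classification described in the text; matching details are assumptions.
ADEQUATE_TERMS = ("excellent", "good", "adequate")            # "adequate" in any form
INADEQUATE_TERMS = ("fair", "poor", "inadequate", "unsatisfactory")

def classify_prep(description: str) -> str:
    """Dichotomize a bowel-preparation descriptor as 'adequate' or 'inadequate'."""
    text = description.strip().lower()
    # Check inadequate terms first, since "inadequate" contains the substring "adequate".
    if any(term in text for term in INADEQUATE_TERMS):
        return "inadequate"
    if any(term in text for term in ADEQUATE_TERMS):
        return "adequate"
    return "unclassified"

For example, classify_prep("Adequate to detect polyps >5 mm") returns "adequate", while classify_prep("Fair") returns "inadequate".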
Statistical Analysis
After describing the frequency of IBP, the quality of bowel preparation (adequate vs inadequate) was compared across the predictors described above. Categorical variables were reported as frequencies with percentages, and continuous variables were reported as medians with 25th-75th percentile values. Differences in proportions or medians between patients with inadequate and adequate bowel preparation were assessed for significance. Two-sided chi-square tests were used for categorical variables, and the Wilcoxon rank-sum test was used for continuous variables.
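To make these unadjusted comparisons concrete, the sketch below shows how they could be run with SciPy. The DataFrame df, its boolean ibp outcome column, and the predictor column names are assumptions for illustration; the paper does not specify the analysis software.

import pandas as pd
from scipy.stats import chi2_contingency, ranksums

def compare_categorical(df: pd.DataFrame, predictor: str) -> float:
    """Two-sided chi-square test of IBP rates across levels of a categorical predictor."""
    table = pd.crosstab(df[predictor], df["ibp"])
    _, p_value, _, _ = chi2_contingency(table)
    return p_value

def compare_continuous(df: pd.DataFrame, predictor: str) -> float:
    """Wilcoxon rank-sum test comparing a continuous predictor by IBP status."""
    inadequate = df.loc[df["ibp"], predictor]
    adequate = df.loc[~df["ibp"], predictor]
    return ranksums(inadequate, adequate).pvalue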
Multivariate logistic regression analysis was performed to assess factors independently associated with IBP, adjusting for all the aforementioned factors and clustering by endoscopist. To evaluate the potential impact of modifiable factors on IBP, we performed a counterfactual analysis in which the observed population was compared with a hypothetical population in which all the modifiable risk factors were set to their optimal values.
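The adjusted model and the counterfactual step could be sketched as follows with statsmodels; the formula, the variable names (opioid_within_3d, afternoon_procedure, solid_diet_day_before, endoscopist_id), and the small set of covariates shown are illustrative assumptions, since the actual model adjusted for all variables listed in the Methods.

import statsmodels.formula.api as smf

def counterfactual_ibp_rate(df):
    # `ibp` is assumed to be coded 0/1; standard errors are clustered by endoscopist.
    model = smf.logit(
        "ibp ~ opioid_within_3d + afternoon_procedure + solid_diet_day_before"
        " + age + male + C(year)",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["endoscopist_id"]})

    # Counterfactual population: every modifiable factor set to its optimal level.
    optimal = df.copy()
    optimal[["opioid_within_3d", "afternoon_procedure", "solid_diet_day_before"]] = 0

    observed_rate = model.predict(df).mean()
    counterfactual_rate = model.predict(optimal).mean()
    return observed_rate, counterfactual_rate

In such a setup, the difference between the two predicted means estimates the absolute reduction in IBP attributable to optimizing the modifiable factors.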
RESULTS
Overall, 8,819 patients were included in our study population. They had a median age of 64 [53-76] years; 50.5% were female and 51% had an IBP. Patient characteristics and rates of IBP are presented in Table 1.
In unadjusted analyses, with regard to modifiable factors, opiate use within three days of colonoscopy was associated with a higher rate of IBP (55.4% vs 47.3%, P < .001), as was a lower volume (<4 L) bowel preparation (55.3% vs 50.4%, P = .003). IBP was less frequent when colonoscopy was performed before noon rather than in the afternoon (50.3% vs 57.4%, P < .001) and when patients were documented to have received a clear liquid diet or nil per os rather than a solid diet on the day prior to colonoscopy (50.3% vs 57.4%, P < .001). Overall bowel preparation quality improved over time (Figure 1). Median LOS was five [3-11] days. Patients who had IBP on their initial colonoscopy had a LOS one day longer than patients without IBP (six days vs five days, P < .001).
Multivariate Analysis
Table 2 shows the results of the multivariate analysis. The modifiable factors associated with IBP were opiate use within three days of the procedure (OR 1.31; 95% CI 1.18-1.45), colonoscopy performed after 12:00 PM, and consumption of a solid diet on the day before the procedure.
Potential Impact of Modifiable Variables
We conducted a counterfactual analysis based on the multivariate model to assess the impact of each modifiable risk factor on the IBP rate (Figure 1). In the study population, 44.9% of patients received an opiate within three days of colonoscopy and 39.3% had a colonoscopy after 12:00 PM. In the counterfactual population, in which all three modifiable factors were set to their optimal values, the predicted rate of IBP fell to 45%.
DISCUSSION
In this large, multihospital cohort, IBP was documented in half (51%) of the 8,819 inpatient colonoscopies performed. Nonmodifiable patient characteristics independently associated with IBP were age, male gender, white race, Medicare or Medicaid insurance, nausea/vomiting, dysphagia, and gastroparesis. Modifiable factors associated with IBP were opiate use within three days of colonoscopy, consumption of a solid diet on the day prior to colonoscopy, and performance of the colonoscopy after noon. The volume of bowel preparation consumed was not associated with IBP. In a counterfactual analysis, we found that if all three modifiable factors were optimized, the predicted rate of IBP would drop to 45%.
Many studies, including our analysis, have shown significant differences in the frequency of IBP between inpatient and outpatient bowel preparations.8-11 Therefore, it is crucial to study IBP in these settings separately. Three single-institution studies, comprising a total of 898 patients, have identified risk factors for inpatient IBP. Individual studies ranged in size from 130 to 524 patients, with rates of IBP ranging from 22% to 57%.1-3 They found IBP to be associated with increasing age, lower income, ASA grade >3, diabetes, coronary artery disease (CAD), nausea or vomiting, BMI >25, and chronic constipation. Modifiable factors included opiates, afternoon procedures, and runway times (the interval between completion of the bowel preparation and colonoscopy) >6 hours.
We also found IBP to be associated with increasing age and male gender. However, we found no association with diabetes, chronic constipation, CAD, or BMI. As we were able to adjust for a wider variety of variables, it is possible that we accounted for residual confounding better than previous studies. For example, we found that nausea/vomiting, dysphagia, and gastroparesis were associated with IBP. Gastroparesis, with its associated nausea and vomiting, may be the mechanism by which diabetes increases the risk for IBP. Further studies are needed to assess whether interventions or alternative bowel cleansing regimens can improve preparation quality in these patients. Finally, in contrast to smaller cohort studies that found lower volume bowel preparations improved cleansing in the right colon,4,12 we found no association between IBP and the volume of bowel preparation consumed. Our impact analysis suggests that avoidance of opiates for at least three days before colonoscopy, avoidance of a solid diet on the day before colonoscopy, and performance of all colonoscopies before noon would meaningfully reduce the rate of IBP.
The factors mentioned above may not always be amenable to modification. For example, for patients with active gastrointestinal bleeding, postponing colonoscopy by one day for the sake of maintaining a patient on a clear diet may not be feasible. Similarly, performing colonoscopies in the morning is highly dependent on endoscopy suite availability and hospital logistics. Denying opiates to patients experiencing severe pain is not ethical. In many scenarios, however, these variables could be modified, and institutional efforts to support these practices could yield considerable savings. Future prospective studies are needed to verify the real impact of these changes.
Further discussion is needed to contextualize the finding that colonoscopies performed in the afternoon are associated with poorer bowel preparation quality. Previous research—albeit in the outpatient setting—identified 11.8 hours as the upper limit for the interval between the end of bowel preparation and colonoscopy.14 Another study found an inverse relationship between the quality of bowel preparation and the time elapsed after completion of the preparation.15 This makes sense from a physiological perspective, as a longer interval between completion of the bowel preparation and the procedure allows chyme from the small intestine to reaccumulate in the colon. Anecdotally, at our institution as at many others, bowel preparations are ordered to start in the evening so that the preparation is completed by midnight. As a result of this practice, only patients whose colonoscopies are scheduled before noon fall within the optimal 11.8-hour window. In the outpatient setting, the use of split preparations has eliminated the difference in the quality of bowel preparation between morning and afternoon colonoscopies.16 Prospective trials are needed to evaluate the use of split preparations to improve the quality of afternoon inpatient colonoscopies.
Few other strategies have been shown to mitigate IBP in the inpatient setting. In a small randomized controlled trial, Ergen et al. found that providing an educational booklet improved inpatient bowel preparation as measured by the Boston Bowel Preparation Scale.17 In a quasi-experimental study, Yadlapati et al. found that an automated split-dose bowel preparation protocol resulted in less IBP, fewer repeated procedures, shorter LOS, and lower hospital cost.18 Our study adds to these tools by identifying three additional risk factors that could be optimized for inpatients. Because our findings are observational, they should be confirmed in prospective trials. Our study also calls into question the impact of bowel preparation volume: we found no difference in the rate of IBP between low-volume and large-volume preparations. It is possible that other factors are more important than the specific preparation employed.
Interestingly, we found that rates of IBP declined substantially in 2014 and continued to decline thereafter. Procedure year was among the most influential risk factors for IBP (on par with gastroparesis). The reason for this is unclear, as rates of the modifiable risk factors did not differ substantially by year. Possible explanations include improved access to endoscopy (including weekend access) coinciding with the opening of a new endoscopy facility and the adoption of an integrated irrigation pump system in place of manual syringe flushing.
Our study has several strengths. It is by far the largest study of bowel preparation quality in inpatients to date and the only one to include patient, procedural, and bowel preparation characteristics. The study also has several significant limitations. It was conducted within a single health system, which could limit generalizability; nonetheless, the system comprises multiple hospitals in different parts of the United States (Ohio and Florida) and serves a broad population mix with differing levels of acuity. The retrospective design precludes establishing causation. However, we mitigated confounding by adjusting for a wide variety of factors, and there is a plausible physiological mechanism for each of the factors we studied. The retrospective design also predisposes our data to omissions and misrepresentations during the documentation process, which is especially true for ICD codes.19 Inaccuracies in coding are likely to bias toward the null, so the observed associations may underestimate the true associations.
Our inability to ascertain whether a patient completed the prescribed bowel preparation limited our ability to detect what may be a significant risk factor. Lastly, while clinically relevant, the Aronchick scale used to distinguish adequate from inadequate bowel preparation has never been formally validated, although it is frequently used and cited in the bowel preparation literature.20
CONCLUSIONS
In this large retrospective study evaluating bowel preparation quality in inpatients undergoing colonoscopy, we found that more than half of the patients had IBP and that IBP was associated with an extra day of hospitalization. Our study identifies the patients at highest risk as well as modifiable risk factors for IBP. Specifically, we found that avoidance of opiates and of a solid diet before colonoscopy, along with performance of the colonoscopy before noon, was associated with improved outcomes. Prospective studies are needed to confirm the effects of these interventions on bowel preparation quality.
Disclosures
Carol A Burke, MD has received research funding from Ferring Pharmaceuticals. Other authors have no conflicts of interest to disclose.
1. Yadlapati R, Johnston ER, Gregory DL, Ciolino JD, Cooper A, Keswani RN. Predictors of inadequate inpatient colonoscopy preparation and its association with hospital length of stay and costs. Dig Dis Sci. 2015;60(11):3482-3490. doi: 10.1007/s10620-015-3761-2. PubMed
2. Jawa H, Mosli M, Alsamadani W, et al. Predictors of inadequate bowel preparation for inpatient colonoscopy. Turk J Gastroenterol. 2017;28(6):460-464. doi: 10.5152/tjg.2017.17196. PubMed
3. McNabb-Baltar J, Dorreen A, Dhahab HA, et al. Age is the only predictor of poor bowel preparation in the hospitalized patient. Can J Gastroenterol Hepatol. 2016;2016:1-5. doi: 10.1155/2016/2139264. PubMed
4. Rotondano G, Rispo A, Bottiglieri ME, et al. Tu1503 Quality of bowel cleansing in hospitalized patients is not worse than that of outpatients undergoing colonoscopy: results of a multicenter prospective regional study. Gastrointest Endosc. 2014;79(5):AB564. doi: 10.1016/j.gie.2014.02.949. PubMed
5. Ness R. Predictors of inadequate bowel preparation for colonoscopy. Am J Gastroenterol. 2001;96(6):1797-1802. doi: 10.1016/s0002-9270(01)02437-6. PubMed
6. Johnson DA, Barkun AN, Cohen LB, et al. Optimizing adequacy of bowel cleansing for colonoscopy: recommendations from the us multi-society task force on colorectal cancer. Gastroenterology. 2014;147(4):903-924. doi: 10.1053/j.gastro.2014.07.002. PubMed
7. Aronchick CA, Lipshutz WH, Wright SH, et al. A novel tableted purgative for colonoscopic preparation: efficacy and safety comparisons with Colyte and Fleet Phospho-Soda. Gastrointest Endosc. 2000;52(3):346-352. doi: 10.1067/mge.2000.108480. PubMed
8. Froehlich F, Wietlisbach V, Gonvers J-J, Burnand B, Vader J-P. Impact of colonic cleansing on quality and diagnostic yield of colonoscopy: the European Panel of Appropriateness of Gastrointestinal Endoscopy European multicenter study. Gastrointest Endosc. 2005;61(3):378-384. doi: 10.1016/s0016-5107(04)02776-2. PubMed
9. Sarvepalli S, Garber A, Rizk M, et al. 923 adjusted comparison of commercial bowel preparations based on inadequacy of bowel preparation in outpatient settings. Gastrointest Endosc. 2018;87(6):AB127. doi: 10.1016/j.gie.2018.04.1331.
10. Hendry PO, Jenkins JT, Diament RH. The impact of poor bowel preparation on colonoscopy: a prospective single center study of 10 571 colonoscopies. Colorectal Dis. 2007;9(8):745-748. doi: 10.1111/j.1463-1318.2007.01220.x. PubMed
11. Lebwohl B, Wang TC, Neugut AI. Socioeconomic and other predictors of colonoscopy preparation quality. Dig Dis Sci. 2010;55(7):2014-2020. doi: 10.1007/s10620-009-1079-7. PubMed
12. Chorev N, Chadad B, Segal N, et al. Preparation for colonoscopy in hospitalized patients. Dig Dis Sci. 2007;52(3):835-839. doi: 10.1007/s10620-006-9591-5. PubMed
13. Weiss AJ. Overview of Hospital Stays in the United States, 2012. HCUP Statistical Brief #180. Rockville, MD: Agency for Healthcare Research and Quality; 2014. PubMed
14. Kojecky V, Matous J, Keil R, et al. The optimal bowel preparation intervals before colonoscopy: a randomized study comparing polyethylene glycol and low-volume solutions. Dig Liver Dis. 2018;50(3):271-276. doi: 10.1016/j.dld.2017.10.010. PubMed
15. Siddiqui AA, Yang K, Spechler SJ, et al. Duration of the interval between the completion of bowel preparation and the start of colonoscopy predicts bowel-preparation quality. Gastrointest Endosc. 2009;69(3):700-706. doi: 10.1016/j.gie.2008.09.047. PubMed
16. Eun CS, Han DS, Hyun YS, et al. The timing of bowel preparation is more important than the timing of colonoscopy in determining the quality of bowel cleansing. Dig Dis Sci. 2010;56(2):539-544. doi: 10.1007/s10620-010-1457-1. PubMed
17. Ergen WF, Pasricha T, Hubbard FJ, et al. Providing hospitalized patients with an educational booklet increases the quality of colonoscopy bowel preparation. Clin Gastroenterol Hepatol. 2016;14(6):858-864. doi: 10.1016/j.cgh.2015.11.015. PubMed
18. Yadlapati R, Johnston ER, Gluskin AB, et al. An automated inpatient split-dose bowel preparation system improves colonoscopy quality and reduces repeat procedures. J Clin Gastroenterol. 2018;52(8):709-714. doi: 10.1097/mcg.0000000000000849. PubMed
19. Birman-Deych E, Waterman AD, Yan Y, Nilasena DS, Radford MJ, Gage BF. The accuracy of ICD-9-CM codes for identifying cardiovascular and stroke risk factors. Med Care. 2005;43(5):480-485. doi: 10.1097/01.mlr.0000160417.39497.a9. PubMed
20. Parmar R, Martel M, Rostom A, Barkun AN. Validated scales for colon cleansing: a systematic review. Am J Gastroenterol. 2016;111(2):197-204. doi: 10.1038/ajg.2015.417. PubMed
Inadequate bowel preparation (IBP) at the time of inpatient colonoscopy is common and associated with increased length of stay and cost of care.1 The factors that contribute to IBP can be categorized into those that are modifiable and those that are nonmodifiable. While many factors have been associated with IBP, studies have been limited by small sample size or have combined inpatient/outpatient populations, thus limiting generalizability.1-5 Moreover, most factors associated with IBP, such as socioeconomic status, male gender, increased age, and comorbidities, are nonmodifiable. No studies have explicitly focused on modifiable risk factors, such as medication use, colonoscopy timing, or assessed the potential impact of modifying these factors.
In a large, multihospital system, we examine the frequency of IBP among inpatients undergoing colonoscopy along with factors associated with IBP. We attempted to identify
METHODS
Potential Predictors of IBP
Demographic data such as patient age, gender, ethnicity, body mass index (BMI), and insurance/payor status were obtained from the electronic health record (EHR). International Classification of Disease 9th and 10th revision, Clinical Modifications (ICD-9/10-CM) codes were used to obtain patient comorbidities including diabetes, coronary artery disease, heart failure, cirrhosis, gastroparesis, hypothyroidism, inflammatory bowel disease, constipation, stroke, dementia, dysphagia, and nausea/vomiting. Use of opioid medications within three days before colonoscopy was extracted from the medication administration record. These variables were chosen as biologically plausible modifiers of bowel preparation or had previously been assessed in the literature.1-6 The name and volume, classified as 4 L (GoLytely®) and < 4 liters (MoviPrep®) of bowel preparation, time of day when colonoscopy was performed, solid diet the day prior to colonoscopy, type of sedation used (conscious sedation or general anesthesia), and total colonoscopy time (defined as the time from scope insertion to removal) was recorded. Hospitalization-related variables, including the number of hospitalizations in the year before the current hospitalization, the year in which the colonoscopy was performed, and the number of days from admission to colonoscopy, were also obtained from the EHR.
Outcome Measures
An internally validated natural language algorithm, using Structured Queried Language was used to search through colonoscopy reports to identify adequacy of bowel preparation. ProVation® software allows the gastroenterologist to use some terms to describe bowel preparation in a drop-down menu format. In addition to the Aronchik scale (which allows the gastroenterologist to rate bowel preparation on a five-point scale: “excellent,” “good,” “fair,” “poor,” and “inadequate”) it also allows the provider to use terms such as “adequate” or “adequate to detect polyps >5 mm” as well as “unsatisfactory.”7 Mirroring prior literature, bowel preparation quality was classified into “adequate” and “inadequate”; “good” and “excellent” on the Aronchik scale were categorized as adequate as was the term “adequate” in any form; “fair,” “poor,” or “inadequate” on the Aronchik scale were classified as inadequate as was the term “unsatisfactory.” We evaluated the hospital length of stay (LOS) as a secondary outcome measure.
Statistical Analysis
After describing the frequency of IBP, the quality of bowel preparation (adequate vs inadequate) was compared based on the predictors described above. Categorical variables were reported as frequencies with percentages and continuous variables were reported as medians with 25th-75th percentile values. The significance of the difference between the proportion or median values of those who had inadequate versus adequate bowel preparation was assessed. Two-sided chi-square analysis was used to assess the significance of differences between categorical variables and the Wilcoxon Rank-Sum test was used to assess the significance of differences between continuous variables.
Multivariate logistic regression analysis was performed to assess factors associated with hospital predictors and outcomes, after adjusting for all the aforementioned factors and clustering the effect based on the endoscopist. To evaluate the potential impact of modifiable factors on IBP, we performed counterfactual analysis, in which the observed distribution was compared to a hypothetical population in which all the modifiable risk factors were optimal.
RESULTS
Overall, 8,819 patients were included in our study population. They had a median age of 64 [53-76] years; 50.5% were female and 51% had an IBP. Patient characteristics and rates of IBP are presented in Table 1.
In unadjusted analyses, with regards to modifiable factors, opiate use within three days of colonoscopy was associated with a higher rate of IBP (55.4% vs 47.3%, P <.001), as was a lower volume (<4L) bowel preparation (55.3% vs 50.4%, P = .003). IBP was less frequent when colonoscopy was performed before noon vs afternoon (50.3% vs 57.4%, P < .001), and when patients were documented to receive a clear liquid diet or nil per os vs a solid diet the day prior to colonoscopy (50.3% vs 57.4%, P < .001). Overall bowel preparation quality improved over time (Figure 1). Median LOS was five [3-11] days. Patients who had IBP on their initial colonoscopy had a LOS one day longer than patients without IBP (six days vs five days, P < .001).
Multivariate Analysis
Table 2 shows the results of the multivariate analysis. The following modifiable factors were associated with IBP: opiate used within three days of the procedure (OR 1.31; 95% CI 1.8, 1.45), having the colonoscopy performed after12:00
Potential Impact of Modifiable Variables
We conducted a counterfactual analysis based on a multivariate model to assess the impact of each modifiable risk factor on the IBP rate (Figure 1). In the included study population, 44.9% received an opiate, 39.3% had a colonoscopy after 12:00
DISCUSSION
In this large, multihospital cohort, IBP was documented in half (51%) of 8,819 inpatient colonoscopies performed. Nonmodifiable patient characteristics independently associated with IBP were age, male gender, white race, Medicare and Medicaid insurance, nausea/vomiting, dysphagia, and gastroparesis. Modifiable factors included not consuming opiates within three days of colonoscopy, avoidance of a solid diet the day prior to colonoscopy and performing the colonoscopy before noon. The volume of bowel preparation consumed was not associated with IBP. In a counterfactual analysis, we found that if all three modifiable factors were optimized, the predicted rate of IBP would drop to 45%.
Many studies, including our analysis, have shown significant differences between the frequency of IBP in inpatient versus outpatient bowel preparations.8-11 Therefore, it is crucial to study IBP in these settings separately. Three single-institution studies, including a total of 898 patients, have identified risk factors for inpatient IBP. Individual studies ranged in size from 130 to 524 patients with rates of IBP ranging from 22%-57%.1-3 They found IBP to be associated with increasing age, lower income, ASA Grade >3, diabetes, coronary artery disease (CAD), nausea or vomiting, BMI >25, and chronic constipation. Modifiable factors included opiates, afternoon procedures, and runway times >6 hours.
We also found IBP to be associated with increasing age and male gender. However, we found no association with diabetes, chronic constipation, CAD or BMI. As we were able to adjust for a wider variety of variables, it is possible that we were able to account for residual confounding better than previous studies. For example, we found that having nausea/vomiting, dysphagia, and gastroparesis was associated with IBP. Gastroparesis with associated nausea and vomiting may be the mechanism by which diabetes increases the risk for IBP. Further studies are needed to assess if interventions or alternative bowel cleansing in these patients can result in improved IBP. Finally, in contrast to studies with smaller cohorts which found that lower volume bowel preps improved IBP in the right colon,4,12 we found no association between IBP based and volume of bowel preparation consumed. Our impact analysis suggests that avoidance of opiates for at least three days before colonoscopy, avoidance of solid diet on the day before colonoscopy and performing all colonoscopies before noon would
The factors mentioned above may not always be amenable to modification. For example, for patients with active gastrointestinal bleeding, postponing colonoscopy by one day for the sake of maintaining a patient on a clear diet may not be feasible. Similarly, performing colonoscopies in the morning is highly dependent on endoscopy suite availability and hospital logistics. Denying opiates to patients experiencing severe pain is not ethical. In many scenarios, however, these variables could be modified, and institutional efforts to support these practices could yield considerable savings. Future prospective studies are needed to verify the real impact of these changes.
Further discussion is needed to contextualize the finding that colonoscopies scheduled in the afternoon are associated with improved bowel preparation quality. Previous research—albeit in the outpatient setting—has demonstrated 11.8 hours as the maximum upper time limit for the time elapsed between the end of bowel preparation to colonoscopy.14 Another study found an inverse relationship between the quality of bowel preparation and the time after completion of the bowel preparation.15 This makes sense from a physiological perspective as delaying the time between completion of bowel preparation, and the procedure allows chyme from the small intestine to reaccumulate in the colon. Anecdotally, at our institution as well as at many others, the bowel preparations are ordered to start in the evening to allow the consumption of complete bowel preparation by midnight. As a result of this practice, only patients who have their colonoscopies scheduled before noon fall within the optimal period of 11.8 hours. In the outpatient setting, the use of split preparations has led to the obliteration of the difference in the quality of bowel preparation between morning and afternoon colonoscopies.16 Prospective trials are needed to evaluate the use of split preparations to improve the quality of afternoon inpatient colonoscopies.
Few other strategies have been shown to mitigate IBP in the inpatient setting. In a small randomized controlled trial, Ergen et al. found that providing an educational booklet improved inpatient bowel preparation as measured by the Boston Bowel Preparation Scale.17 In a quasi-experimental design, Yadlapati et al. found that an automated split-dose bowel preparation resulted in decreased IBP, fewer repeated procedures, shorter LOS, and lower hospital cost.18 Our study adds to these tools by identifying three additional risk factors which could be optimized for inpatients. Because our findings are observational, they should be subjected to prospective trials. Our study also calls into question the impact of bowel preparation volume. We found no difference in the rate of IBP between low and large volume preparations. It is possible that other factors are more important than the specific preparation employed.
Interestingly, we found that IBP declined substantially in 2014 and continued to decline after that. The year was the most influential risk factor for IBP (on par with gastroparesis). The reason for this is unclear, as rates of our modifiable risk factors did not differ substantially by year. Other possibilities include improved access (including weekend access) to endoscopy coinciding with the development of a new endoscopy facility and use of integrated irrigation pump system instead of the use of manual syringes for flushing.
Our study has many strengths. It is by far the most extensive study of bowel preparation quality in inpatients to date and the only one that has included patient, procedural and bowel preparation characteristics. The study also has several significant limitations. This is a single center study, which could limit generalizability. Nonetheless, it was conducted within a health system with multiple hospitals in different parts of the United States (Ohio and Florida) and included a broad population mix with differing levels of acuity. The retrospective nature of the assessment precludes establishing causation. However, we mitigated confounding by adjusting for a wide variety of factors, and there is a plausible physiological mechanism for each of the factors we studied. Also, the retrospective nature of our study predisposes our data to omissions and misrepresentations during the documentation process. This is especially true with the use of ICD codes.19 Inaccuracies in coding are likely to bias toward the null, so observed associations may be an underestimate of the true association.
Our inability to ascertain if a patient completed the prescribed bowel preparation limited our ability to detect what may be a significant risk factor. Lastly, while clinically relevant, the Aronchik scale used to identify adequate from IBP has never been validated though it is frequently utilized and cited in the bowel preparation literature.20
CONCLUSIONS
In this large retrospective study evaluating bowel preparation quality in inpatients undergoing colonoscopy, we found that more than half of the patients have IBP and that IBP was associated with an extra day of hospitalization. Our study identifies those patients at highest risk and identifies modifiable risk factors for IBP. Specifically, we found that abstinence from opiates or solid diet before the colonoscopy, along with performing colonoscopies before noon were associated with improved outcomes. Prospective studies are needed to confirm the effects of these interventions on bowel preparation quality.
Disclosures
Carol A Burke, MD has received research funding from Ferring Pharmaceuticals. Other authors have no conflicts of interest to disclose.
Inadequate bowel preparation (IBP) at the time of inpatient colonoscopy is common and associated with increased length of stay and cost of care.1 The factors that contribute to IBP can be categorized into those that are modifiable and those that are nonmodifiable. While many factors have been associated with IBP, studies have been limited by small sample size or have combined inpatient/outpatient populations, thus limiting generalizability.1-5 Moreover, most factors associated with IBP, such as socioeconomic status, male gender, increased age, and comorbidities, are nonmodifiable. No studies have explicitly focused on modifiable risk factors, such as medication use, colonoscopy timing, or assessed the potential impact of modifying these factors.
In a large, multihospital system, we examine the frequency of IBP among inpatients undergoing colonoscopy along with factors associated with IBP. We attempted to identify
METHODS
Potential Predictors of IBP
Demographic data such as patient age, gender, ethnicity, body mass index (BMI), and insurance/payor status were obtained from the electronic health record (EHR). International Classification of Disease 9th and 10th revision, Clinical Modifications (ICD-9/10-CM) codes were used to obtain patient comorbidities including diabetes, coronary artery disease, heart failure, cirrhosis, gastroparesis, hypothyroidism, inflammatory bowel disease, constipation, stroke, dementia, dysphagia, and nausea/vomiting. Use of opioid medications within three days before colonoscopy was extracted from the medication administration record. These variables were chosen as biologically plausible modifiers of bowel preparation or had previously been assessed in the literature.1-6 The name and volume, classified as 4 L (GoLytely®) and < 4 liters (MoviPrep®) of bowel preparation, time of day when colonoscopy was performed, solid diet the day prior to colonoscopy, type of sedation used (conscious sedation or general anesthesia), and total colonoscopy time (defined as the time from scope insertion to removal) was recorded. Hospitalization-related variables, including the number of hospitalizations in the year before the current hospitalization, the year in which the colonoscopy was performed, and the number of days from admission to colonoscopy, were also obtained from the EHR.
Outcome Measures
An internally validated natural language algorithm, using Structured Queried Language was used to search through colonoscopy reports to identify adequacy of bowel preparation. ProVation® software allows the gastroenterologist to use some terms to describe bowel preparation in a drop-down menu format. In addition to the Aronchik scale (which allows the gastroenterologist to rate bowel preparation on a five-point scale: “excellent,” “good,” “fair,” “poor,” and “inadequate”) it also allows the provider to use terms such as “adequate” or “adequate to detect polyps >5 mm” as well as “unsatisfactory.”7 Mirroring prior literature, bowel preparation quality was classified into “adequate” and “inadequate”; “good” and “excellent” on the Aronchik scale were categorized as adequate as was the term “adequate” in any form; “fair,” “poor,” or “inadequate” on the Aronchik scale were classified as inadequate as was the term “unsatisfactory.” We evaluated the hospital length of stay (LOS) as a secondary outcome measure.
Statistical Analysis
After describing the frequency of IBP, the quality of bowel preparation (adequate vs inadequate) was compared based on the predictors described above. Categorical variables were reported as frequencies with percentages and continuous variables were reported as medians with 25th-75th percentile values. The significance of the difference between the proportion or median values of those who had inadequate versus adequate bowel preparation was assessed. Two-sided chi-square analysis was used to assess the significance of differences between categorical variables and the Wilcoxon Rank-Sum test was used to assess the significance of differences between continuous variables.
Multivariate logistic regression analysis was performed to assess factors associated with hospital predictors and outcomes, after adjusting for all the aforementioned factors and clustering the effect based on the endoscopist. To evaluate the potential impact of modifiable factors on IBP, we performed counterfactual analysis, in which the observed distribution was compared to a hypothetical population in which all the modifiable risk factors were optimal.
RESULTS
Overall, 8,819 patients were included in our study population. They had a median age of 64 [53-76] years; 50.5% were female and 51% had an IBP. Patient characteristics and rates of IBP are presented in Table 1.
In unadjusted analyses, with regards to modifiable factors, opiate use within three days of colonoscopy was associated with a higher rate of IBP (55.4% vs 47.3%, P <.001), as was a lower volume (<4L) bowel preparation (55.3% vs 50.4%, P = .003). IBP was less frequent when colonoscopy was performed before noon vs afternoon (50.3% vs 57.4%, P < .001), and when patients were documented to receive a clear liquid diet or nil per os vs a solid diet the day prior to colonoscopy (50.3% vs 57.4%, P < .001). Overall bowel preparation quality improved over time (Figure 1). Median LOS was five [3-11] days. Patients who had IBP on their initial colonoscopy had a LOS one day longer than patients without IBP (six days vs five days, P < .001).
Multivariate Analysis
Table 2 shows the results of the multivariate analysis. The following modifiable factors were associated with IBP: opiate used within three days of the procedure (OR 1.31; 95% CI 1.8, 1.45), having the colonoscopy performed after12:00
Potential Impact of Modifiable Variables
We conducted a counterfactual analysis based on a multivariate model to assess the impact of each modifiable risk factor on the IBP rate (Figure 1). In the included study population, 44.9% received an opiate, 39.3% had a colonoscopy after 12:00
DISCUSSION
In this large, multihospital cohort, IBP was documented in half (51%) of 8,819 inpatient colonoscopies performed. Nonmodifiable patient characteristics independently associated with IBP were age, male gender, white race, Medicare and Medicaid insurance, nausea/vomiting, dysphagia, and gastroparesis. Modifiable factors included not consuming opiates within three days of colonoscopy, avoidance of a solid diet the day prior to colonoscopy and performing the colonoscopy before noon. The volume of bowel preparation consumed was not associated with IBP. In a counterfactual analysis, we found that if all three modifiable factors were optimized, the predicted rate of IBP would drop to 45%.
Many studies, including our analysis, have shown significant differences between the frequency of IBP in inpatient versus outpatient bowel preparations.8-11 Therefore, it is crucial to study IBP in these settings separately. Three single-institution studies, including a total of 898 patients, have identified risk factors for inpatient IBP. Individual studies ranged in size from 130 to 524 patients with rates of IBP ranging from 22%-57%.1-3 They found IBP to be associated with increasing age, lower income, ASA Grade >3, diabetes, coronary artery disease (CAD), nausea or vomiting, BMI >25, and chronic constipation. Modifiable factors included opiates, afternoon procedures, and runway times >6 hours.
We also found IBP to be associated with increasing age and male gender. However, we found no association with diabetes, chronic constipation, CAD or BMI. As we were able to adjust for a wider variety of variables, it is possible that we were able to account for residual confounding better than previous studies. For example, we found that having nausea/vomiting, dysphagia, and gastroparesis was associated with IBP. Gastroparesis with associated nausea and vomiting may be the mechanism by which diabetes increases the risk for IBP. Further studies are needed to assess if interventions or alternative bowel cleansing in these patients can result in improved IBP. Finally, in contrast to studies with smaller cohorts which found that lower volume bowel preps improved IBP in the right colon,4,12 we found no association between IBP based and volume of bowel preparation consumed. Our impact analysis suggests that avoidance of opiates for at least three days before colonoscopy, avoidance of solid diet on the day before colonoscopy and performing all colonoscopies before noon would
The factors mentioned above may not always be amenable to modification. For example, for patients with active gastrointestinal bleeding, postponing colonoscopy by one day for the sake of maintaining a patient on a clear diet may not be feasible. Similarly, performing colonoscopies in the morning is highly dependent on endoscopy suite availability and hospital logistics. Denying opiates to patients experiencing severe pain is not ethical. In many scenarios, however, these variables could be modified, and institutional efforts to support these practices could yield considerable savings. Future prospective studies are needed to verify the real impact of these changes.
Further discussion is needed to contextualize the finding that colonoscopies scheduled in the afternoon are associated with improved bowel preparation quality. Previous research—albeit in the outpatient setting—has demonstrated 11.8 hours as the maximum upper time limit for the time elapsed between the end of bowel preparation to colonoscopy.14 Another study found an inverse relationship between the quality of bowel preparation and the time after completion of the bowel preparation.15 This makes sense from a physiological perspective as delaying the time between completion of bowel preparation, and the procedure allows chyme from the small intestine to reaccumulate in the colon. Anecdotally, at our institution as well as at many others, the bowel preparations are ordered to start in the evening to allow the consumption of complete bowel preparation by midnight. As a result of this practice, only patients who have their colonoscopies scheduled before noon fall within the optimal period of 11.8 hours. In the outpatient setting, the use of split preparations has led to the obliteration of the difference in the quality of bowel preparation between morning and afternoon colonoscopies.16 Prospective trials are needed to evaluate the use of split preparations to improve the quality of afternoon inpatient colonoscopies.
Few other strategies have been shown to mitigate IBP in the inpatient setting. In a small randomized controlled trial, Ergen et al. found that providing an educational booklet improved inpatient bowel preparation as measured by the Boston Bowel Preparation Scale.17 In a quasi-experimental design, Yadlapati et al. found that an automated split-dose bowel preparation resulted in decreased IBP, fewer repeated procedures, shorter LOS, and lower hospital cost.18 Our study adds to these tools by identifying three additional risk factors that could be optimized for inpatients. Because our findings are observational, they should be confirmed in prospective trials. Our study also calls into question the impact of bowel preparation volume: we found no difference in the rate of IBP between low-volume and large-volume preparations. It is possible that other factors are more important than the specific preparation employed.
Interestingly, we found that IBP declined substantially in 2014 and continued to decline thereafter. Procedure year was the most influential risk factor for IBP (on par with gastroparesis). The reason for this is unclear, as rates of our modifiable risk factors did not differ substantially by year. Other possibilities include improved access (including weekend access) to endoscopy, coinciding with the opening of a new endoscopy facility, and the use of an integrated irrigation pump system instead of manual syringes for flushing.
Our study has many strengths. It is by far the most extensive study of bowel preparation quality in inpatients to date and the only one that has included patient, procedural, and bowel preparation characteristics. The study also has several significant limitations. It is a single-center study, which could limit generalizability. Nonetheless, it was conducted within a health system with multiple hospitals in different parts of the United States (Ohio and Florida) and included a broad population mix with differing levels of acuity. The retrospective nature of the assessment precludes establishing causation. However, we mitigated confounding by adjusting for a wide variety of factors, and there is a plausible physiological mechanism for each of the factors we studied. In addition, the retrospective design predisposes our data to omissions and inaccuracies in documentation, particularly in the use of ICD codes.19 Inaccuracies in coding are likely to bias toward the null, so the observed associations may underestimate the true associations.
Our inability to ascertain whether a patient completed the prescribed bowel preparation limited our ability to detect what may be a significant risk factor. Lastly, the Aronchick scale used to distinguish adequate from inadequate bowel preparation has never been validated, although it is clinically relevant and frequently utilized and cited in the bowel preparation literature.20
CONCLUSIONS
In this large retrospective study evaluating bowel preparation quality in inpatients undergoing colonoscopy, we found that more than half of the patients had IBP and that IBP was associated with an extra day of hospitalization. Our study identifies the patients at highest risk and highlights modifiable risk factors for IBP. Specifically, we found that avoidance of opiates and of a solid diet before colonoscopy, along with performance of colonoscopies before noon, was associated with improved outcomes. Prospective studies are needed to confirm the effects of these interventions on bowel preparation quality.
Disclosures
Carol A Burke, MD has received research funding from Ferring Pharmaceuticals. Other authors have no conflicts of interest to disclose.
1. Yadlapati R, Johnston ER, Gregory DL, Ciolino JD, Cooper A, Keswani RN. Predictors of inadequate inpatient colonoscopy preparation and its association with hospital length of stay and costs. Dig Dis Sci. 2015;60(11):3482-3490. doi: 10.1007/s10620-015-3761-2. PubMed
2. Jawa H, Mosli M, Alsamadani W, et al. Predictors of inadequate bowel preparation for inpatient colonoscopy. Turk J Gastroenterol. 2017;28(6):460-464. doi: 10.5152/tjg.2017.17196. PubMed
3. McNabb-Baltar J, Dorreen A, Dhahab HA, et al. Age is the only predictor of poor bowel preparation in the hospitalized patient. Can J Gastroenterol Hepatol. 2016;2016:1-5. doi: 10.1155/2016/2139264. PubMed
4. Rotondano G, Rispo A, Bottiglieri ME, et al. Tu1503 Quality of bowel cleansing in hospitalized patients is not worse than that of outpatients undergoing colonoscopy: results of a multicenter prospective regional study. Gastrointest Endosc. 2014;79(5):AB564. doi: 10.1016/j.gie.2014.02.949. PubMed
5. Ness R. Predictors of inadequate bowel preparation for colonoscopy. Am J Gastroenterol. 2001;96(6):1797-1802. doi: 10.1016/s0002-9270(01)02437-6. PubMed
6. Johnson DA, Barkun AN, Cohen LB, et al. Optimizing adequacy of bowel cleansing for colonoscopy: recommendations from the US Multi-Society Task Force on Colorectal Cancer. Gastroenterology. 2014;147(4):903-924. doi: 10.1053/j.gastro.2014.07.002. PubMed
7. Aronchick CA, Lipshutz WH, Wright SH, et al. A novel tableted purgative for colonoscopic preparation: efficacy and safety comparisons with Colyte and Fleet Phospho-Soda. Gastrointest Endosc. 2000;52(3):346-352. doi: 10.1067/mge.2000.108480. PubMed
8. Froehlich F, Wietlisbach V, Gonvers J-J, Burnand B, Vader J-P. Impact of colonic cleansing on quality and diagnostic yield of colonoscopy: the European Panel of Appropriateness of Gastrointestinal Endoscopy European multicenter study. Gastrointest Endosc. 2005;61(3):378-384. doi: 10.1016/s0016-5107(04)02776-2. PubMed
9. Sarvepalli S, Garber A, Rizk M, et al. 923 adjusted comparison of commercial bowel preparations based on inadequacy of bowel preparation in outpatient settings. Gastrointest Endosc. 2018;87(6):AB127. doi: 10.1016/j.gie.2018.04.1331.
10. Hendry PO, Jenkins JT, Diament RH. The impact of poor bowel preparation on colonoscopy: a prospective single center study of 10 571 colonoscopies. Colorectal Dis. 2007;9(8):745-748. doi: 10.1111/j.1463-1318.2007.01220.x. PubMed
11. Lebwohl B, Wang TC, Neugut AI. Socioeconomic and other predictors of colonoscopy preparation quality. Dig Dis Sci. 2010;55(7):2014-2020. doi: 10.1007/s10620-009-1079-7. PubMed
12. Chorev N, Chadad B, Segal N, et al. Preparation for colonoscopy in hospitalized patients. Dig Dis Sci. 2007;52(3):835-839. doi: 10.1007/s10620-006-9591-5. PubMed
13. Weiss AJ. Overview of Hospital Stays in the United States, 2012. HCUP Statistical Brief #180. Rockville, MD: Agency for Healthcare Research and Quality; 2014. PubMed
14. Kojecky V, Matous J, Keil R, et al. The optimal bowel preparation intervals before colonoscopy: a randomized study comparing polyethylene glycol and low-volume solutions. Dig Liver Dis. 2018;50(3):271-276. doi: 10.1016/j.dld.2017.10.010. PubMed
15. Siddiqui AA, Yang K, Spechler SJ, et al. Duration of the interval between the completion of bowel preparation and the start of colonoscopy predicts bowel-preparation quality. Gastrointest Endosc. 2009;69(3):700-706. doi: 10.1016/j.gie.2008.09.047. PubMed
16. Eun CS, Han DS, Hyun YS, et al. The timing of bowel preparation is more important than the timing of colonoscopy in determining the quality of bowel cleansing. Dig Dis Sci. 2010;56(2):539-544. doi: 10.1007/s10620-010-1457-1. PubMed
17. Ergen WF, Pasricha T, Hubbard FJ, et al. Providing hospitalized patients with an educational booklet increases the quality of colonoscopy bowel preparation. Clin Gastroenterol Hepatol. 2016;14(6):858-864. doi: 10.1016/j.cgh.2015.11.015. PubMed
18. Yadlapati R, Johnston ER, Gluskin AB, et al. An automated inpatient split-dose bowel preparation system improves colonoscopy quality and reduces repeat procedures. J Clin Gastroenterol. 2018;52(8):709-714. doi: 10.1097/mcg.0000000000000849. PubMed
19. Birman-Deych E, Waterman AD, Yan Y, Nilasena DS, Radford MJ, Gage BF. The accuracy of ICD-9-CM codes for identifying cardiovascular and stroke risk factors. Med Care. 2005;43(5):480-485. doi: 10.1097/01.mlr.0000160417.39497.a9. PubMed
20. Parmar R, Martel M, Rostom A, Barkun AN. Validated scales for colon cleansing: a systematic review. Am J Gastroenterol. 2016;111(2):197-204. doi: 10.1038/ajg.2015.417. PubMed
© 2019 Society of Hospital Medicine