
Implementing Change in the Heat of the Moment


Early in the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic, the World Health Organization issued guidance for coronavirus disease 2019 (COVID-19) management.1 Based on a high intubation rate among 12 subjects with Middle East respiratory syndrome, noninvasive ventilation (NIV) was discouraged.2 While high-flow nasal oxygen (HFNO) was recognized as a reasonable strategy to avoid endotracheal intubation,1 uncertainty regarding the potential of both therapies to aerosolize SARS-CoV-2 and reports of rapid, unexpected respiratory decompensations were deterrents to use.3 As hospitals prepared for a surge of patients, reports of SARS-CoV-2 transmission to healthcare personnel also emerged. Together, these issues led many institutions to recommend lower than usual thresholds for intubation. This well-intentioned guidance was based on limited historical data, a rapidly evolving literature that frequently appeared on preprint servers before peer review, and anecdotes shared on social media.

As COVID-19 caseloads increased, clinicians were immediately faced with patients who rapidly reached the planned intubation threshold, but also looked very comfortable with minimal to no use of accessory muscles of respiration. In addition, the pace of respiratory decompensation among those who ultimately required intubation was slower than expected. Moreover, intensive care unit (ICU) capacity was stretched thin, raising concern for an imminent need for ventilator rationing. Lastly, the risk of SARS-CoV-2 transmission to healthcare workers appeared well-controlled with the use of personal protective equipment.4

In light of this accumulating experience, sites worldwide evolved quickly from their initial management strategies for COVID-19 respiratory failure. However, the deliberate process described by Soares et al in this issue of the Journal of Hospital Medicine is notable.5 They describe their transition, early in the pandemic, from a conservative early-intubation approach to a new strategy that encouraged use of NIV, HFNO, and self-proning. They were motivated by reports of good outcomes with these interventions, high mortality in intubated patients, and reassurance that aerosolization of respiratory secretions during NIV and HFNO was comparable to that with regular nasal cannula or face mask oxygen.3 The new protocol was defined and rapidly deployed over 4 days using multipronged communication from project and institutional leaders via in-person and electronic means (email, WhatsApp, Google Drive). To facilitate implementation, COVID-19 patients requiring respiratory support were placed in dedicated units with bedside flowsheets for guidance. An immediate impact was demonstrated over the next 2 weeks: use of mechanical ventilation in COVID-19 patients decreased significantly, from 25.2% to 10.7%. In-hospital mortality, the primary outcome, did not change; ICU admissions decreased, as did hospital length of stay (10 vs 8.4 days, though the latter difference was not statistically significant). Together, these findings provide supportive evidence for the relative safety of the new protocol.
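As a rough sketch of the size of that decrease (using only the two proportions quoted above; the editorial does not report the underlying patient counts needed for a formal significance test), the change in mechanical ventilation use can be expressed as absolute and relative reductions:

```python
# Descriptive arithmetic for the reported change in mechanical
# ventilation use: 25.2% before vs 10.7% after the new protocol.
before, after = 0.252, 0.107

absolute_reduction = before - after             # in percentage points
relative_reduction = (before - after) / before  # as a fraction of baseline

print(f"Absolute reduction: {absolute_reduction:.1%} points")  # ~14.5 points
print(f"Relative reduction: {relative_reduction:.1%}")         # ~57.5%
```

That is, roughly 14.5 percentage points fewer patients were ventilated, a relative reduction of more than half from baseline.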

Soares et al exemplify a nimble system that recognized its planned strategies were problematic and then achieved rapid implementation of a new protocol across a four-hospital system. Changes in medical practice are typically much slower; some studies suggest the process can take a decade or more. Implementation science focuses on translating research evidence into clinical practice using strategies tailored to particular contexts. The current study harnessed important implementation principles to quickly translate evidence into practice: effective engagement and education of key stakeholders across specialties (eg, emergency medicine, hospital medicine, critical care, and respiratory therapy), identification of pathways that mitigated barriers, frequent re-evaluation of a rapidly evolving literature, and open-mindedness to the value of change.6 As the pandemic continues, traditional research and implementation science are critical not only to define optimal treatments and management strategies, but also to learn how best to implement successful interventions in an accelerated manner.7

Disclosures

The authors reported no conflicts of interest.

Funding

Dr Hochberg is supported by a National Institutes of Health training grant (T32HL007534).

References

1. World Health Organization. Clinical management of severe acute respiratory infection when novel coronavirus (2019-nCoV) infection is suspected: interim guidance, 28 January 2020. Accessed October 25, 2020. https://apps.who.int/iris/handle/10665/330893
2. Arabi YM, Arifi AA, Balkhy HH, et al. Clinical course and outcomes of critically ill patients with Middle East respiratory syndrome coronavirus infection. Ann Intern Med. 2014;160:389-397. https://doi.org/10.7326/M13-2486
3. Westafer LM, Elia T, Medarametla V, Lagu T. A transdisciplinary COVID-19 early respiratory intervention protocol: an implementation story. J Hosp Med. 2020;15:372-374. https://doi.org/10.12788/jhm.3456
4. Self WH, Tenforde MW, Stubblefield WB, et al. Seroprevalence of SARS-CoV-2 among frontline health care personnel in a multistate hospital network - 13 academic medical centers, April-June 2020. MMWR Morb Mortal Wkly Rep. 2020;69:1221-1226. https://doi.org/10.15585/mmwr.mm6935e2
5. Soares WE III, Schoenfeld EM, Visintainer P, et al. Safety assessment of a noninvasive respiratory protocol. J Hosp Med. 2020;15:734-738. https://doi.org/10.12788/jhm.3548
6. Pronovost PJ, Berenholtz SM, Needham DM. Translating evidence into practice: a model for large scale knowledge translation. BMJ. 2008;337:a1714. https://doi.org/10.1136/bmj.a1714
7. Taylor SP, Kowalkowski MA, Beidas RS. Where is the implementation science? An opportunity to apply principles during the COVID-19 pandemic. Clin Infect Dis. 2020. Online ahead of print. https://doi.org/10.1093/cid/ciaa622

Journal of Hospital Medicine. 2020;15(12):768.

© 2020 Society of Hospital Medicine

Correspondence: David N Hager, MD, PhD; Telephone: 410-614-6292; Email: dhager1@jhmi.edu; Twitter: @davidnhager.

Pediatric Readmissions and the Quality of Hospital-to-Home Transitions


Since 2012, when the Centers for Medicare & Medicaid Services (CMS) began linking financial penalties to hospitals with excessive readmissions for adult patients, researchers have questioned the extent to which pediatric readmissions can be used as a reliable quality measure. Compared with readmissions among adult patients, readmissions among pediatric patients are relatively uncommon. Furthermore, few (approximately 2%) qualify as potentially preventable, and pediatric readmission rates remain largely unchanged despite targeted attempts to prevent reutilization.1,2 Nonetheless, state Medicaid agencies have continued to reduce reimbursement for hospitals based on available readmissions metrics, most commonly the Potentially Preventable Readmissions (PPR) algorithm.1

In this issue of the Journal of Hospital Medicine, Auger et al3 performed a retrospective study to explore four existing metrics of pediatric hospital readmissions and their ability to identify preventable and unplanned readmissions. Investigators examined 30-day readmissions (n = 1,125) from 2014 to 2016 across multiple subspecialties and classified readmissions by preventability and unplanned status using a validated chart abstraction tool. With the chart abstraction results as the gold standard, investigators calculated the sensitivity and specificity, and estimated the positive and negative predictive values, of each readmissions metric. Auger and colleagues found that none of the four metrics could reliably assess preventability and that only one reliably identified unplanned hospital readmissions. Specifically, the commonly used PPR algorithm was estimated to have a positive predictive value of 13.0%-35.5% across a prevalence range of 10%-30%. This means that in a hospital where 10% of readmissions are truly preventable, the PPR will be wrong approximately 87% of the time. Tying payments to this metric is difficult to justify.
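The prevalence-dependence of positive predictive value that drives this result follows directly from Bayes' rule. A minimal sketch, using hypothetical sensitivity and specificity values (not reported in the editorial, chosen only to illustrate the shape of the relationship):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence            # P(flagged & preventable)
    false_pos = (1 - specificity) * (1 - prevalence)  # P(flagged & not preventable)
    return true_pos / (true_pos + false_pos)

# Hypothetical operating characteristics, for illustration only.
sens, spec = 0.75, 0.55
for prev in (0.10, 0.20, 0.30):
    print(f"prevalence {prev:.0%}: PPV = {ppv(sens, spec, prev):.1%}")
```

With any fixed sensitivity and specificity, PPV climbs as the true prevalence of preventable readmissions rises, which is why the study's estimate spans a wide range (13.0%-35.5%) over prevalences of 10%-30%; at the low end, most positive flags are false positives.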

The authors highlighted the policy implications of the PPR falling short in its ability to identify preventable and unplanned pediatric readmissions. A good quality measure should be consistently reliable, and neither the PPR nor other measures studied meets this benchmark. Yet the findings lead to a broader conclusion: if most pediatric readmissions are not preventable, if there is no reliable way of measuring preventability, and if we have not demonstrated the ability to change patient trajectories away from reutilization, then perhaps the sun has set on using readmissions as a comprehensive quality measure for hospital-based care.

So how, then, should the hospital-to-home transition be evaluated? The paradigm of pediatric value of care is shifting to incorporate family-centered perspectives into quality measures.2 Measures must balance healthcare costs against outcomes that affect families, taking into account issues such as patient and caregiver anxiety and time away from work.2 Moreover, because social determinants of health and medical complexity strongly influence readmission rates,4,5 focus should be placed on redirecting resources toward patients and families with significant medical, social, and financial needs as they transition home from the hospital. While measures of healthcare equity are currently lacking, the overall quality and equity of pediatric care transitions could be enhanced by looking beyond the narrow lens of readmission rates to incorporate actual needs assessments of families.

In summary, Auger and colleagues identified deficits in existing readmission metrics—but creating a solution that is meaningful to all stakeholders will be more complex than simply identifying a better metric. Family-centered quality metrics show promise in creating value in pediatric care within an equitable health system, but long-term evaluation of these metrics is necessary.

Disclosure

The authors have nothing to disclose.

References

1. Auger KA, Harris JM, Gay JC, et al. Progress (?) toward reducing pediatric readmissions. J Hosp Med. 2019;14(10):618-621. https://doi.org/10.12788/jhm.3210
2. Forrest CB, Silber JH. Concept and measurement of pediatric value. Acad Pediatr. 2014;14(5 Suppl):S33-S38. https://doi.org/10.1016/j.acap.2014.03.013
3. Auger K, Ponti-Zins M, Statile A, Wesselkamper K, Haberman B, Hanke S. Performance of pediatric readmission measures. J Hosp Med. 2020;15:723-726. https://doi.org/10.12788/jhm.3521
4. Berry JG, Hall DE, Kuo DZ, et al. Hospital utilization and characteristics of patients experiencing recurrent readmissions within children’s hospitals. JAMA. 2011;305(7):682-690. https://doi.org/10.1001/jama.2011.122
5. Beck AF, Huang B, Simmons JM, et al. Role of financial and social hardships in asthma racial disparities. Pediatrics. 2014;133(3):431-439. https://doi.org/10.1542/peds.2013-2437

Journal of Hospital Medicine. 2020;15(12):767.

Since 2012, when the Centers for Medicare & Medicaid Services (CMS) began linking financial penalties to hospitals with excessive readmissions for adult patients, researchers have questioned the extent to which pediatric readmissions can be used as a reliable quality measure. Compared with readmissions among adult patients, readmissions among pediatric patients are relatively uncommon. Furthermore, few (approximately 2%) qualify as potentially preventable, and pediatric readmission rates remain largely unchanged despite targeted attempts to prevent reutilization.1,2 Nonetheless, state Medicaid agencies have continued to reduce reimbursement for hospitals based on available readmissions metrics, most commonly the Potentially Preventable Readmissions (PPR) algorithm.1

In this issue of the Journal of Hospital Medicine, Auger et al3 performed a retrospective study to explore four existing metrics of pediatric hospital readmissions for their ability to identify preventable and unplanned readmissions. Investigators examined 30-day readmissions (n = 1,125) from 2014-2016 across multiple subspecialties, and classified readmissions by their preventability and unplanned status with use of a validated chart abstraction tool. Using the results of chart abstraction as the gold standard, investigators calculated the sensitivity and specificity, as well as estimated the positive and negative predictive values, of each readmissions metric. Auger and colleagues found that none of the four readmissions metrics could reliably assess preventability, and that only one metric reliably predicted unplanned hospital readmissions. Specifically, the commonly used PPR algorithm was estimated to have a positive predictive value of 13.0%-35.5% across a prevalence range of 10%-30%. This means that in a hospital where 10% of readmissions are truly preventable, the PPR will be wrong approximately 87% of the time. Tying payments to this metric is difficult to justify.

The authors highlighted the policy implications of the PPR falling short in its ability to identify preventable and unplanned pediatric readmissions. A good quality measure should be consistently reliable, and neither the PPR nor other measures studied meets this benchmark. Yet the findings lead to a broader conclusion: if most pediatric readmissions are not preventable, if there is no reliable way of measuring preventability, and if we have not demonstrated the ability to change patient trajectories away from reutilization, then perhaps the sun has set on using readmissions as a comprehensive quality measure for hospital-based care.

So how, then, should the hospital-to-home transition be evaluated? The paradigm of pediatric value of care is shifting to incorporate family-centered perspectives into consideration of quality measures.2 There has to be a balance between healthcare costs and outcomes that affect families; measures should take into account issues such as patient and caregiver anxiety and time away from work.2 Moreover, because social determinants of health and medical complexity strongly influence readmission rates,4,5 focus should be placed on redirecting resources toward patients and families with significant medical, social, and financial needs as they transition home from the hospital. While measures of healthcare equity are currently lacking, the overall quality and equity of pediatric care transitions could be enhanced by looking beyond the narrow lens of readmission rates to incorporate actual needs assessments of families.

In summary, Auger and colleagues identified deficits in existing readmission metrics—but creating a solution that is meaningful to all stakeholders will be more complex than simply identifying a better metric. Family-centered quality metrics show promise in creating value in pediatric care within an equitable health system, but long-term evaluation of these metrics is necessary.

Disclosure

The authors have nothing to disclose.

Since 2012, when the Centers for Medicare & Medicaid Services (CMS) began linking financial penalties to hospitals with excessive readmissions for adult patients, researchers have questioned the extent to which pediatric readmissions can be used as a reliable quality measure. Compared with readmissions among adult patients, readmissions among pediatric patients are relatively uncommon. Furthermore, few (approximately 2%) qualify as potentially preventable, and pediatric readmission rates remain largely unchanged despite targeted attempts to prevent reutilization.1,2 Nonetheless, state Medicaid agencies have continued to reduce reimbursement for hospitals based on available readmissions metrics, most commonly the Potentially Preventable Readmissions (PPR) algorithm.1

In this issue of the Journal of Hospital Medicine, Auger et al3 performed a retrospective study to explore four existing metrics of pediatric hospital readmissions for their ability to identify preventable and unplanned readmissions. Investigators examined 30-day readmissions (n = 1,125) from 2014-2016 across multiple subspecialties, and classified readmissions by their preventability and unplanned status with use of a validated chart abstraction tool. Using the results of chart abstraction as the gold standard, investigators calculated the sensitivity and specificity, as well as estimated the positive and negative predictive values, of each readmissions metric. Auger and colleagues found that none of the four readmissions metrics could reliably assess preventability, and that only one metric reliably predicted unplanned hospital readmissions. Specifically, the commonly used PPR algorithm was estimated to have a positive predictive value of 13.0%-35.5% across a prevalence range of 10%-30%. This means that in a hospital where 10% of readmissions are truly preventable, the PPR will be wrong approximately 87% of the time. Tying payments to this metric is difficult to justify.

The authors highlighted the policy implications of the PPR falling short in its ability to identify preventable and unplanned pediatric readmissions. A good quality measure should be consistently reliable, and neither the PPR nor other measures studied meets this benchmark. Yet the findings lead to a broader conclusion: if most pediatric readmissions are not preventable, if there is no reliable way of measuring preventability, and if we have not demonstrated the ability to change patient trajectories away from reutilization, then perhaps the sun has set on using readmissions as a comprehensive quality measure for hospital-based care.

So how, then, should the hospital-to-home transition be evaluated? The paradigm of pediatric value of care is shifting to incorporate family-centered perspectives into consideration of quality measures.2 There has to be a balance between healthcare costs and outcomes that affect families; measures should take into account issues such as patient and caregiver anxiety and time away from work.2 Moreover, because social determinants of health and medical complexity strongly influence readmission rates,4,5 focus should be placed on redirecting resources toward patients and families with significant medical, social, and financial needs as they transition home from the hospital. While measures of healthcare equity are currently lacking, the overall quality and equity of pediatric care transitions could be enhanced by looking beyond the narrow lens of readmission rates to incorporate actual needs assessments of families.

In summary, Auger and colleagues identified deficits in existing readmission metrics—but creating a solution that is meaningful to all stakeholders will be more complex than simply identifying a better metric. Family-centered quality metrics show promise in creating value in pediatric care within an equitable health system, but long-term evaluation of these metrics is necessary.

Disclosure

The authors have nothing to disclose.

References

1. Auger KA, Harris JM, Gay JC, et al. Progress (?) toward reducing pediatric readmissions. J Hosp Med. 2019;14(10):618-621. https://doi.org/10.12788/jhm.3210
2. Forrest CB, Silber JH. Concept and measurement of pediatric value. Acad Pediatr. 2014;14(5 Suppl):S33-S38. https://doi.org/10.1016/j.acap.2014.03.013
3. Auger K, Ponti-Zins M, Statile A, Wesselkamper K, Haberman B, Hanke S. Performance of pediatric readmission measures. J Hosp Med. 2020;15:723-726. https://doi.org/10.12788/jhm.3521
4. Berry JG, Hall DE, Kuo DZ, et al. Hospital utilization and characteristics of patients experiencing recurrent readmissions within children’s hospitals. JAMA. 2011;305(7):682-690. https://doi.org/10.1001/jama.2011.122
5. Beck AF, Huang B, Simmons JM, et al. Role of financial and social hardships in asthma racial disparities. Pediatrics. 2014;133(3):431-439. https://doi.org/10.1542/peds.2013-2437

Journal of Hospital Medicine 15(12):767
© 2020 Society of Hospital Medicine
Correspondence: Morgan Congdon MD, MPH; Email: congdonm@email.chop.edu; Telephone: 215-906-1261; Twitter: @CongdonMorgan.

Deimplementation of Established Medical Practice Without Intervention: Does It Actually Happen?


In this edition of the Journal of Hospital Medicine, Fenster and colleagues evaluate trends in postdischarge intravenous (IV) antibiotic therapy for children with osteomyelitis, complicated pneumonia, and complicated appendicitis.1 Children requiring prolonged antibiotic therapy were historically discharged home with a peripherally inserted central catheter (PICC) for IV antibiotics. Recent studies suggest that treatment failure is uncommon and that oral antibiotics are as effective as those administered intravenously.2-4 Oral antibiotics also avoid PICC-related complications, such as line malfunction, infection, and thrombosis, all of which lead to increased hospital revisits.

QUESTIONING ESTABLISHED MEDICAL PRACTICE

New research seldom leads to rapid change in clinical practice.5 This is particularly the case when new evidence favors the abandonment of accepted medical practices or supports the deimplementation of low-value care. A mounting body of evidence suggests that postdischarge IV antibiotic therapy is low-value care for children with osteomyelitis, complicated pneumonia, and complicated appendicitis, and that its overuse is associated with unnecessary harm. Fenster and colleagues sought to evaluate the extent to which the management of these conditions has changed over time in the United States. They conducted a retrospective cohort study of children discharged from hospitals contributing data to the Pediatric Health Information System (PHIS) database, applying validated algorithms based on discharge diagnosis and procedure codes to identify children with the three conditions who were discharged home on IV antibiotic therapy.

Between January 2000 and December 2018, across 52 hospitals, there were 24,753 hospitalizations for osteomyelitis, 13,700 for complicated pneumonia, and 60,575 for complicated appendicitis. Rates of postdischarge IV antibiotic therapy decreased over time for all conditions: from 61% to 22% for osteomyelitis, from 29% to 19% for complicated pneumonia, and from 13% to 2% for complicated appendicitis. Rather than assuming a gradual reduction over time, the authors used piecewise linear regression to identify an inflection point at which the decrease began; for all three conditions, it occurred around 2009 or 2010. Despite the overall decrease, significant variation in practice patterns among hospitals persisted in 2018. For example, while the median rate of postdischarge IV antibiotic therapy for osteomyelitis was 18%, the interquartile range was 9% to 40%.

The authors conducted several sensitivity analyses, including exclusion of hospitals that provided data for only certain years, which supported the robustness of the findings. Yet there are important limitations, most notably the lack of data on outcomes related to overuse and efficiency: the type of antibiotics used (narrow vs broad spectrum), total duration of antibiotics, and variation in length of stay. The validated algorithms were also based on older ICD-9 codes and may perform less well with the ICD-10 codes used from 2015 onward. Lastly, the findings are limited to children’s hospitals and may not apply to the general hospitals that care for many children.

CAN DEIMPLEMENTATION HAPPEN WITHOUT INTERVENTIONS?

The authors suggest that the deimplementation of postdischarge IV antibiotic therapy for the three conditions occurred spontaneously. Yet it is worth considering the agents of change, operating at different levels, that may have influenced these observations: research evidence, national condition-specific guidelines, national efforts at reducing overuse and improving safety, local hospital efforts, and shared decision-making.

Postdischarge antibiotic therapy options for osteomyelitis, complicated pneumonia, and complicated appendicitis are supported by weak research evidence. Oral and parenteral therapy appear equally effective, but this conclusion rests on observational data; a randomized controlled trial is unlikely ever to be conducted because outcomes such as treatment failure are uncommon. In such scenarios, greater emphasis should be placed on factors other than effectiveness, such as harms, the availability of alternative options, and cost.6 For postdischarge IV antibiotic therapy, one potential explanation for the observed deimplementation is greater awareness of harm, with up to 20% of cases treated with IV antibiotics requiring PICC removal.7 There is also a readily available alternative (oral antibiotics) with a favorable cost and effectiveness profile.

National condition guidelines advocating early transition to oral antibiotic therapy began to appear before and during the observed inflection point of 2009 and 2010. The 2002 British Thoracic Society guidelines for community-acquired pneumonia suggested considering oral agents after clear evidence of improvement,8 and the 2010 Infectious Diseases Society of America guidelines recommended oral antibiotic options for children discharged home with intra-abdominal infections.9 A systematic review published in 2002 also questioned the need for prolonged IV antibiotic therapy compared with early transition to oral agents in osteomyelitis.10 While no targeted national interventions to drive practice change existed, widespread national efforts at reducing overuse (eg, Choosing Wisely®) and improving safety (eg, reducing central line complications) have increased in the past decade.11

An important agent of change that Fenster and colleagues were not able to tease out is the impact of local hospital-level efforts. In parallel with national efforts, there have likely been targeted hospital-level interventions that are disease specific (eg, order sets, pathways/guidelines, shared decision-making tools) or focused on reducing adverse events (eg, reducing inappropriate PICC use). For example, between 2010 and 2012, one US children’s hospital increased the proportion of children with osteomyelitis discharged on oral antibiotics from a median of 0% to 100% with a bundle of quality improvement interventions, including standardized treatment protocols and shared decision-making.12

Despite these encouraging results, up to 22% of children were still discharged on postdischarge IV antibiotic therapy, and significant variation persisted in 2018. Evidence of harm, and even strong recommendations to change practice, are by themselves inadequate to drive behavior change.13 While some deimplementation may have occurred organically over the past two decades, it is time for concerted deimplementation strategies that focus on practitioners or hospitals with “entrenched practices.”6

Disclosures

Dr Gill has received grant funding from the Canadian Paediatric Society, the Hospital for Sick Children, and the Canadian Institutes of Health Research (CIHR) in the past 5 years. He is on the editorial board of BMJ Evidence-Based Medicine (EBM) and on the Institute Advisory Board for the CIHR Institute of Human Development and Child and Youth Health (IHDCYH), for which he has expenses reimbursed to attend meetings. He is a member of the EBMLive steering committee and has expenses reimbursed to attend the conference. Dr Mahant has received grant funding from CIHR in the past 5 years and is a Senior Deputy Editor of the Journal of Hospital Medicine. The authors reported no conflicts of interest or financial relationships relevant to this manuscript.

References

1. Fenster ME, Hersh AL, Srivastava R, Keren R, Wilkes J, Coon ER. Trends in use of postdischarge intravenous antibiotic therapy for children. J Hosp Med. 2020;15:731-733. https://doi.org/10.12788/jhm.3422
2. Keren R, Shah SS, Srivastava R, et al. Comparative effectiveness of intravenous vs oral antibiotics for postdischarge treatment of acute osteomyelitis in children. JAMA Pediatr. 2015;169(2):120-128. https://doi.org/10.1001/jamapediatrics.2014.2822
3. Rangel SJ, Anderson BR, Srivastava R, et al. Intravenous versus oral antibiotics for the prevention of treatment failure in children with complicated appendicitis: has the abandonment of peripherally inserted catheters been justified? Ann Surg. 2017;266(2):361-368. https://doi.org/10.1097/sla.0000000000001923
4. Shah SS, Srivastava R, Wu S, et al. Intravenous versus oral antibiotics for postdischarge treatment of complicated pneumonia. Pediatrics. 2016;138(6):e20161692. https://doi.org/10.1542/peds.2016-1692
5. Davidoff F. On the undiffusion of established practices. JAMA Intern Med. 2015;175(5):809-811. https://doi.org/10.1001/jamainternmed.2015.0167
6. Prasad V, Ioannidis JP. Evidence-based de-implementation for contradicted, unproven, and aspiring healthcare practices. Implement Sci. 2014;9:1. https://doi.org/10.1186/1748-5908-9-1
7. Jumani K, Advani S, Reich NG, Gosey L, Milstone AM. Risk factors for peripherally inserted central venous catheter complications in children. JAMA Pediatr. 2013;167(5):429-435. https://doi.org/10.1001/jamapediatrics.2013.775
8. British Thoracic Society Standards of Care Committee. British Thoracic Society guidelines for the management of community acquired pneumonia in childhood. Thorax. 2002;57(Suppl 1):i1-i24. https://doi.org/10.1136/thorax.57.90001.i1
9. Solomkin JS, Mazuski JE, Bradley JS, et al. Diagnosis and management of complicated intra-abdominal infection in adults and children: guidelines by the Surgical Infection Society and the Infectious Diseases Society of America. Clin Infect Dis. 2010;50(2):133-164. https://doi.org/10.1086/649554
10. Le Saux N, Howard A, Barrowman NJ, Gaboury I, Sampson M, Moher D. Shorter courses of parenteral antibiotic therapy do not appear to influence response rates for children with acute hematogenous osteomyelitis: a systematic review. BMC Infect Dis. 2002;2:16. https://doi.org/10.1186/1471-2334-2-16
11. Born K, Kool T, Levinson W. Reducing overuse in healthcare: advancing Choosing Wisely. BMJ. 2019;367:l6317. https://doi.org/10.1136/bmj.l6317
12. Brady PW, Brinkman WB, Simmons JM, et al. Oral antibiotics at discharge for children with acute osteomyelitis: a rapid cycle improvement project. BMJ Qual Saf. 2014;23(6):499-507. https://doi.org/10.1136/bmjqs-2013-002179
13. Rosenberg A, Agiro A, Gottlieb M, et al. Early trends among seven recommendations from the choosing wisely campaign. JAMA Intern Med. 2015;175(12):1913-1920. https://doi.org/10.1001/jamainternmed.2015.5441

Journal of Hospital Medicine 15(12):765-766

Issue
Journal of Hospital Medicine 15(12)
Page Number
765-766

© 2020 Society of Hospital Medicine

Correspondence Location
Peter J Gill, MD, DPhil, FRCPC; Email: peter.gill@sickkids.ca; Telephone: 416-813-7654 (ext 308881); Twitter: @peterjgill.

Assessing Individual Hospitalist Performance: Domains and Attribution

Article Type
Changed
Thu, 10/01/2020 - 05:15

When asked by a friend or family member, “Which hospital did you go to?” or “Which doctor did you see?” most of us are likely to answer with a single institution or clinician. Yet for hospital stays, the patient’s experience and outcomes are the product of many individuals and an entire system of care, so measuring performance at the group, or “team,” level is appropriate.

Assessing and managing performance of individuals in healthcare is also important. In this regard, though, healthcare may be more like assessing individual baseball players prior to the widespread adoption of detailed statistics, a transition to what is often referred to as sabermetrics (and popularized by the 2004 book Moneyball).1 An individual player’s performance and future potential went from being assessed largely by the opinion of expert talent scouts to including, or even principally relying on, a wide array of measurements and statistics.

It sometimes seems healthcare has arrived at its “sabermetrics moment.” There is a rapidly growing set of measures for individual clinicians, and nearly every week, hospitalists open a new report of their performance sent by a payer, a government agency, their own hospital, or another organization. But most of these metrics suffer from problems with attributing performance to a single clinician; for example, many or most metrics attribute performance to the attending of record at the time of a patient’s discharge. Clinical metrics (eg, beta-blocker administration when indicated, length of stay [LOS], readmissions), patient experience, financial metrics (eg, cost per case), and others are vital to understanding performance at an aggregate level, such as a hospital or physician group, but they are potentially confusing or even misleading when attributed entirely to the discharging provider. So healthcare leaders still tend to rely meaningfully on expert opinion—“talent scouts”—to identify high performers.

In this issue of the Journal of Hospital Medicine, Dow and colleagues have advanced our understanding of the current state of individual- rather than group-level hospitalist performance measurement.2 This scoping review identified 43 studies published over the last 25 years reporting individual adult or pediatric hospitalist performance across one or more of the STEEEP framework domains of performance: Safe, Timely, Effective, Efficient, Equitable, Patient Centered.3

The most common domain assessed was Patient Centered (20 studies), followed in descending order by Safe (16), Efficient (13), Timely (10), and Effective (9). No studies reported individual hospitalist performance on Equitable care. This distribution of studied domains likely reflects readily available data and processes more than the level of interest or importance attached to each domain. The review was not designed to assess the quality of each study, and some, or even many, might have weaknesses both in how they determined which clinicians met the definition of hospitalist and in how they attributed performance to individuals. The authors appropriately conclude that “further defining and refining approaches to assess individual performance is necessary to ensure the highest quality.”

Their findings should help guide research priorities regarding measurement of individual hospitalist performance. Yet each hospitalist group and individual hospitalist still faces decisions about managing group and personal performance and must navigate without the benefit of research providing clear direction. Many hospitalist metrics are tracked and reported to meet regulatory requirements, such as those from the Centers for Medicare & Medicaid Services, to monitor financial performance for the local hospital and hospitalist group, and to serve as components of hospitalist compensation. (The biennial State of Hospital Medicine Report captures extensive data regarding the latter.4)

Many people and processes across an entire healthcare system influence performance on every metric, but it is useful and practical to attribute some metrics entirely to a single hospitalist provider, such as timely documentation and the time of day the discharge order is entered. And arguably, it is useful to attribute readmission rate entirely to the discharging provider—the last hospital provider who can influence readmission risk. But for most other metrics, individual attribution is problematic or misleading, and collective experience and expert opinion are helpful here. Two relatively simple approaches have gained some popularity for teasing out individual contributions to hospitalist performance.

One can estimate an individual hospitalist’s contribution to patient LOS by calculating the ratio of current procedural terminology (CPT) codes for all follow-up services to all discharge codes. Among hospitalists in a group who care for a similar population, those with the highest ratios likely manage patients in ways associated with longer LOS. It is relatively simple to calculate the ratio from billing data, and some groups report it for all providers monthly.
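In code, this ratio is a simple tally over billing data. The sketch below assumes a hypothetical record format of (hospitalist, CPT code) pairs and uses the common subsequent-care codes (99231-99233) and discharge-day codes (99238-99239) purely for illustration; a real group's billing extract and code sets will differ.

```python
from collections import Counter

# Illustrative CPT code sets (assumptions, not a billing standard):
# 99231-99233 are subsequent (follow-up) hospital visit codes;
# 99238-99239 are discharge day management codes.
FOLLOW_UP = {"99231", "99232", "99233"}
DISCHARGE = {"99238", "99239"}

def follow_up_to_discharge_ratio(billing_records):
    """Ratio of follow-up encounters to discharges per hospitalist.

    A higher ratio suggests more follow-up days per discharge, ie,
    practice patterns associated with longer length of stay.
    """
    follow_ups, discharges = Counter(), Counter()
    for hospitalist, cpt in billing_records:
        if cpt in FOLLOW_UP:
            follow_ups[hospitalist] += 1
        elif cpt in DISCHARGE:
            discharges[hospitalist] += 1
    # Only hospitalists with at least one discharge get a ratio.
    return {h: follow_ups[h] / n for h, n in discharges.items() if n > 0}

# Hypothetical month of billing records for two hospitalists.
records = [("A", "99232"), ("A", "99232"), ("A", "99238"),
           ("B", "99232"), ("B", "99233"), ("B", "99231"), ("B", "99239")]
print(follow_up_to_discharge_ratio(records))  # {'A': 2.0, 'B': 3.0}
```

Hospitalist B bills three follow-up visits per discharge versus two for A, flagging B for a closer look at LOS drivers.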

Many metrics that aggregate performance across an entire hospital stay, such as patient experience surveys, can be apportioned to each hospitalist who had a billed encounter with the patient. For example, if a hospitalist has 4 of a patient’s 10 billed encounters within the same group, then 40% of the patient’s survey score could be attributed to that hospitalist. It’s still imperfect, but it’s likely more meaningful than attributing the entire survey result to only the discharging provider.
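The apportionment described above is straightforward weighted attribution; a minimal sketch (with a hypothetical survey score and encounter counts) might look like this:

```python
def apportion_score(survey_score, encounters_by_hospitalist):
    """Split one patient's survey score across hospitalists in
    proportion to each one's share of the billed encounters."""
    total = sum(encounters_by_hospitalist.values())
    return {h: survey_score * n / total
            for h, n in encounters_by_hospitalist.items()}

# Hypothetical stay rated 80/100: hospitalist X had 4 of the 10
# billed encounters, so 40% of the score is attributed to X.
print(apportion_score(80, {"X": 4, "Y": 6}))  # {'X': 32.0, 'Y': 48.0}
```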

These approaches have value but still leave us unsatisfied and unable to assess performance as effectively as we would like. Advancements in measurement have been slow and incremental, but they are likely to accelerate as maturing electronic health records are paired with machine learning or artificial intelligence, wearable devices, and sensors in patient rooms, which collectively may make capturing a robust set of metrics trivially easy (while raising privacy questions, among other concerns). For example, it is already possible to capture via a smart speaker all conversations between patient, loved ones, and clinician.5 Imagine being presented with a word cloud summary of all conversations you had with all patients over a year. Did you use empathy words often enough? How reliably did you address all appropriate discharge-related topics?

As performance metrics become more numerous and ubiquitous, the challenge will be to ensure they accurately capture what they appear to measure, are appropriately attributed to individuals or groups, and provide insights into important domains of performance. Significant opportunity for improvement remains.

Disclosure

Dr Nelson has no conflict of interest to disclose.

References

1. Lewis M. Moneyball: The Art of Winning an Unfair Game. W.W. Norton & Company; 2004.
2. Dow AW, Chopski B, Cyrus JW, et al. A STEEEP hill to climb: a scoping review of assessments of individual hospitalist performance. J Hosp Med. 2020;15:599-605. https://doi.org/10.12788/jhm.3445
3. Institute of Medicine (US) Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. National Academy Press (US); 2001. https://doi.org/10.17226/10027
4. 2018 State of Hospital Medicine Report. Society of Hospital Medicine. Accessed May 19, 2020. https://www.hospitalmedicine.org/practice-management/shms-state-of-hospital-medicine/
5. Chiu CC, Tripathi A, Chou K, et al. Speech recognition for medical conversations. arXiv. Preprint posted online November 20, 2017. Revised June 20, 2018. https://arxiv.org/pdf/1711.07274.pdf

Issue
Journal of Hospital Medicine 15(10)
Page Number
639-640


© 2020 Society of Hospital Medicine

Correspondence Location
John R Nelson, MD, MHM; Email: john.nelson@nelsonflores.com; Telephone: 425-467-3316.

Hospital Star Ratings and Sociodemographics: A Scoring System in Need of Revision

Article Type
Changed
Thu, 10/01/2020 - 05:15

Still in its infancy, the Hospital Compare overall hospital quality star rating program introduced by the Centers for Medicare & Medicaid Services (CMS) has generated intense industry debate. Individual health systems are microcosms of the challenges of ratings and measurement design. Sibley Memorial Hospital, a member of Johns Hopkins Medicine, is a well-run, 288-bed, community hospital located in a wealthy section of northwest District of Columbia with a five-star rating. In contrast, its academic partner, the Johns Hopkins Hospital, a 1,162-bed hospital with a century-long history of innovation situated in an impoverished Baltimore, Maryland, neighborhood, received a three-star rating.

Hospital ratings are the product of an industry in transition: As care delivery has shifted from an individual provider-driven endeavor to an increasingly scaled systems enterprise, policymakers implemented regulatory standards targeting quality measurement. Subsequent to the 1999 Institute of Medicine (now the National Academy of Medicine) report To Err Is Human, policy efforts brought public reporting of quality ratings to multiple market segments, including dialysis facilities (2001), nursing homes (2003), Medicare Advantage plans (2007), and physicians (2015). The hospital industry was no exception, and in 2016—with much controversy1—CMS launched the hospital star ratings program.

CMS Star Ratings for hospitals are based on seven measure groups: mortality, safety, readmission, patient experience, effectiveness, timeliness, and efficient use of medical imaging. Both industry and researchers have decried the challenges of star ratings, noting that hospitals with a narrower scope of services are more likely to receive higher ratings.2 Measure groupings may be further flawed as shown by recent work demonstrating that larger, safety net, or academic hospitals, as well as hospitals offering transplant services, have higher readmission rates,3 which may be caused by differences in patient complexity. Other research has demonstrated that overall quality ratings inappropriately pool all hospitals together, when it may be fairer to initially categorize hospitals and then score them.4

It is within this maelstrom of debate that, in this month’s issue of the Journal of Hospital Medicine, Shi and colleagues explore the relationship between hospital star ratings and the socioeconomic features of the surrounding communities.5 Conducting their analysis by linking multiple reputable government and industry sources, Shi and colleagues found that counties with higher educational attainment and a lower proportion of dual Medicare-Medicaid–eligible populations had higher hospital star ratings. Furthermore, a county’s minority population percentage negatively correlated with hospital ratings. Validating the experience of many rural hospital executives—who frequently face financial challenges—Shi and colleagues noted that rural hospitals were less likely to receive five-star ratings.

Do these findings reflect a true disparity and lack of access to high-quality hospitals, or are they artifactual—secondary to a flawed construct of hospital quality measurement? Many lower-ranking hospitals are urban academic centers frequently providing services not offered at their five-star community counterparts, such as neurosurgery, comprehensive cancer care, and organ transplants, while simultaneously serving as safety net hospitals, research institutions, trauma centers, and national referral centers.

Sociodemographic factors weigh significantly on patients’ capacity for self-care after hospitalization. Health literacy, access to primary and behavioral healthcare, and transportation all affect star indicators. Recent work6 demonstrated that comprehensive investments in transitional care strategies and the social determinants of health were ineffective at reducing readmissions, which suggests that high readmission rates for hospitals in impoverished areas are not only common but also may not accurately reflect hospital quality and local investment.

Patient experience measures add further complexity: Research demonstrates that patient perceptions vary significantly by education, age, primary language, ethnicity, and overall health. For example, one-third of average-ranked hospitals would have their rankings vary by at least 18 percentile points when evaluated by Spanish-speaking patients. Star ratings fail to capture and communicate this granularity.7

More concerning is that star ratings inherently assume that hospital performance is being compared across the same tasks, regardless of patient characteristics, local resources, or the scope of services provided, the latter of which may vary between hospitals. For example, communication may differ in both complexity and time intensity: Explaining an antibiotic to the uncomplicated patient with pneumonia differs from prescribing an antibiotic to a patient who is legally blind from optic neuritis, walks with a cane because of multiple sclerosis, and has 24 other prescription medications. Similar challenges exist for differences in local neighborhood resources and for facilities with differing service scope.

Although one strategy to handle these “disparities” in star ratings might be to risk-adjust for social determinants of health, patients may be better served by first rethinking how star ratings are constructed. Clustering hospitals by scope of services provided and geographic region before determining star ratings would give consumers meaningful information by helping patients compare and choose among local or regional hospitals; national quality rankings are of little help to patients.
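One way to operationalize this clustering idea is to rank hospitals only against peers sharing the same region and service scope, rather than pooling all hospitals onto a single national curve. The sketch below uses entirely hypothetical hospitals, strata, and quality scores; it illustrates the grouping logic, not the CMS methodology.

```python
from collections import defaultdict

# Hypothetical hospitals: (name, region, service scope, quality score).
hospitals = [
    ("Community A", "Northeast", "community", 0.78),
    ("Community B", "Northeast", "community", 0.71),
    ("Academic C",  "Northeast", "academic",  0.65),
    ("Academic D",  "Northeast", "academic",  0.62),
]

def rank_within_strata(hospitals):
    """Assign ranks only among hospitals sharing region and scope."""
    strata = defaultdict(list)
    for name, region, scope, score in hospitals:
        strata[(region, scope)].append((name, score))
    ranks = {}
    for stratum, members in strata.items():
        # Rank 1 = highest score within the stratum.
        members.sort(key=lambda m: m[1], reverse=True)
        for rank, (name, _) in enumerate(members, start=1):
            ranks[name] = (stratum, rank)
    return ranks

print(rank_within_strata(hospitals))
```

Under this scheme, Academic C ranks first among Northeast academic centers despite a lower raw score than either community hospital, which is the point: patients compare like with like.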

Arguably one of the most complex and person-dependent service enterprises, care delivery presents unique challenges for evaluation of customer experience and medical quality. Hospital star ratings are no exception: We must rethink their construction so they can be more meaningful for both patients and physicians.

Acknowledgments

The authors would like to acknowledge Daniel J Brotman, MD, for his editorial advice and input.

Disclosures

Dr Miller reported consulting for the Federal Trade Commission and serving as a member of the Centers for Medicare & Medicaid Services Medicare Evidence Development Coverage Advisory Committee. Drs Siddiqui and Deutschendorf have nothing to disclose.


Journal of Hospital Medicine. 2020;15(10):637-638.

Still in its infancy, the Hospital Compare overall hospital quality star rating program introduced by the Centers for Medicare & Medicaid Services (CMS) has generated intense industry debate. Individual health systems are microcosms of the challenges of ratings and measurement design. Sibley Memorial Hospital, a member of Johns Hopkins Medicine with a five-star rating, is a well-run, 288-bed community hospital located in a wealthy section of northwest Washington, DC. In contrast, its academic partner, the Johns Hopkins Hospital, a 1,162-bed hospital with a century-long history of innovation situated in an impoverished Baltimore, Maryland, neighborhood, received a three-star rating.

Hospital ratings are the product of an industry in transition: As care delivery has shifted from an individual provider-driven industry to an increasingly scaled systems enterprise, policymakers implemented regulatory standards targeting quality measurement. Following the Institute of Medicine’s (now the National Academy of Medicine) 1999 report To Err Is Human, policy efforts brought public reporting of quality ratings to multiple market segments, including dialysis facilities (2001), nursing homes (2003), Medicare Advantage plans (2007), and physicians (2015). The hospital industry was no exception, and in 2016—with much controversy1—CMS launched the hospital star ratings program.

CMS Star Ratings for hospitals are based on seven measure groups: mortality, safety, readmission, patient experience, effectiveness, timeliness, and efficient use of medical imaging. Both industry and researchers have decried the shortcomings of star ratings, noting that hospitals with a narrower scope of services are more likely to receive higher ratings.2 Measure groupings may be further flawed, as shown by recent work demonstrating that larger, safety net, or academic hospitals, as well as hospitals offering transplant services, have higher readmission rates,3 which may be driven by differences in patient complexity. Other research has demonstrated that overall quality ratings inappropriately pool all hospitals together, when it may be fairer to first categorize hospitals and then score them within each category.4

It is within this maelstrom of debate that, in this month’s issue of the Journal of Hospital Medicine, Shi and colleagues explore the relationship between hospital star ratings and the socioeconomic features of the surrounding communities.5 Linking multiple reputable government and industry data sources, Shi and colleagues found that counties with higher educational attainment and a lower proportion of dual Medicare-Medicaid–eligible residents had higher hospital star ratings. Furthermore, a county’s minority population percentage correlated negatively with hospital ratings. Validating the experience of many rural hospital executives—who frequently face financial challenges—Shi and colleagues noted that rural hospitals were less likely to receive five-star ratings.

Do these findings reflect a true disparity and lack of access to high-quality hospitals, or are they artifactual—secondary to a flawed construct of hospital quality measurement? Many lower-ranking hospitals are urban academic centers frequently providing services not offered at their five-star community counterparts, such as neurosurgery, comprehensive cancer care, and organ transplants, while simultaneously serving as safety net hospitals, research institutions, trauma centers, and national referral centers.

Sociodemographic factors weigh heavily in patients’ capacity for self-care after hospital discharge. Health literacy, access to primary and behavioral healthcare, and transportation all affect the indicators that feed star ratings. Recent work6 demonstrated that comprehensive investments in transitional care strategies and the social determinants of health were ineffective at reducing readmissions, suggesting that high readmission rates for hospitals in impoverished areas are not only common but also may not accurately reflect hospital quality or local investment.

Patient experience measurement adds further complexity, with research demonstrating that patient perceptions vary significantly by education, age, primary language, ethnicity, and overall health. For example, one-third of average-ranked hospitals would see their rankings shift by at least 18 percentile points if evaluated solely by Spanish-speaking patients. Star ratings fail to capture and communicate this granularity.7

More concerning, star ratings inherently assume that hospital performance is compared across the same tasks, regardless of patient characteristics, local resources, or the scope of services provided, which may vary between hospitals. For example, communication may differ in both complexity and time intensity: Explaining an antibiotic to an uncomplicated patient with pneumonia differs from prescribing an antibiotic to a patient who is legally blind from optic neuritis, walks with a cane because of multiple sclerosis, and takes 24 other prescription medications. Similar challenges exist for differences in local neighborhood resources and for facilities with differing scopes of service.

Although one strategy to address these “disparities” in star ratings might be to risk-adjust for social determinants of health, patients may be better served by first rethinking how star ratings are constructed. Clustering hospitals by scope of services and geographic region before assigning star ratings would give consumers meaningful information, helping patients compare and choose among local or regional hospitals; few patients choose among hospitals on a national basis, so national quality rankings serve them poorly.

Care delivery is arguably one of the most complex and person-dependent service enterprises, and it presents unique challenges for evaluating customer experience and medical quality. Hospital star ratings are no exception: We must rethink their construction so they become more meaningful for both patients and physicians.

Acknowledgments

The authors would like to acknowledge Daniel J Brotman, MD, for his editorial advice and input.

Disclosures

Dr Miller reported consulting for the Federal Trade Commission and serving as a member of the Centers for Medicare & Medicaid Services Medicare Evidence Development Coverage Advisory Committee. Drs Siddiqui and Deutschendorf have nothing to disclose.


References

1. Whitman E. CMS releases star ratings for hospitals. Modern Healthcare. July 27, 2016. Accessed April 27, 2020. https://www.modernhealthcare.com/article/20160727/NEWS/160729910/cms-releases-star-ratings-for-hospitals
2. Siddiqui ZK, Abusamaan M, Bertram A, et al. Comparison of services available in 5-star and non-5-star patient experience hospitals. JAMA Intern Med. 2019;179(10):1429-1430. https://doi.org/10.1001/jamainternmed.2019.1285
3. Hoyer EH, Padula WV, Brotman DJ, et al. Patterns of hospital performance on the hospital-wide 30-day readmission metric: is the playing field level? J Gen Intern Med. 2018;33(1):57-64. https://doi.org/10.1007/s11606-017-4193-9
4. Chung JW, Dahlke AR, Barnard C, DeLancey JO, Merkow RP, Bilimoria KY. The Centers for Medicare and Medicaid Services hospital ratings: pitfalls of grading on a single curve. Health Aff (Millwood). 2019;38(9):1523-1529. https://doi.org/10.1377/hlthaff.2018.05345
5. Shi B, King C, Huang SS. Relationship of hospital star ratings to race, education, and community income. J Hosp Med. 2020;15:588-593. https://doi.org/10.12788/jhm.3393
6. Finkelstein A, Zhou A, Taubman S, Doyle J. Health care hotspotting—a randomized controlled trial. N Engl J Med. 2020;382:152-162. https://doi.org/10.1056/NEJMsa1906848
7. Elliott MN, Lehrman WG, Goldstein E, Hambarsoomian K, Beckett MK, Giordano LA. Do hospitals rank differently on HCAHPS for different patient subgroups? Med Care Res Rev. 2010;67(1):56-73. https://doi.org/10.1177/1077558709339066


© 2020 Society of Hospital Medicine

Correspondence: Brian J Miller, MD, MBA, MPH; Email: bmille78@jhmi.edu; Telephone: 410-614-4474; Twitter: @4_BetterHealth.