Win Whitcomb, MD

Choosing location after discharge wisely

A novel, important skill for the inpatient team

 

Of all the care decisions we make during a hospital stay, perhaps the one with the biggest implications for cost and quality is the one determining the location to which we send the patient after discharge.

Yet ironically, we haven’t typically participated in this decision, but instead have left it up to case managers and others to work with patients to determine discharge location. This is a missed opportunity, as patients first look to their doctor for guidance on this decision. Absent such guidance, they turn to other care team members for the conversation. With a principal focus on hospital length of stay, we have prioritized when patients are ready to leave over where they go after they leave.

Discharge location has a large impact on quality and cost. The hazards of going to a postacute facility are similar to the hazards of hospitalization – delirium, falls, infection, and deconditioning are well-documented adverse effects. We may invoke the argument that, all things being equal, a facility is safer than home. Yet, there is scant evidence supporting this assertion. At the same time, when contemplating a home discharge, a capable caregiver is often in short supply, and patients requiring assistance may have few options but to go to a facility.

In terms of cost during hospitalization and for the 30 days after discharge, for common conditions such as pneumonia, heart failure, COPD, or major joint replacement, Medicare spends nearly as much on postacute care – home health, skilled nursing facilities, inpatient rehabilitation, and long-term acute care hospitals – as it does on hospital care.1 Further, an Institute of Medicine analysis showed that geographic variation in postacute care spending accounts for three-quarters of all variation in Medicare spending.2 Such variation raises questions about the rigor with which hospital teams make postacute care decisions.

Perhaps most striking of all, hospitalist care (versus that of traditional primary care providers) has been associated with excess discharge rates to skilled nursing facilities, and savings that accrue under hospitalists during hospitalization are more than outweighed by spending on care during the postacute period.3

All of this leads me to my point: Hospitalists and inpatient teams need a defined process for selecting the most appropriate discharge location. Ideally, this is the least restrictive setting suitable for the patient’s needs. In the box below, I propose a framework for the process. The domains in the box should be evaluated and discussed by the team, with early input and final approval from the patient and caregiver(s). They are not intended to be an exhaustive list, but rather to serve as the basis for discussion during discharge team rounds.

Identifying patient factors informing an optimal discharge location may represent a new skill set for many hospitalists and underscores the value of collaboration with team members who can provide needed information. In April, the Society of Hospital Medicine published the Revised Core Competencies in Hospital Medicine. In the Care of the Older Patient section, the authors state that hospitalists should be able to “describe postacute care options that can enable older patients to regain functional capacity.”4 Inherent in this competency is an understanding of not only patient factors in postacute care location decisions, but also the differing capabilities of home health agencies, skilled nursing facilities, inpatient rehabilitation facilities, and long-term acute care hospitals.
 

Dr. Whitcomb is chief medical officer at Remedy Partners in Darien, Conn., and cofounder and past president of the Society of Hospital Medicine. Contact him at wfwhit@comcast.net.

References

1. Mechanic R. Post-acute care – the next frontier for controlling Medicare spending. N Engl J Med. 2014;370:692-4.

2. Newhouse JP, et al. Geographic variation in Medicare services. N Engl J Med. 2013;368:1465-8.

3. Kuo YF, et al. Association of hospitalist care with medical utilization after discharge: evidence of cost shift from a cohort study. Ann Intern Med. 2011;155(3):152-9.

4. Nichani S, et al. Core Competencies in Hospital Medicine 2017 Revision. Section 3: Healthcare Systems. J Hosp Med. 2017 April;12(1):S55-S82.
 

Framework for Selecting Appropriate Discharge Location

Patient Independence

  • Can the patient perform activities of daily living?
  • Can the patient ambulate?
  • Is there cognitive impairment?

Caregiver Availability

  • If the patient needs a caregiver, is one available who is capable and reliable? If so, to what extent?

Therapy Needs

  • Does the patient require PT, OT, and/or ST?
  • How much and for how long?
 

 

Skilled Nursing Needs

  • What, if anything, does the patient require in this area? For example, a new PEG tube, wound care, or IV therapy.

Social Factors

  • Is there access to transportation, food, and safe housing?

Home Factors

  • Are there stairs to enter the house or to get to the bedroom or bathroom?
  • Has the home been modified to accommodate special needs? Is the home habitable?
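For teams that want to fold the framework above into a rounding checklist or EHR tool, the sketch below shows one way to represent the domains as a structured assessment. It is a minimal illustration only; the field names, decision logic, and discharge options are assumptions made for the example, not a validated instrument or the author’s algorithm.

```python
from dataclasses import dataclass

@dataclass
class DischargeAssessment:
    """Illustrative structure mirroring the framework above (field names are assumptions)."""
    performs_adls: bool            # Patient Independence
    ambulates: bool
    cognitively_impaired: bool
    caregiver_available: bool      # Caregiver Availability
    needs_therapy: bool            # Therapy Needs (PT, OT, and/or ST)
    skilled_nursing_needs: bool    # e.g., new PEG tube, wound care, IV therapy
    safe_home_and_supports: bool   # Social and Home Factors

def least_restrictive_option(a: DischargeAssessment) -> str:
    """Toy logic for 'least restrictive location suitable for the patient's needs';
    real decisions require team discussion and patient/caregiver approval."""
    if a.skilled_nursing_needs and not (a.caregiver_available and a.safe_home_and_supports):
        return "skilled nursing facility"
    if a.needs_therapy or a.skilled_nursing_needs:
        return "home with home health services"
    if a.cognitively_impaired and not a.caregiver_available:
        return "facility-based care pending further evaluation"
    return "home (self-care)"

print(least_restrictive_option(DischargeAssessment(
    performs_adls=True, ambulates=True, cognitively_impaired=False,
    caregiver_available=True, needs_therapy=True,
    skilled_nursing_needs=False, safe_home_and_supports=True)))
# -> home with home health services
```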

Hospital value-based purchasing is largely ineffective

How should pay for performance for hospitalists change as a result?

 

Over the last 5 years, I’ve periodically devoted this column to providing updates on the Hospital Value-Based Purchasing (HVBP) program. HVBP launched in 2013 as a 5-year mixed upside/downside incentive program with mandatory participation for all U.S. acute care hospitals (critical access, acute inpatient rehabilitation, and long-term acute care hospitals are exempt). The program initially included process and patient experience measures. It later added measures for mortality, efficiency, and patient safety.

For the 2017 version of HVBP, the measures are allocated as follows: eight for patient experience, seven for patient safety (one of which is a roll-up of 11 claims-based measures), three for process, and three for mortality. HVBP uses a budget-neutral funding approach, with some winners and some losers but net zero spending on the program overall. It initially put hospitals at risk for 1% of their Medicare inpatient payments (in 2013), with a progressive increase to 2% by this year. HVBP has used a complex approach to determining incentives and penalties, rewarding either improvement or achievement, depending on the baseline performance of the hospital.
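To make the “improvement or achievement” mechanism concrete, here is a simplified sketch of how a single measure could be scored. The point caps and linear interpolation are assumptions for illustration; they approximate the spirit of the HVBP methodology, not the actual CMS formula.

```python
def measure_points(score: float, baseline: float,
                   threshold: float, benchmark: float) -> float:
    """Award the greater of achievement points (position between the national
    achievement threshold and benchmark) and improvement points (progress from
    the hospital's own baseline toward the benchmark). Simplified illustration."""
    def interpolate(low: float, high: float, cap: float) -> float:
        if score <= low:
            return 0.0
        if score >= high:
            return cap
        return cap * (score - low) / (high - low)

    achievement = interpolate(threshold, benchmark, cap=10.0)
    improvement = interpolate(baseline, benchmark, cap=9.0)
    return max(achievement, improvement)

# A hospital below the national threshold still earns credit for improving on itself:
print(measure_points(score=0.80, baseline=0.70, threshold=0.85, benchmark=0.95))  # ~3.6
```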

When HVBP was rolled out, it seemed like a big deal. Hospitals devoted resources to it. I contended that hospitalists should pay attention to its measures and work with their hospital quality department to promote high performance in the relevant measure domains. I emphasized that the program was good for hospitalists because it put dollars behind the quality improvement projects we had been working on for some time – projects to improve HCAHPS scores; lower mortality; improve heart failure, heart attack, or pneumonia processes; and decrease hospital-acquired infections. For some perspective on the dollars at stake, by this year a 700-bed hospital has about $3.4 million at risk in the program, and a 90-bed hospital has roughly $250,000 at risk.
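A back-of-the-envelope check shows where numbers of that magnitude come from: with 2% of Medicare inpatient payments withheld, the dollars at risk scale directly with a hospital’s Medicare inpatient revenue. The revenue figures below are assumptions chosen to reproduce the estimates above, not actual hospital financials.

```python
# Illustrative only: annual Medicare inpatient payments are assumed values.
at_risk_rate = 0.02  # 2% withheld under HVBP by 2017
assumed_medicare_inpatient_payments = {
    "700-bed hospital": 170_000_000,
    "90-bed hospital": 12_500_000,
}
for hospital, payments in assumed_medicare_inpatient_payments.items():
    print(f"{hospital}: ${payments * at_risk_rate:,.0f} at risk")
# 700-bed hospital: $3,400,000 at risk
# 90-bed hospital: $250,000 at risk
```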

Has HVBP improved quality? Two studies looking at the early period of HVBP failed to show improvements in process or patient experience measures and demonstrated no change in mortality for heart failure, pneumonia, or heart attack.1,2 Now that the program is in its fifth and final year, thanks to a recent study by Ryan et al., we have an idea of whether HVBP is associated with longer-term improvements in quality.3

In the study, Ryan et al. compared hospitals participating in HVBP with critical access hospitals, which are exempt from the program. The study yielded some disappointing, if not surprising, results. Improvements in process and patient experience measures for HVBP hospitals were no greater than those for the control group. HVBP was not associated with a significant reduction in mortality for heart failure or heart attack, but was associated with a mortality reduction for pneumonia. In sum, HVBP was not associated with improvements in process or patient experience, and was not associated with lower mortality, except in pneumonia.

As a program designed to incentivize better quality, where did HVBP go wrong? I believe HVBP simply had too many measures for the cognitive bandwidth of an individual or a team looking to improve quality. The total measure count for 2017 is 21! I submit that a hospitalist working to improve quality can keep one or two measures top of mind, possibly three at most. While others have postulated that the dollars at risk are too small, I don’t think that’s the problem. Instead, my sense is that hospitalists and other members of the hospital team have quality improvement in their DNA and, regardless of the size of the financial incentives, will work to improve it as long as they have the right tools. Chief among these are good performance data and the time to focus on a finite number of projects.

What lessons can inform better design in the future? As of January 2017, the Medicare Access and CHIP Reauthorization Act of 2015 (MACRA) – representing the biggest change in reimbursement in a generation – progressively exposes doctors and other professionals to upside/downside incentives for quality, resource utilization, use of a certified electronic health record (hospitalists are exempt as they already use the hospital’s EHR), and practice improvement activities.

It would be wise to learn from the shortcomings of HVBP. Namely, if MACRA keeps on its course to incentivize physicians using a complicated formula based on four domains and many more subdomains, it will repeat the mistakes of HVBP and – while creating more administrative burden – likely improve quality very little, if at all. Instead, MACRA should delineate a simple measure set representing improvement activities that physicians and teams can incorporate into their regular work flow without more time taken away from patient care.

The reality is that complicated pay-for-performance programs divert limited available resources away from meaningful improvement activities in order to comply with onerous reporting requirements. As we gain a more nuanced understanding of how these programs work, policy makers should pay attention to the elements of “low-value” and “high-value” incentive systems and apply the “less is more” ethos of high-value care to the next generation of pay-for-performance programs.
 

 

 

Dr. Whitcomb is chief medical officer at Remedy Partners in Darien, Conn., and a cofounder and past president of SHM.

References

1. Ryan AM, Burgess JF, Pesko MF, Borden WB, Dimick JB. The early effects of Medicare’s mandatory hospital pay-for-performance program. Health Serv Res. 2015;50:81-97.

2. Figueroa JF, Tsugawa Y, Zheng J, Orav EJ, Jha AK. Association between the Value-Based Purchasing pay for performance program and patient mortality in US hospitals: observational study. BMJ. 2016;353:i2214.

3. Ryan AM, Krinsky S, Maurer KA, Dimick JB. Changes in hospital quality associated with hospital value-based purchasing. N Engl J Med. 2017;376:2358-66.


Will artificial intelligence make us better doctors?

Gating factors: Data availability, signal, noise.

 

Given the amount of time physicians spend entering data, clicking through screens, navigating pages, and logging in to computers, one would have hoped that substantial near-term payback for such efforts would have materialized.

Many of us believed this would take the form of health information exchange – the ability to easily access clinical information from hospitals or clinics other than our own, creating a more complete picture of the patient before us. To our disappointment, true information exchange has yet to materialize. (We won’t debate here whether politics or technology is culpable.) We are left to look elsewhere for the benefits of the digitization of medical records and other sources of health care knowledge.

Lately, there has been a lot of talk about the promise of machine learning and artificial intelligence (AI) in health care. Much of the resurgence of interest in AI can be traced to IBM Watson’s appearance as a contestant on Jeopardy in 2011. Watson, a natural language supercomputer with enough power to process the equivalent of a million books per second, had access to 200 million pages of content, including the full text of Wikipedia, for Jeopardy.1 Watson handily outperformed its human opponents – two Jeopardy savants who were also the most successful contestants in game show history – taking the $1 million first prize but struggling in categories with clues containing only a few words.
 

MD Anderson and Watson: Dashed hopes follow initial promise

As a result of growing recognition of AI’s potential in health care, IBM began collaborations with a number of health care organizations to deploy Watson.

In 2013, MD Anderson Cancer Center and IBM began a pilot to develop an oncology clinical decision support tool powered by Watson to aid MD Anderson “in its mission to eradicate cancer.” Recently, it was announced that the project – which cost the cancer center $62 million – has been put on hold, and MD Anderson is looking for other contractors to replace IBM.

While administrative problems are at least partly responsible for the project’s challenges, the undertaking has raised issues with the quality and quantity of data in health care that call into question the ability of AI to work as well in health care as it did on Jeopardy, at least in the short term.
 

Health care: Not as data rich as you might think

“We are not ‘Big Data’ in health care, yet.” – Dale Sanders, Health Catalyst.2

In its quest for Jeopardy victory, Watson accessed a massive data storehouse subsuming a vast array of knowledge assembled over the course of human history. For health care, by contrast, Watson is limited to a few decades of scientific journals (which may not contribute to diagnosis and treatment as much as one might think), claims data geared to billing without much clinical information such as outcomes, clinical data from progress notes (plagued by inaccuracies, serial “copy and paste,” and nonstandardized language and numeric representations), and variable-format reports from lab, radiology, pathology, and other disciplines.

To articulate how data-poor health care is, Dale Sanders, executive vice president for software at Health Catalyst, notes that a Boeing 787 generates 500 GB of data in a six-hour flight, while one patient may generate just 100 MB of data in an entire year.2 He points out that, in the near term, AI platforms like Watson simply do not have enough data substrate to impact health care as many hoped they would. Over the longer term, he says, if health care can develop a coherent, standard approach to data content, AI may fulfill its promise.


 

What can AI and related technologies achieve in the near term?

“AI seems to have replaced Uber as the most overused word or phrase in digital health.” – Reporter Stephanie Baum, paraphrasing from an interview with Bob Kocher, Venrock Partners.3

My observations tell me that we have already made some progress and are likely to make more strides in the coming years, thanks to AI, machine learning, and natural language processing. A few areas of potential gain are:

Clinical documentation

Technology that can derive meaning from words or groups of words can help with more accurate clinical documentation. For example, if a patient has a documented UTI but also has in the record an 11 on the Glasgow Coma Scale, a systolic BP of 90, and a respiratory rate of 24, technology can alert the physician to document sepsis.
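A rule like the one described could be expressed in a few lines. The sketch below uses thresholds resembling the qSOFA criteria; the function name and cutoffs are assumptions for illustration, not a validated sepsis screen or any particular vendor’s logic.

```python
def flag_possible_sepsis(documented_infection: bool, gcs: int,
                         systolic_bp: int, respiratory_rate: int) -> bool:
    """Prompt the clinician to consider documenting sepsis when an infection is
    documented and at least two warning signs are present (illustrative rule only)."""
    warning_signs = sum([
        gcs < 15,               # altered mentation
        systolic_bp <= 100,     # low systolic blood pressure
        respiratory_rate >= 22, # tachypnea
    ])
    return documented_infection and warning_signs >= 2

# The column's example: documented UTI, GCS 11, SBP 90, RR 24 -> the alert fires.
print(flag_possible_sepsis(True, gcs=11, systolic_bp=90, respiratory_rate=24))  # True
```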

Quality measurement and reporting

Similarly, if technology can recognize words and numbers, it may be able to extract and report quality measures (for example, an ejection fraction of 35% in a heart failure patient) from progress notes without having a nurse-abstractor manually enter such data into structured fields for reporting, as is currently the case.
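Even a simple text pattern illustrates the idea of pulling a structured measure out of free-text notes; production systems would use far more robust natural language processing. The pattern and sample note below are invented for the example.

```python
import re
from typing import Optional

# Matches phrasings such as "ejection fraction of 35%" or "EF 35%" (illustrative pattern only).
EF_PATTERN = re.compile(r"\b(?:ejection fraction|EF)\b[^0-9%]{0,15}(\d{1,2})\s*%", re.IGNORECASE)

def extract_ejection_fraction(note_text: str) -> Optional[int]:
    """Return the first ejection fraction mentioned in a progress note, or None."""
    match = EF_PATTERN.search(note_text)
    return int(match.group(1)) if match else None

note = "Heart failure exacerbation; echo this admission shows an ejection fraction of 35%."
print(extract_ejection_fraction(note))  # 35
```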

 

 

Predicting readmissions, mortality, and other events

While machine learning has had mixed results in predicting future clinical events, this is likely to change as data integrity and algorithms improve. Best-of-breed technology will probably use both clinical and machine learning tools for predictive purposes in the future.
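As a flavor of what “machine learning tools for predictive purposes” can look like, the sketch below fits a logistic regression to a handful of made-up discharges. Every feature, label, and value here is invented for illustration; a real readmission model requires validated data, many more variables, and careful calibration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per discharge: [age, admissions in prior year,
# active medications, discharged to a facility (1) vs. home (0)]
X = np.array([[82, 3, 14, 1], [55, 0, 4, 0], [74, 2, 9, 1], [61, 1, 6, 0],
              [79, 4, 12, 1], [48, 0, 3, 0], [68, 2, 8, 0], [85, 3, 11, 1]])
y = np.array([1, 0, 1, 0, 1, 0, 0, 1])  # 30-day readmission (toy labels)

model = LogisticRegression(max_iter=1000).fit(X, y)
new_discharge = np.array([[77, 2, 10, 1]])
print(f"Estimated readmission risk: {model.predict_proba(new_discharge)[0, 1]:.2f}")
```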

In 2015, I had the privilege of meeting Vinod Khosla, venture capitalist and cofounder of Sun Microsystems, who predicts that computers will largely supplant physicians in the future, at least in domains relying on access to data. As he puts it, “the core functions necessary for complex diagnoses, treatments, and monitoring will be driven by machine judgment instead of human judgment.”4

While the benefits of technology, especially in health care, are often oversold, I believe AI and related technologies will someday play a large role alongside physicians in the care of patients. However, for AI to deliver, we must first figure out how to collect and organize health care data so that computers can ingest, digest, and use it in a purposeful way.

Note: Dr. Whitcomb is the founder of and an advisor to Zato Health, which uses natural language processing and discovery technology in health care.

He is chief medical officer at Remedy Partners in Darien, Conn., and a cofounder and past president of SHM.

References

1. Zimmer B. Is it time to welcome our new computer overlords? The Atlantic. https://www.theatlantic.com/technology/archive/2011/02/is-it-time-to-welcome-our-new-computer-overlords/71388/. Accessed 23 Apr 2017.

2. Sanders D. The MD Anderson / IBM Watson announcement: What does it mean for machine learning in healthcare? Webinar. https://www.slideshare.net/healthcatalyst1/the-md-anderson-ibm-watson-announcement-what-does-it-mean-for-machine-learning-in-healthcare. Accessed 23 Apr 2017.

3. Baum S. Venrock survey predicts a flight to quality for digital health investments. MedCity News. 12 Apr 2017. http://medcitynews.com/2017/04/venrock-survey-predicts-flight-quality-digital-health-investment/. Accessed 22 Apr 2017.

4. Khosla V. The reinvention of medicine: Dr. Algorithm v0-7 and beyond. TechCrunch. 22 Sept 2014. https://techcrunch.com/2014/09/22/the-reinvention-of-medicine-dr-algorithm-version-0-7-and-beyond/. Accessed 22 Apr 2017.
