Acknowledging striking deficiencies in the quality and safety of healthcare, the Institute of Medicine, policy makers, and payors have called for transformation of the US healthcare system.1 Public reporting of hospital performance is one key strategy for accelerating improvement2 and may improve quality in several ways. First, feedback about performance relative to peers may stimulate quality improvement activities by appealing to professionalism. Second, the desire to preserve one's reputation by not appearing on a list of poor performers may be a powerful incentive. Finally, patients and referring providers could use reports to select high‐quality hospitals, thereby shifting care from low‐quality to high‐quality hospitals and stimulating quality improvement efforts to maintain or enhance market share.
Almost 20 years after New York and Pennsylvania began reporting cardiac surgery outcomes,3 the evidence that public reporting improves healthcare quality is equivocal.4 Moreover, stakeholders have embraced public reporting to differing degrees. Public reporting does lead to greater engagement in quality improvement activities,5–8 and additional financial incentives provide modest incremental benefits.9 Purchasers, too, are starting to pay attention.10 In New York State, payors appear to contract more with high‐quality surgeons and avoid poorly performing outliers.11 Some payors are creating tiered systems, assigning higher patient copayments for hospitals with poor quality metrics. These new systems have not been rigorously studied and should raise concern among hospitals.12
In contrast to hospitals and payors, patients have been slow to embrace public reporting. In a survey of coronary artery bypass graft (CABG) patients in Pennsylvania, only 2% said that public reporting of mortality rates affected their decision making.13 Eight years later, only 11% of patients sought information about hospitals before deciding on elective major surgery,14 although a majority of patients in both studies expressed interest in the information. It is not clear whether the recent proliferation of information on the internet will change patient behavior, but to date public reporting appears not to affect market share.5, 15, 16
Barriers to patients' use of public reporting include difficulty accessing the information, lack of trust, information that is not salient, and data that are difficult to interpret.17 In the absence of consensus on what or how to report, a growing number of organizations, including state and federal governments, accrediting bodies, private foundations, and for‐profit companies, report a variety of measures relating to structure, processes, and outcomes. Although these sites purport to target consumers, they sometimes offer conflicting information18 and are not easily interpreted by lay readers.19
To realize the benefits of public reporting, and minimize the unintended consequences, rating systems must report salient information in a way that is comprehensible to patients and trusted by the doctors who advise them. At the same time, they should be fair to hospitals and offer useful data for quality improvement. We offer 10 recommendations for improving the public reporting of healthcare quality information: 5 describing what to report and 5 detailing how it should be reported (Figure 1). We also examine 3 leading performance reporting programs to see how well they implement these recommendations.
Recommendations to Make Data Salient for Patients
1. Prioritize Elective Procedures
Hospital quality is not uniform across conditions.2 For data to be salient, then, they should be disease‐specific and focus on common elective procedures, for which consumer choice is possible. Table 1 compares 3 popular reporting services. Hospital Compare, produced by the Centers for Medicare and Medicaid Services (CMS, US Department of Health and Human Services, Washington, DC), provides process of care measures for 4 conditions, 3 of which are not elective. The fourth, surgical infection prevention, contains 5 measures for all surgical cases: 3 related to perioperative antibiotics and 2 related to thromboembolism prophylaxis. Recently, more conditions have been added, but reports are limited to the number of cases and mean Medicare charge. By 2011, however, Hospital Compare will offer many new measures, including rates of central line infection, ventilator‐associated pneumonia, and surgical site infection. HealthGrades, a private company, offers comparative mortality rates on over 30 diagnoses, of which 15 can be considered elective, at least some of the time. Only the Leapfrog Group, an industry consortium, focuses exclusively on elective procedures, offering volume measures on 7 and outcome measures on 2.
Table 1. Comparison of 3 Popular Reporting Services

| Rule | Hospital Compare | HealthGrades | Leapfrog |
| --- | --- | --- | --- |
| 1. Prioritize elective procedures | Yes (22/28 at least partially elective) | Yes (15/31 at least partially elective) | Yes (7/8 elective) |
| 2. Include quality of life and outcome data, if possible | Yes (mortality for AMI and CHF) | Yes (mortality or complications*) | Yes (outcomes for CABG, PCI, and AVR) |
| 3. Include standardized patient satisfaction and service measures | Yes (HCAHPS) | No | No |
| 4. Offer composite measures that are weighted and evidence‐based | No | No (Specialty Excellence Award, not evidence‐based) | No |
| 5. Cost comparisons should include patient prices | Yes (average Medicare payment) | Yes (charges, health plan and Medicare costs available for a fee) | No |
| 6. Adjust outcomes for severity and risk | Yes (methodology published on website) | Yes (methodology not public) | Yes (various methodologies published or referenced on website) |
| 7. Identify differences not due to chance | Yes (compares mortality to national mean) | Yes (compares mortality or complications to mean) | Yes (compares mortality to national mean) |
| 8. Standardize reporting periods | October 2005 to September 2006 | 2004–2006 | 12–24 months, ending 12/31/07 or 6/30/08 |
| 9. Avoid use of nonvalidated administrative data | Yes (none used) | No (uses PSIs for safety rating) | Yes (none used) |
| 10. Utilization rates should be evidence‐based | No (surgical case volume of Medicare patients) | No (includes Cesarean‐section rates) | Yes (some case volume rates are evidence‐based) |
2. Include Quality of Life and Outcome Data
Outcomes are more valuable to patients than process measures, but the risk adjustment needed to compare outcomes requires considerable effort. So far, public reporting of risk‐adjusted outcomes has been limited almost exclusively to mortality. Yet a patient contemplating knee replacement surgery would find no meaningful difference in mortality (there were only 510 deaths nationally in 2006)20 but might be interested in whether patients return to full mobility after surgery, and all patients should compare rates of nosocomial infections. For some low‐risk procedures, HealthGrades Inc. (Golden, CO) includes a composite measure of major complications, including complication of an orthopedic implant, stroke, cardiac arrest, excessive bleeding, and some types of infection; CMS will soon add rates of infection and readmission.
3. Include Measures of Patient Experience, Such as Satisfaction and Service Measures
Beyond outcomes, patients want to know about the experience of others.21 Satisfaction surveys should be standardized and made disease‐specific, since patients' experiences may differ between the cardiology suite and the delivery unit. Questions could address the attentiveness of the nursing staff, how well privacy was respected, how easy it was to deal with insurance issues, whether patients were promptly informed of test results, and whether the care team answered questions fully. Medicare has begun reporting patient satisfaction on Hospital Compare using the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey, but the data are not disease‐specific and sample only a small number of patients from each institution. Other services are unlikely to perform their own surveys, as multiple surveys would prove burdensome. Social networking sites that allow patients to post their own personal reviews of hospitals and doctors offer an additional if less reliable dimension to traditional public reporting. Such sites are already transforming the market for other industries, such as travel.22
4. Offer Composite Measures That Are Weighted and Evidence‐Based
Interpreting multiple measures, some of which are more important than others, and some of which have better evidence than others, is difficult for health care providers and may be impossible for patients. Is it more important to get aspirin on arrival or at discharge? How does a patient weigh a 1% difference in the percentage of heart attack patients who get aspirin on arrival against a 14% difference in those who are offered smoking cessation? Because patients may be overwhelmed by data,23 public reports should include evidence‐based, weighted measures of overall care for a given condition, with higher weights attached to those process measures most likely to have clinical benefit, and careful attention to visual representations that convey relative differences.19, 23 More sophisticated measures should be developed to guard against overuse. For example, while hospitals should be rewarded for providing vaccination, they should be penalized for vaccinating the same patient twice.
None of the services we examined provides weighted composites. Leapfrog (The Leapfrog Group, Washington, DC) offers a composite snapshot containing 9 pie charts, divided into 4 leaps. The 6 pies representing high‐risk procedures are of equal size, even though 2 of these, esophagectomy and pancreatic resection, represent very rare surgeries, even at major medical centers. Visually, however, these rare procedures appear equivalent to having computerized physician order entry and full‐time intensive care unit staffing, which affect thousands more patients. Similarly, in determining pay‐for‐performance measures, CMS created a composite based on the total number of opportunities across all interventions, weighting all measures equally. Because no validated weighting schemes exist, future research will be necessary to achieve this goal. Also, none of the evidence‐based measures contained safeguards against overtreatment.
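As a sketch of what we have in mind, the fragment below contrasts a benefit‐weighted composite with an equal‐opportunity composite of the kind CMS uses. All measure names, adherence rates, weights, and denominators are hypothetical; validated benefit weights do not yet exist.

```python
# A minimal sketch of a benefit-weighted composite vs. an equal-opportunity
# composite. All rates, weights, and denominators below are invented.

# Process measures for one condition: name -> (adherence rate, benefit weight).
measures = {
    "aspirin_on_arrival":   (0.99, 0.40),
    "aspirin_at_discharge": (0.98, 0.35),
    "smoking_cessation":    (0.85, 0.25),
}

# Eligible patients per measure (the "opportunities" an equal-weight model counts).
denominators = {
    "aspirin_on_arrival":   400,
    "aspirin_at_discharge": 380,
    "smoking_cessation":    120,
}

def weighted_composite(measures):
    """Adherence averaged by estimated clinical benefit."""
    total = sum(w for _, w in measures.values())
    return sum(rate * w for rate, w in measures.values()) / total

def opportunity_composite(measures, denominators):
    """Adherence averaged by eligible patients, weighting all measures equally."""
    total = sum(denominators.values())
    return sum(rate * denominators[name]
               for name, (rate, _) in measures.items()) / total

print(f"Benefit-weighted:  {weighted_composite(measures):.3f}")
print(f"Opportunity-based: {opportunity_composite(measures, denominators):.3f}")
```

The two composites diverge whenever a high‐volume, lower‐benefit measure dominates the opportunity count, which is precisely why the choice of weights matters for rankings.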
5. Cost Comparisons Should Include Patient Prices
In an era of patient copayments and deductibles, consumers are increasingly aware of costs. For patients with high‐deductible plans or no health insurance, hospital fees are a common cause of bankruptcy.24 Several public reporting services, including Hospital Compare and HealthGrades, have incorporated Medicare costs into their reported measures, but these have little connection to what patients actually pay. Health sites aimed at consumers should publish the average patient copayment.
Recommendations to Ensure That Data Reflect Hospital Quality
6. Adjust Outcomes for Severity and Risk
Not all bypass operations are the same, and not all patients are at equal risk. More difficult operations (eg, CABG for a patient with a previous bypass) will have more complications; similarly, patients with serious comorbidities will experience worse outcomes. Because hospitals that specialize in a procedure attract complicated cases and higher‐risk patients, it is important to adjust outcomes to account for these differences. Otherwise, hospitals and surgeons may be discouraged from taking difficult cases. Outside of cardiac surgery, most risk adjustment systems use administrative claims data but vary dramatically in the number of variables considered and in their underlying proprietary models, which are often criticized as black boxes that yield discordant results.25 Thus, a hospital's mortality may appear below expected by 1 system and above expected by another. Instead, risk adjustment systems should include clinical data abstracted from patient records using standardized data definitions. Although costly to collect, clinical data offer more predictive information than do administrative data. For example, for heart failure patients undergoing CABG, the ejection fraction predicts mortality better than many stable comorbid diagnoses. A single transparent risk‐adjustment system should be recognized as the industry standard. The American College of Surgeons' standardized risk‐adjusted outcome reporting for the National Surgical Quality Improvement Program (NSQIP) is a good example of such an effort.
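To make the mechanics concrete, here is a minimal sketch of clinical risk adjustment: a logistic model converts clinical variables into a predicted mortality risk for each patient, and the hospital is then judged on its observed‐to‐expected (O/E) ratio. The coefficients, variables, and patients below are invented for illustration and are not taken from any published model.

```python
import math

# Hypothetical logistic model: intercept, age (per decade), ejection fraction (per %).
COEF = {"intercept": -4.0, "age_per_decade": 0.40, "ef_per_point": -0.03}

def predicted_risk(age, ejection_fraction):
    """Per-patient mortality probability from the (made-up) logistic model."""
    z = (COEF["intercept"]
         + COEF["age_per_decade"] * (age / 10)
         + COEF["ef_per_point"] * ejection_fraction)
    return 1.0 / (1.0 + math.exp(-z))

# Toy case mix: (age, ejection fraction %, died).
patients = [
    (74, 25, True),   # older patient with poor ejection fraction: highest predicted risk
    (61, 55, False),
    (80, 35, False),
    (68, 45, False),
    (72, 50, False),
]

expected = sum(predicted_risk(age, ef) for age, ef, _ in patients)
observed = sum(1 for *_, died in patients if died)
print(f"Observed {observed}, expected {expected:.2f}, O/E = {observed / expected:.2f}")
# O/E > 1 suggests worse-than-predicted outcomes, but whether the gap
# exceeds chance is a separate question (see recommendation 7).
```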
7. Identify Differences Not Due to Chance
As a result of random variation, during any period, some hospitals will appear better than average and others worse. Statistical tests should be employed to identify hospitals that differ from the mean, and to allow consumers to compare 2 hospitals directly, with appropriate caveats when the hospitals serve very different patient populations. Medicare's mortality rating system for myocardial infarction identifies only 17 hospitals in the nation as better than average and 7 as worse, out of 4,500 institutions. HealthGrades compares hospitals' actual mortality or complication rates to their predicted rates based on disease‐specific logistic regression models and reports whether the hospital is statistically better or worse than predicted. Hospitals are not compared directly to one another. Given the rarity of mortality in most procedures, other outcome measures will be necessary to distinguish among hospitals.26
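A sketch of the arithmetic behind such tests, with invented numbers, shows why a hospital's mortality must diverge substantially from the national mean before chance can be excluded:

```python
from math import comb

# An exact two-sided binomial test of observed deaths against the national
# rate, doubling the smaller tail probability (one common convention).
# The hospital volume, death count, and national rate below are hypothetical.

def binomial_two_sided_p(deaths, cases, national_rate):
    """P-value for observed deaths under Binomial(cases, national_rate)."""
    def pmf(k):
        return comb(cases, k) * national_rate**k * (1 - national_rate)**(cases - k)
    lower = sum(pmf(k) for k in range(deaths + 1))          # P(X <= deaths)
    upper = sum(pmf(k) for k in range(deaths, cases + 1))   # P(X >= deaths)
    return min(1.0, 2 * min(lower, upper))

# 9 deaths in 300 cases against a 2% national rate (6 expected deaths):
print(f"p = {binomial_two_sided_p(9, 300, 0.02):.3f}")
# The result is far above 0.05, which is one reason so few of the nation's
# hospitals can be flagged as true outliers on mortality alone.
```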
8. Standardize Reporting Periods
In a world of continuous quality improvement, public reporting should reflect a hospital's recent performance, but reporting periods also need to be long enough to provide a stable estimate of infrequent events, especially at low‐volume institutions. At the same time, the lag between the end of the reporting period and public availability should be kept to a minimum. We found that reporting periods varied from 1 to 3 years and did not always cover the same years for all conditions, even on the same website. Some data were 3 years old. Patients will have a hard time making decisions on data that are even 1 year old, and hospitals will have little incentive to make improvements that will not be acknowledged for years.
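A back‐of‐the‐envelope calculation illustrates the tension, assuming a 2% underlying mortality rate and hypothetical hospital volumes:

```python
from math import sqrt

# 95% confidence interval (normal approximation) around an observed mortality
# rate at different case counts; rate and volumes are assumptions for illustration.

RATE = 0.02  # assumed underlying mortality rate

for label, n in [("50 cases/yr, 1 yr", 50),
                 ("50 cases/yr, 3 yr", 150),
                 ("500 cases/yr, 1 yr", 500)]:
    half_width = 1.96 * sqrt(RATE * (1 - RATE) / n)
    print(f"{label:20s} -> {RATE:.1%} +/- {half_width:.1%}")

# Even 3 years of data at a 50-case-per-year hospital (+/- 2.2%) is less
# precise than a single year at a 500-case hospital (+/- 1.2%): the
# timeliness-versus-stability trade-off described above.
```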
9. Avoid Use of Nonvalidated Administrative Data
Administrative data collected for billing purposes, unlike most clinical data, are already in electronic format, and can inexpensively produce quality rankings using validated models.27 In contrast, screening tools, such as the Agency for Healthcare Research and Quality's patient safety indicators (PSIs), were designed to identify potential quality problems, such as postoperative deep vein thrombosis, for internal quality improvement. Cases identified by the PSI software require additional chart review,28, 29 and should not be used as quality indicators. Even so, HealthGrades reports PSIs and some insurers use them in pay‐for‐performance initiatives. Improvements in PSIs, including present‐on‐admission coding, may increase accuracy,30 but these measures need to be validated before they can be adopted for public reporting.
10. Utilization Rates Should Be Evidence‐Based
Although utilization rates for most procedures vary as much as 2‐fold by state or institution, there is little evidence for a best rate. Nevertheless, HealthGrades reports utilization rates for several obstetrical procedures. At present, there are no standards for these, and it is possible that utilization could be too low in some places. Further research is needed; until then, utilization should not purport to measure quality.
Discussion
The growing commitment to making hospital performance data public could transform the quality and safety of care in the US, introducing competition on quality and price and fostering informed consumer choice. To date, the promise of public reporting remains only partially fulfilled. Few hospitals have done more than comply with regulatory mandates and payor incentives, and consumers have failed to respond. To capture the full benefits of public reporting, we have made 10 recommendations to benefit patients and better engage hospitals. We suggest that reporting be patient‐centered, with an emphasis on making the data useful, meaningful, important, interpretable, and relevant. At the same time, hospitals, which are being judged on their performance, should have a level playing field, with measures that are timely, consistent, severity‐adjusted, evidence‐based, and which foster good clinical care. Of the 3 services we examined, Hospital Compare came closest to meeting these recommendations.
Although this blueprint for public reporting is easy to draft, it is challenging to implement. In particular, some of our suggestions, such as the one regarding risk adjustment, may not currently be feasible, because the complexity and cost of collecting clinical data, even in the era of electronic medical records, may be prohibitive. Until such data are readily available, it may be preferable to report nothing at all, rather than report data that are misleading. In the rush to make hospitals accountable, enthusiasm has often outstripped science,31 and several measures have had to be revised because of unintended consequences.32
Any initiative to improve public reporting should have the buy‐in of all stakeholders, but particularly hospitals, which stand to benefit in several ways. By receiving regular feedback, they can focus on improving care, becoming better organizations. These improvements may be rewarded through direct compensation (pay‐for‐performance), decreased costs from complications, or increased market share. Hospitals will be more engaged if the data reflect actual quality, are adequately adjusted for severity, and acknowledge the role of chance. Otherwise, they will merely comply, or worse, look for opportunities to game the system. To succeed, public reporting needs to involve hospitals in establishing standards for reporting and validation, as well as auditing procedures to prevent fraud.33 The Hospital Quality Alliance (HQA, Washington, DC), a first step in this direction, at present has few measures. NSQIP (American College of Surgeons, Chicago, IL) is perhaps a better example of hospitals cooperating to set measurement standards to promote best practices. Public release of NSQIP data might accelerate progress. Alternatively, the National Quality Forum (NQF, Washington, DC) could expand its role from endorsing quality measures to include standardizing the way these measures are used in public reporting.
Still, if you build it, will they come? To date, public reporting has not been embraced by the public, despite its stated interest in the information. Several explanations could be offered. First, we may be presenting the wrong data. Process measures and mortality rates are important but represent abstract concepts for most patients. Surveys tell us that patients value most the experiences of other patients.14, 21 They want to know whether their pain will be controlled, whether the doctor will listen to them, whether the nurse will come when they call. The recent advent of the HCAHPS survey (AHRQ, Washington, DC) is another positive step. Stratifying the results by diagnosis and adding a few diagnosis‐specific questions would make HCAHPS even more valuable. Second, the data may not be readily available. Although most public reporting is done on the web, older patients who are deciding about hospitals may not have Internet access. Some reports are still proprietary, and cost could present another obstacle. Finally, even if freely available and patient‐centered, the results may not be interpretable by physicians, let alone patients.34
If public reporting is to succeed, it will require measures that better reflect patients' concerns. To collect the massive amounts of data required and present them in a timely fashion, better electronic record systems will be necessary. But these are no panacea; others have noted that the Department of Veterans Affairs, a leader in electronic records, still invests considerable time and money to review charts for NSQIP.35 Given the value that Americans place on transparency in other facets of their lives, it is clear that public reporting is here to stay. While much progress has been made over the past 5 years, additional research is needed to better measure quality from the patient's perspective, to determine how this information can be used to guide decision‐making, and to reward hospitals for offering the highest‐quality care.
Acknowledgements
The authors thank Kenneth Flax for his help with an earlier version of this manuscript.
References

1. Committee on Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
2. Care in U.S. hospitals: the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265–274.
3. Achieving and sustaining improved quality: lessons from New York state and cardiac surgery. Health Aff (Millwood). 2002;21(4):40–51.
4. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008;148(2):111–123.
5. Hospital performance reports: impact on quality, market share, and reputation. Health Aff (Millwood). 2005;24(4):1150–1160.
6. Does publicizing hospital performance stimulate quality improvement efforts? Health Aff (Millwood). 2003;22(2):84–94.
7. Improving the outcomes of coronary artery bypass surgery in New York State. JAMA. 1994;271(10):761–766.
8. Declines in hospital mortality associated with a regional initiative to measure hospital performance. Am J Med Qual. 1997;12(2):103–112.
9. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356(5):486–496.
10. Do quality report cards play a role in HMOs' contracting practices? Evidence from New York State. Health Serv Res. 2000;35(1 Pt 2):319–332.
11. Quality of cardiac surgeons and managed care contracting practices. Health Serv Res. 2002;37(5):1129–1144.
12. Using performance data to identify preferred hospitals. Health Serv Res. 2007;42(6 Pt 1):2109–2119; discussion 2294–2323.
13. Use of public performance reports: a survey of patients undergoing cardiac surgery. JAMA. 1998;279(20):1638–1642.
14. How do elderly patients decide where to go for major surgery? Telephone interview survey. BMJ. 2005;331(7520):821.
15. The effect of publicly reporting hospital performance on market share and risk‐adjusted mortality at high‐mortality hospitals. Med Care. 2003;41(6):729–740.
16. The predictive accuracy of the New York State coronary artery bypass surgery report‐card system. Health Aff (Millwood). 2006;25(3):844–855.
17. Publicly disclosed information about the quality of health care: response of the US public. Qual Saf Health Care. 2001;10(2):96–103.
18. Choosing the best hospital: the limitations of public reporting of hospital quality. Health Aff (Millwood). 2008;27(6):1680–1687.
19. Will quality report cards help consumers? Health Aff (Millwood). 1997;16(3):218–228.
20. Agency for Healthcare Research and Quality. HCUPnet, Healthcare Cost and Utilization Project. Available at: http://hcupnet.ahrq.gov. Accessed January 2009.
21. Recovering from cardiac surgery: what patients want you to know. Am J Crit Care. 2002;11(4):333–343.
22. Trip Advisor. Available at: http://www.tripadvisor.com. Accessed January 2009.
23. Less is more in presenting quality information to consumers. Med Care Res Rev. 2007;64(2):169–190.
24. MarketWatch: illness and injury as contributors to bankruptcy. Health Aff (Millwood). 2005;(Suppl Web Exclusives):W5‐63–W5‐73.
25. The Lake Wobegon effect: when all the patients are sicker. Am J Med Qual. 2006;21(6):365–366.
26. Surgical mortality as an indicator of hospital quality: the problem with small sample size. JAMA. 2004;292(7):847–851.
27. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693–1701.
28. Can administrative data be used to compare postoperative complication rates across hospitals? Med Care. 2002;40(10):856–867.
29. Impact of diagnosis‐timing indicators on measures of safety, comorbidity, and case mix groupings from administrative data sources. Med Care. 2007;45(8):781–788.
30. Do the AHRQ patient safety indicators flag conditions that are present at the time of hospital admission? Med Care. 2008;46(5):516–522.
31. The tension between needing to improve care and knowing how to do it. N Engl J Med. 2007;357(6):608–613.
32. Public reporting of antibiotic timing in patients with pneumonia: lessons from a flawed performance measure. Ann Intern Med. 2008;149(1):29–32.
33. The GAAP in quality measurement and reporting. JAMA. 2007;298(15):1800–1802.
34. Consumer competencies and the use of comparative quality information: it isn't just about literacy. Med Care Res Rev. 2007;64(4):379–394.
35. Performance measurement in search of a path. N Engl J Med. 2007;356(9):951–953.
Acknowledging striking deficiencies in the quality and safety of healthcare, the Institute of Medicine, policy makers, and payors have called for transformation of the US healthcare system.1 Public reporting of hospital performance is one key strategy for accelerating improvement2 and may improve quality in several ways. First, feedback about performance relative to peers may stimulate quality improvement activities by appealing to professionalism. Second, the desire to preserve one's reputation by not appearing on a list of poor performers may be a powerful incentive. Finally, patients and referring providers could use reports to select high‐quality hospitals, thereby shifting care from low‐quality to high‐quality hospitals and stimulating quality improvement efforts to maintain or enhance market share.
Almost 20 years after New York and Pennsylvania began reporting cardiac surgery outcomes,3 the evidence that public reporting improves healthcare quality is equivocal.4 Moreover, stakeholders have embraced public reporting to differing degrees. Public reporting does lead to greater engagement in quality improvement activities,58 and additional financial incentives provide modest incremental benefits.9 Purchasers, too, are starting to pay attention.10 In New York State, payors appear to contract more with high‐quality surgeons and avoid poorly performing outliers.11 Some payors are creating tiered systems, assigning higher patient copayments for hospitals with poor quality metrics. These new systems have not been rigorously studied and should raise concern among hospitals.12
In contrast to hospitals and payors, patients have been slow to embrace public reporting. In a survey of coronary artery bypass graft (CABG) patients in Pennsylvania, only 2% said that public reporting of mortality rates affected their decision making.13 Eight years later, only 11% of patients sought information about hospitals before deciding on elective major surgery,14 although a majority of patients in both studies expressed interest in the information. It is not clear whether recent proliferation of information on the internet will change patient behavior, but to date public reporting appears not to effect market share.5, 15, 16
Barriers to patients' use of public reporting include difficulty accessing the information, lack of trust, information that is not salient, and data that are difficult to interpret.17 In the absence of consensus on what or how to report, a growing number of organizations, including state and federal government, accrediting bodies, private foundations, and for‐profit companies report a variety of measures relating to structure, processes, and outcomes. Although these sites purport to target consumers, they sometimes offer conflicting information18 and are not easily interpreted by lay readers.19
To realize the benefits of public reporting, and minimize the unintended consequences, rating systems must report salient information in a way that is comprehensible to patients and trusted by the doctors who advise them. At the same time, they should be fair to hospitals and offer useful data for quality improvement. We offer 10 recommendations for improving the public reporting of healthcare quality information: 5 describing what to report and 5 detailing how it should be reported (Figure 1). We also examine 3 leading performance reporting programs to see how well they implement these recommendations.
Recommendations to Make Data Salient for Patients
1. Prioritize Elective Procedures
Hospital quality is not uniform across conditions.2 For data to be salient, then, it should be disease‐specific and focus on common elective procedures, for which consumer choice is possible. Table 1 compares 3 popular reporting services. Hospital Compare, produced by the Center for Medicare Services (CMS, US Department of Health and Human Services, Washington, DC), provides process of care measures for 4 conditions, 3 of which are not elective. The fourth, surgical infection prevention, contains 5 measures3 related to perioperative antibiotics and 2 related to thromboembolism prophylaxisfor all surgical cases. Recently, more conditions have been added, but reports are limited to the number of cases and mean Medicare charge. By year 2011, however, Hospital Compare will offer many new measures, including rates of central line infection, ventilator‐associated pneumonia, and surgical site infection. HealthGrades, a private company, offers comparative mortality rates on over 30 diagnoses, of which 15 can be considered elective, at least some of the time. Only the Leapfrog group, an industry consortium, focuses exclusively on elective procedures, offering volume measures on 7 and outcome measures on 2.
Rule | Hospital Compare | HealthGrades | Leapfrog | |||
---|---|---|---|---|---|---|
| ||||||
1. Prioritize elective procedures | Yes | 22/28 at least partially elective | Yes | 15/31 at least partially elective | Yes | 7/8 elective |
2. Include quality of life and outcome data, if possible | Yes | Mortality for AMI and CHF | Yes | Mortality or complications* | Yes | Outcomes for CABG, PCI, and AVR |
3. Include standardized patient satisfaction and service measures | Yes | HCAHPS | No | No | ||
4. Offer composite measures that are weighted and evidence‐based | No | No | Specialty excellence award, not evidence‐based | No | ||
5. Costs comparisons should include patient prices | Yes | Average Medicare payment | Yes | Charges, health plan and Medicare costs available for a fee | No | |
6. Adjust outcomes for severity and risk | Yes | Methodology published on website | Yes | Methodology not public | Yes | Various methodologies published or referenced on website |
7. Identify differences not due to chance | Yes | Compares mortality to national mean | Yes | Compares mortality or complications to mean | Yes | Compares mortality to national mean |
8. Standardize reporting periods | October 2005 to September 2006 | 2004‐2006 | 12‐24 months, ending 12/31/07 or 6/30/08 | |||
9. Avoid use of nonvalidated administrative data | Yes | None used | No | Uses PSIs for safety rating | Yes | None used |
10. Utilization rates should be evidence‐based | No | Surgical case volume of Medicare patients | No | Includes Caesarian‐section rates | Yes | Some case volume rates are evidence‐based |
2. Include Quality of Life and Outcome Data
Outcomes are more valuable to patients than process measures, but the risk adjustment needed to compare outcomes requires considerable effort. So far, public reporting of risk‐adjusted outcomes has been limited almost exclusively to mortality. Yet a patient contemplating knee replacement surgery would find no meaningful difference in mortalitythere were only 510 deaths nationally in year 200620but might be interested in whether patients return to full mobility after surgery, and all patients should compare rates of nosocomial infections. For some low‐risk procedures, HealthGrades Inc. (Golden, CO) includes a composite measure of major complications, including complication of an orthopedic implant, stroke, cardiac arrest, excessive bleeding, and some types of infection; CMS will soon add rates of infection and readmission.
3. Include Measures of Patient Experience, Such as Satisfaction and Service Measures
Beyond outcomes, patients want to know about the experience of others.21 Satisfaction surveys should be standardized and made disease‐specific, since patients' experiences may differ between the cardiology suite and the delivery unit. Questions could address the attentiveness of the nursing staff, how well privacy was respected, how easy it was to deal with insurance issues, whether patients were promptly informed of test results, and whether the care team answered questions fully. Medicare has begun reporting patient satisfaction using the Hospital Consumer Assessment of Healthcare Providers (HCAHPS) survey on Hospital Compare, but the data are not disease‐specific and audit a very small number of patients from each institution. Other services are unlikely to perform their own surveys, as multiple surveys would prove burdensome. Social networking sites that allow patients to post their own personal reviews of hospitals and doctors offer an additional if less reliable dimension to traditional public reporting. Such sites are already transforming the market for other industries, such as travel.22
4. Offer Composite Measures That Are Weighted and Evidence‐Based
Interpreting multiple measures, some of which are more important than others, and some of which have better evidence than others, is difficult for health care providers and may be impossible for patients. Is it more important to get aspirin on arrival or at discharge? Also, how does a patient weigh a 1% difference in the number of heart attack patients who get aspirin on arrival against a 14% difference in those who are offered smoking cessation? Because patients may be overwhelmed by data,23 public reports should include evidence‐based, weighted measures of overall care for a given condition, with higher weights attached to those process measures most likely to have clinical benefit, and careful attention to visual representations that convey relative differences.19, 23 More sophisticated measures should be developed to guard against overuse. For example, while hospitals should be rewarded for providing vaccination, they should be penalized for vaccinating the same patient twice.
None of the services we examined provides weighted outcomes. Leapfrog (The Leapfrog Group, Washington, DC) offers a composite snapshot containing 9 pie charts, divided into 4 leaps. The 6 pies representing high‐risk procedures are of equal size, even though 2 of these, esophagectomy and pancreatic resection represent very rare surgeries, even at major medical centers. From a visual perspective, however, these are equivalent to having computerized physician order entry and full‐time intensive care unit staffing, which affect thousands more patients. Similarly, in determining pay‐for‐performance measures, CMS created a composite based on the total number of opportunities of all interventions, weighting all measures equally. Because no validated weighting measures exist, future research will be necessary to achieve this goal. Also, none of the evidence‐based measures contained safeguards against overtreatment.
5. Cost Comparisons Should Include Patient Prices
In an era of patient copayments and deductibles, consumers are increasingly aware of costs. For patients with very high deductible plans or no health insurance, hospital fees are a common cause of bankruptcy.24 Several public reporting agencies, including Hospital Compare and HealthGrades have incorporated Medicare costs into their reported measures, but these have little connection to what patients actually pay. Health sites aimed at consumers should publish the average patient copayment.
Recommendations to Ensure That Data Reflects Hospital Quality
6. Adjust Outcomes for Severity and Risk
Not all bypass operations are the same and not all patients are at equal risk. More difficult operations (eg, CABG for a patient with a previous bypass) will have more complications; similarly, patients with serious comorbidities will experience worse outcomes. Since hospitals which specialize in a procedure will attract complicated cases and higher risk patients, it is important to adjust outcomes to account for these differences. Otherwise, hospitals and surgeons may be discouraged from taking difficult cases. Outside of cardiac surgery, most risk adjustment systems use administrative claims data but vary dramatically in the numbers of variables considered and the underlying proprietary models, which are often criticized as being black boxes that yield discordant results.25 Thus, a hospital's mortality may appear below expected by 1 system and above expected by another. Instead, risk adjustment systems should include clinical data abstracted from patient records using standardized data definitions. Although costly to collect, clinical data offer more predictive information than do administrative data. For example, for heart failure patients undergoing CABG, the ejection fraction predicts mortality better than many stable comorbid diagnoses. A single transparent risk‐adjustment system should be recognized as the industry standard. The American College of Surgeons' standardized risk‐adjusted outcome reporting for the National Surgical Quality Improvement Program (NSQIP) is a good example of such an effort.
7. Identify Differences Not Due to Chance
As a result of random variation, during any period, some hospitals will appear better than average and others worse. Statistical tests should be employed to identify hospitals that differ from the mean, and to allow consumers to compare 2 hospitals directly, with appropriate caveats when the hospitals serve very different patient populations. Medicare's mortality rating system for myocardial infarction identifies only 17 hospitals in the nation as better than average and 7 as worse, out of 4,500 institutions. HealthGrades compares hospitals' actual mortality or complication rates to their predicted rates based on disease‐specific logistic regression models and reports whether the hospital is statistically better or worse than predicted. Hospitals are not compared directly to one another. Given the rarity of mortality in most procedures, other outcome measures will be necessary to distinguish among hospitals.26
8. Standardize Reporting Periods
In a world of continuous quality improvement, public reporting should represent a hospital's recent performance, but reporting periods also need to be long enough to provide a stable estimate of infrequent events, especially at low‐volume institutions. In contrast, the lag time between the end of the reporting period and public availability should be kept to a minimum. We found that reporting periods varied from 1 to 3 years, and did not always cover the same years for all conditions, even on the same website. Some data were 3 years old. Patients will have a hard time making decisions on data that is 1 year old, and hospitals will have little incentive to make improvements that will not be acknowledged for years.
9. Avoid Use of Nonvalidated Administrative Data
Administrative data collected for billing purposes, unlike most clinical data, are already in electronic format, and can inexpensively produce quality rankings using validated models.27 In contrast, screening tools, such as the Agency for Healthcare Research and Quality's patient safety indicators (PSIs), were designed to identify potential quality problems, such as postoperative deep vein thrombosis, for internal quality improvement. Cases identified by the PSI software require additional chart review,28, 29 and should not be used as quality indicators. Even so, HealthGrades reports PSIs and some insurers use them in pay‐for‐performance initiatives. Improvements in PSIs, including present‐on‐admission coding, may increase accuracy,30 but these measures need to be validated before they can be adopted for public reporting.
10. Utilization Rates Should Be Evidence‐Based
Although utilization rates for most procedures vary as much as 2‐fold by state or institution, there is little evidence for a best rate. Nevertheless, HealthGrades reports utilization rates for several obstetrical procedures. At present, there are no standards for these, and it is possible that utilization could be too low in some places. Further research is needed; until then, utilization should not purport to measure quality.
Discussion
The growing commitment to making hospital performance data public could transform the quality and safety of care in the US, introducing competition on quality and price and fostering informed consumer choice. To date, the promise of public reporting remains only partially fulfilled. Few hospitals have done more than comply with regulatory mandates and payer incentives, and consumers have failed to respond. To capture the full benefits of public reporting, we have made 10 recommendations to benefit patients and better engage hospitals. We suggest that reporting be patient‐centered, with an emphasis on making the data useful, meaningful, important, interpretable, and relevant. At the same time, hospitals, which are being judged on their performance, should have a level playing field, with measures that are timely, consistent, severity‐adjusted, evidence‐based, and which foster good clinical care. Of the 3 services we examined, Hospital Compare came closest to meeting these recommendations.
Although this blueprint for public reporting is easy to draft, it is challenging to implement. In particular, some of our suggestions, such as the one regarding risk adjustment, may not currently be feasible, because the complexity and cost of collecting clinical data, even in the era of electronic medical records, may be prohibitive. Until such data are readily available, it may be preferable to report nothing at all, rather than report data that are misleading. In the rush to make hospitals accountable, enthusiasm has often outstripped science,31 and several measures have had to be revised for unintended consequences.32
Any initiative to improve public reporting should have the buy‐in of all stakeholders, but particularly hospitals, which stand to benefit in several ways. By receiving regular feedback, they can focus on improving care, becoming better organizations. These improvements may be rewarded through direct compensation (pay‐for‐performance), decreased costs from complications, or increased market share. Hospitals will be more engaged if the data reflect actual quality, are adequately adjusted for severity, and acknowledge the role of chance. Otherwise, they will merely comply, or worse, look for opportunities to game the system. To succeed, public reporting needs to involve hospitals in establishing standards for reporting and validation, as well as auditing procedures to prevent fraud.33 The Hospital Quality Alliance (HQA, Washington, DC), a first step in this direction, at present has few measures. NSQIP (American College of Surgeons, Chicago, IL) is perhaps a better example of hospitals cooperating to set measurement standards to promote best‐practices. Public release of NSQIP data might accelerate progress. Alternatively, the National Quality Forum (NQF, Washington, DC) could expand its role from endorsing quality measures to include standardizing the way these measures are used in public reporting.
Still, if you build it, will they come? To date, public reporting has not been embraced by the public, despite its stated interest in the information. Several explanations could be offered. First, we may be presenting the wrong data. Process measures and mortality rates are important but represent abstract concepts for most patients. Surveys tell us that patients value most the experiences of other patients.14, 21 They want to know whether their pain will be controlled, whether the doctor will listen to them, whether the nurse will come when they call. The recent advent of the HCAHPS survey (AHRQ, Washington, DC) is another positive step. Stratifying the results by diagnosis and adding a few diagnosis‐specific questions would make HCAHPS even more valuable. Second, the data may not be readily available. Although most public reporting is done on the web, older patients who are deciding about hospitals may not have Internet access. Some reports are still proprietary, and cost could present another obstacle. Finally, even if freely‐available and patient‐centered, the results may not be interpretable by physicians, let alone patients.34
If public reporting is to succeed, it will require measures that better reflect patients' concerns. In order to collect the massive amounts of data required and present them in a timely fashion, better electronic record systems will be necessary. But these are no panacea; others have noted that the Department of Veterans Affairs, a leader in electronic records, still invests considerable time and money to review charts for NSQIP.35 Given the value that Americans place on transparency in other facets of their lives, it is clear that public reporting is here to stay. While much progress has been made over the past 5 years, additional research is needed to better measure quality from the patient's perspective, and to determine how this information can be used to help guide decision‐making, and to reward hospitals for offering the highest‐quality care.
Acknowledgements
The authors thank Kenneth Flax for his help with an earlier version of this manuscript.
Acknowledging striking deficiencies in the quality and safety of healthcare, the Institute of Medicine, policy makers, and payors have called for transformation of the US healthcare system.1 Public reporting of hospital performance is one key strategy for accelerating improvement2 and may improve quality in several ways. First, feedback about performance relative to peers may stimulate quality improvement activities by appealing to professionalism. Second, the desire to preserve one's reputation by not appearing on a list of poor performers may be a powerful incentive. Finally, patients and referring providers could use reports to select high‐quality hospitals, thereby shifting care from low‐quality to high‐quality hospitals and stimulating quality improvement efforts to maintain or enhance market share.
Almost 20 years after New York and Pennsylvania began reporting cardiac surgery outcomes,3 the evidence that public reporting improves healthcare quality is equivocal.4 Moreover, stakeholders have embraced public reporting to differing degrees. Public reporting does lead to greater engagement in quality improvement activities,58 and additional financial incentives provide modest incremental benefits.9 Purchasers, too, are starting to pay attention.10 In New York State, payors appear to contract more with high‐quality surgeons and avoid poorly performing outliers.11 Some payors are creating tiered systems, assigning higher patient copayments for hospitals with poor quality metrics. These new systems have not been rigorously studied and should raise concern among hospitals.12
In contrast to hospitals and payors, patients have been slow to embrace public reporting. In a survey of coronary artery bypass graft (CABG) patients in Pennsylvania, only 2% said that public reporting of mortality rates affected their decision making.13 Eight years later, only 11% of patients sought information about hospitals before deciding on elective major surgery,14 although a majority of patients in both studies expressed interest in the information. It is not clear whether recent proliferation of information on the internet will change patient behavior, but to date public reporting appears not to effect market share.5, 15, 16
Barriers to patients' use of public reporting include difficulty accessing the information, lack of trust, information that is not salient, and data that are difficult to interpret.17 In the absence of consensus on what or how to report, a growing number of organizations, including state and federal government, accrediting bodies, private foundations, and for‐profit companies report a variety of measures relating to structure, processes, and outcomes. Although these sites purport to target consumers, they sometimes offer conflicting information18 and are not easily interpreted by lay readers.19
To realize the benefits of public reporting, and minimize the unintended consequences, rating systems must report salient information in a way that is comprehensible to patients and trusted by the doctors who advise them. At the same time, they should be fair to hospitals and offer useful data for quality improvement. We offer 10 recommendations for improving the public reporting of healthcare quality information: 5 describing what to report and 5 detailing how it should be reported (Figure 1). We also examine 3 leading performance reporting programs to see how well they implement these recommendations.
Recommendations to Make Data Salient for Patients
1. Prioritize Elective Procedures
Hospital quality is not uniform across conditions.2 For data to be salient, then, it should be disease‐specific and focus on common elective procedures, for which consumer choice is possible. Table 1 compares 3 popular reporting services. Hospital Compare, produced by the Center for Medicare Services (CMS, US Department of Health and Human Services, Washington, DC), provides process of care measures for 4 conditions, 3 of which are not elective. The fourth, surgical infection prevention, contains 5 measures3 related to perioperative antibiotics and 2 related to thromboembolism prophylaxisfor all surgical cases. Recently, more conditions have been added, but reports are limited to the number of cases and mean Medicare charge. By year 2011, however, Hospital Compare will offer many new measures, including rates of central line infection, ventilator‐associated pneumonia, and surgical site infection. HealthGrades, a private company, offers comparative mortality rates on over 30 diagnoses, of which 15 can be considered elective, at least some of the time. Only the Leapfrog group, an industry consortium, focuses exclusively on elective procedures, offering volume measures on 7 and outcome measures on 2.
Rule | Hospital Compare | HealthGrades | Leapfrog | |||
---|---|---|---|---|---|---|
| ||||||
1. Prioritize elective procedures | Yes | 22/28 at least partially elective | Yes | 15/31 at least partially elective | Yes | 7/8 elective |
2. Include quality of life and outcome data, if possible | Yes | Mortality for AMI and CHF | Yes | Mortality or complications* | Yes | Outcomes for CABG, PCI, and AVR |
3. Include standardized patient satisfaction and service measures | Yes | HCAHPS | No | No | ||
4. Offer composite measures that are weighted and evidence‐based | No | No | Specialty excellence award, not evidence‐based | No | ||
5. Costs comparisons should include patient prices | Yes | Average Medicare payment | Yes | Charges, health plan and Medicare costs available for a fee | No | |
6. Adjust outcomes for severity and risk | Yes | Methodology published on website | Yes | Methodology not public | Yes | Various methodologies published or referenced on website |
7. Identify differences not due to chance | Yes | Compares mortality to national mean | Yes | Compares mortality or complications to mean | Yes | Compares mortality to national mean |
8. Standardize reporting periods | October 2005 to September 2006 | 2004‐2006 | 12‐24 months, ending 12/31/07 or 6/30/08 | |||
9. Avoid use of nonvalidated administrative data | Yes | None used | No | Uses PSIs for safety rating | Yes | None used |
10. Utilization rates should be evidence‐based | No | Surgical case volume of Medicare patients | No | Includes Caesarian‐section rates | Yes | Some case volume rates are evidence‐based |
2. Include Quality of Life and Outcome Data
Outcomes are more valuable to patients than process measures, but the risk adjustment needed to compare outcomes requires considerable effort. So far, public reporting of risk‐adjusted outcomes has been limited almost exclusively to mortality. Yet a patient contemplating knee replacement surgery would find no meaningful difference in mortalitythere were only 510 deaths nationally in year 200620but might be interested in whether patients return to full mobility after surgery, and all patients should compare rates of nosocomial infections. For some low‐risk procedures, HealthGrades Inc. (Golden, CO) includes a composite measure of major complications, including complication of an orthopedic implant, stroke, cardiac arrest, excessive bleeding, and some types of infection; CMS will soon add rates of infection and readmission.
3. Include Measures of Patient Experience, Such as Satisfaction and Service Measures
Beyond outcomes, patients want to know about the experience of others.21 Satisfaction surveys should be standardized and made disease‐specific, since patients' experiences may differ between the cardiology suite and the delivery unit. Questions could address the attentiveness of the nursing staff, how well privacy was respected, how easy it was to deal with insurance issues, whether patients were promptly informed of test results, and whether the care team answered questions fully. Medicare has begun reporting patient satisfaction using the Hospital Consumer Assessment of Healthcare Providers (HCAHPS) survey on Hospital Compare, but the data are not disease‐specific and audit a very small number of patients from each institution. Other services are unlikely to perform their own surveys, as multiple surveys would prove burdensome. Social networking sites that allow patients to post their own personal reviews of hospitals and doctors offer an additional if less reliable dimension to traditional public reporting. Such sites are already transforming the market for other industries, such as travel.22
4. Offer Composite Measures That Are Weighted and Evidence‐Based
Interpreting multiple measures, some of which are more important than others, and some of which have better evidence than others, is difficult for health care providers and may be impossible for patients. Is it more important to get aspirin on arrival or at discharge? Also, how does a patient weigh a 1% difference in the number of heart attack patients who get aspirin on arrival against a 14% difference in those who are offered smoking cessation? Because patients may be overwhelmed by data,23 public reports should include evidence‐based, weighted measures of overall care for a given condition, with higher weights attached to those process measures most likely to have clinical benefit, and careful attention to visual representations that convey relative differences.19, 23 More sophisticated measures should be developed to guard against overuse. For example, while hospitals should be rewarded for providing vaccination, they should be penalized for vaccinating the same patient twice.
None of the services we examined provides weighted outcomes. Leapfrog (The Leapfrog Group, Washington, DC) offers a composite snapshot containing 9 pie charts, divided into 4 leaps. The 6 pies representing high‐risk procedures are of equal size, even though 2 of these, esophagectomy and pancreatic resection represent very rare surgeries, even at major medical centers. From a visual perspective, however, these are equivalent to having computerized physician order entry and full‐time intensive care unit staffing, which affect thousands more patients. Similarly, in determining pay‐for‐performance measures, CMS created a composite based on the total number of opportunities of all interventions, weighting all measures equally. Because no validated weighting measures exist, future research will be necessary to achieve this goal. Also, none of the evidence‐based measures contained safeguards against overtreatment.
5. Cost Comparisons Should Include Patient Prices
In an era of patient copayments and deductibles, consumers are increasingly aware of costs. For patients with very high deductible plans or no health insurance, hospital fees are a common cause of bankruptcy.24 Several public reporting agencies, including Hospital Compare and HealthGrades have incorporated Medicare costs into their reported measures, but these have little connection to what patients actually pay. Health sites aimed at consumers should publish the average patient copayment.
Recommendations to Ensure That Data Reflect Hospital Quality
6. Adjust Outcomes for Severity and Risk
Not all bypass operations are the same, and not all patients are at equal risk. More difficult operations (eg, CABG in a patient with a previous bypass) will have more complications; similarly, patients with serious comorbidities will experience worse outcomes. Because hospitals that specialize in a procedure attract complicated cases and higher‐risk patients, it is important to adjust outcomes for these differences; otherwise, hospitals and surgeons may be discouraged from taking difficult cases. Outside of cardiac surgery, most risk‐adjustment systems use administrative claims data but vary dramatically in the number of variables considered and in their underlying proprietary models, which are often criticized as black boxes that yield discordant results.25 Thus, a hospital's mortality may appear better than expected by 1 system and worse than expected by another. Instead, risk‐adjustment systems should include clinical data abstracted from patient records using standardized data definitions. Although costly to collect, clinical data offer more predictive information than administrative data; for heart failure patients undergoing CABG, for example, the ejection fraction predicts mortality better than many stable comorbid diagnoses. A single transparent risk‐adjustment system should be recognized as the industry standard. The American College of Surgeons' standardized risk‐adjusted outcome reporting for the National Surgical Quality Improvement Program (NSQIP) is a good example of such an effort.
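As a rough illustration of how a clinically based model adjusts for risk, the sketch below computes each patient's predicted probability of death from a logistic model and compares observed with expected deaths. The logistic form follows standard practice, but the variables, coefficients, and patient data are invented for illustration and are not drawn from any validated system.

import math

# Hypothetical clinical risk-adjustment model for CABG mortality.
# The intercept, coefficients, and patient records are invented;
# a real system (eg, NSQIP) fits them to abstracted chart data.

def predicted_mortality(age, ejection_fraction, prior_cabg):
    """Logistic model: predicted probability of death for 1 patient."""
    logit = -3.0 + 0.05 * age - 0.04 * ejection_fraction + 0.8 * prior_cabg
    return 1.0 / (1.0 + math.exp(-logit))

# Each tuple: (age, ejection fraction %, prior CABG 0/1, died 0/1).
patients = [(72, 30, 1, 1), (58, 55, 0, 0), (65, 45, 0, 0), (80, 25, 1, 1)]

expected = sum(predicted_mortality(a, ef, p) for a, ef, p, _ in patients)
observed = sum(died for *_, died in patients)

# An observed-to-expected ratio > 1 suggests worse-than-expected
# performance; < 1 suggests better-than-expected performance.
print(f"Observed: {observed}, Expected: {expected:.2f}, O/E: {observed / expected:.2f}")

Because the expected count rises with case difficulty, a hospital that takes on reoperations and patients with poor ejection fractions is not penalized for its sicker case mix.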
7. Identify Differences Not Due to Chance
As a result of random variation, during any period, some hospitals will appear better than average and others worse. Statistical tests should be employed to identify hospitals that differ from the mean, and to allow consumers to compare 2 hospitals directly, with appropriate caveats when the hospitals serve very different patient populations. Medicare's mortality rating system for myocardial infarction identifies only 17 hospitals in the nation as better than average and 7 as worse, out of 4,500 institutions. HealthGrades compares hospitals' actual mortality or complication rates to their predicted rates based on disease‐specific logistic regression models and reports whether the hospital is statistically better or worse than predicted. Hospitals are not compared directly to one another. Given the rarity of mortality in most procedures, other outcome measures will be necessary to distinguish among hospitals.26
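One way to flag outliers is sketched below: compute a 95% confidence interval around a hospital's observed rate (here a Wilson score interval) and label the hospital better or worse than average only when the interval excludes the benchmark. The counts and benchmark rate are hypothetical; note that even a 6% observed mortality rate over 200 cases cannot be distinguished from a 4% average.

import math

# Flag a hospital as a statistical outlier only if the 95% confidence
# interval around its observed rate excludes the benchmark rate.
# The counts and benchmark below are hypothetical.

def wilson_ci(deaths, cases, z=1.96):
    """95% Wilson score interval for a proportion."""
    p = deaths / cases
    denom = 1 + z**2 / cases
    center = (p + z**2 / (2 * cases)) / denom
    half = z * math.sqrt(p * (1 - p) / cases + z**2 / (4 * cases**2)) / denom
    return center - half, center + half

benchmark = 0.04                          # assumed national average mortality
lo, hi = wilson_ci(deaths=12, cases=200)  # hypothetical hospital: 6% observed
if hi < benchmark:
    print("Significantly better than average")
elif lo > benchmark:
    print("Significantly worse than average")
else:
    print("Difference could be due to chance")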
8. Standardize Reporting Periods
In a world of continuous quality improvement, public reporting should reflect a hospital's recent performance, yet reporting periods must be long enough to provide stable estimates of infrequent events, especially at low‐volume institutions. The lag between the end of the reporting period and public availability, in contrast, should be kept to a minimum. We found that reporting periods varied from 1 to 3 years and did not always cover the same years for all conditions, even on the same website; some data were 3 years old. Patients will have a hard time making decisions based on data that are even 1 year old, and hospitals will have little incentive to make improvements that will not be acknowledged for years.
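A quick calculation shows the tension. Assuming a 2% mortality rate at a hypothetical hospital performing 50 cases per year (both figures invented for illustration), the margin of error around the observed rate remains large relative to the rate itself even after 3 years of data:

import math

# Why short reporting periods are unstable at low-volume hospitals:
# the standard error of an event rate shrinks only with the square
# root of volume. The 2% rate and 50 cases/year are assumptions.

rate, annual_cases = 0.02, 50
for years in (1, 2, 3):
    n = annual_cases * years
    se = math.sqrt(rate * (1 - rate) / n)
    print(f"{years} yr(s), n={n}: rate = 2.0% +/- {1.96 * se:.1%}")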
9. Avoid Use of Nonvalidated Administrative Data
Administrative data collected for billing purposes, unlike most clinical data, are already in electronic format and can inexpensively produce quality rankings using validated models.27 In contrast, screening tools such as the Agency for Healthcare Research and Quality's patient safety indicators (PSIs) were designed to identify potential quality problems, such as postoperative deep vein thrombosis, for internal quality improvement. Cases flagged by the PSI software require additional chart review28, 29 and should not, on their own, be used as quality indicators. Even so, HealthGrades reports PSIs, and some insurers use them in pay‐for‐performance initiatives. Refinements such as present‐on‐admission coding may increase the accuracy of PSIs,30 but these measures need to be validated before they are adopted for public reporting.
10. Utilization Rates Should Be Evidence‐Based
Although utilization rates for many procedures vary as much as 2‐fold by state or institution, there is little evidence for a best rate. Nevertheless, HealthGrades reports utilization rates for several obstetrical procedures. At present there are no standards for these rates, and utilization could as easily be too low in some places as too high in others. Until further research establishes evidence‐based targets, utilization rates should not be presented as measures of quality.
Discussion
The growing commitment to making hospital performance data public could transform the quality and safety of care in the US, introducing competition on quality and price and fostering informed consumer choice. To date, the promise of public reporting remains only partially fulfilled: few hospitals have done more than comply with regulatory mandates and payor incentives, and consumers have failed to respond. To capture the full benefits of public reporting, we have made 10 recommendations intended to benefit patients and better engage hospitals. We suggest that reporting be patient‐centered, with an emphasis on making the data useful, meaningful, interpretable, and relevant. At the same time, hospitals, which are being judged on their performance, should have a level playing field, with measures that are timely, consistent, severity‐adjusted, evidence‐based, and supportive of good clinical care. Of the 3 services we examined, Hospital Compare came closest to meeting these recommendations.
Although this blueprint for public reporting is easy to draft, it is challenging to implement. In particular, some of our suggestions, such as the one regarding risk adjustment, may not currently be feasible: the complexity and cost of collecting clinical data, even in the era of electronic medical records, may be prohibitive. Until such data are readily available, it may be preferable to report nothing at all rather than to report data that are misleading. In the rush to make hospitals accountable, enthusiasm has often outstripped science,31 and several measures have had to be revised because of unintended consequences.32
Any initiative to improve public reporting should have the buy‐in of all stakeholders, particularly hospitals, which stand to benefit in several ways. By receiving regular feedback, they can focus on improving care and becoming better organizations. These improvements may be rewarded through direct compensation (pay‐for‐performance), decreased costs from complications, or increased market share. Hospitals will be more engaged if the data reflect actual quality, are adequately adjusted for severity, and acknowledge the role of chance. Otherwise, they will merely comply or, worse, look for opportunities to game the system. To succeed, public reporting needs to involve hospitals in establishing standards for reporting and validation, as well as auditing procedures to prevent fraud.33 The Hospital Quality Alliance (HQA, Washington, DC) is a first step in this direction but at present has few measures. NSQIP (American College of Surgeons, Chicago, IL) is perhaps a better example of hospitals cooperating to set measurement standards that promote best practices; public release of NSQIP data might accelerate progress. Alternatively, the National Quality Forum (NQF, Washington, DC) could expand its role from endorsing quality measures to standardizing the way those measures are used in public reporting.
Still, if you build it, will they come? To date, public reporting has not been embraced by the public, despite its stated interest in the information. Several explanations could be offered. First, we may be presenting the wrong data. Process measures and mortality rates are important but remain abstract concepts for most patients. Surveys tell us that patients value most the experiences of other patients.14, 21 They want to know whether their pain will be controlled, whether the doctor will listen to them, and whether the nurse will come when they call. The recent advent of the HCAHPS survey (AHRQ, Washington, DC) is another positive step; stratifying the results by diagnosis and adding a few diagnosis‐specific questions would make it even more valuable. Second, the data may not be readily available. Although most public reporting is done on the web, older patients who are deciding about hospitals may not have Internet access; some reports remain proprietary, and cost may present another obstacle. Finally, even data that are freely available and patient‐centered may not be interpretable by physicians, let alone patients.34
If public reporting is to succeed, it will require measures that better reflect patients' concerns. Collecting the massive amounts of data required, and presenting them in a timely fashion, will demand better electronic record systems. But these are no panacea; others have noted that the Department of Veterans Affairs, a leader in electronic records, still invests considerable time and money to review charts for NSQIP.35 Given the value that Americans place on transparency in other facets of their lives, public reporting is here to stay. Although much progress has been made over the past 5 years, additional research is needed to better measure quality from the patient's perspective, to determine how this information can be used to guide decision‐making, and to reward hospitals for offering the highest‐quality care.
Acknowledgements
The authors thank Kenneth Flax for his help with an earlier version of this manuscript.
References
1. Committee on Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
2. Care in U.S. hospitals: the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265–274.
3. Achieving and sustaining improved quality: lessons from New York state and cardiac surgery. Health Aff (Millwood). 2002;21(4):40–51.
4. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008;148(2):111–123.
5. Hospital performance reports: impact on quality, market share, and reputation. Health Aff (Millwood). 2005;24(4):1150–1160.
6. Does publicizing hospital performance stimulate quality improvement efforts? Health Aff (Millwood). 2003;22(2):84–94.
7. Improving the outcomes of coronary artery bypass surgery in New York State. JAMA. 1994;271(10):761–766.
8. Declines in hospital mortality associated with a regional initiative to measure hospital performance. Am J Med Qual. 1997;12(2):103–112.
9. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356(5):486–496.
10. Do quality report cards play a role in HMOs' contracting practices? Evidence from New York State. Health Serv Res. 2000;35(1 Pt 2):319–332.
11. Quality of cardiac surgeons and managed care contracting practices. Health Serv Res. 2002;37(5):1129–1144.
12. Using performance data to identify preferred hospitals. Health Serv Res. 2007;42(6 Pt 1):2109–2119; discussion 2294–2323.
13. Use of public performance reports: a survey of patients undergoing cardiac surgery. JAMA. 1998;279(20):1638–1642.
14. How do elderly patients decide where to go for major surgery? Telephone interview survey. BMJ. 2005;331(7520):821.
15. The effect of publicly reporting hospital performance on market share and risk‐adjusted mortality at high‐mortality hospitals. Med Care. 2003;41(6):729–740.
16. The predictive accuracy of the New York State coronary artery bypass surgery report‐card system. Health Aff (Millwood). 2006;25(3):844–855.
17. Publicly disclosed information about the quality of health care: response of the US public. Qual Saf Health Care. 2001;10(2):96–103.
18. Choosing the best hospital: the limitations of public reporting of hospital quality. Health Aff (Millwood). 2008;27(6):1680–1687.
19. Will quality report cards help consumers? Health Aff (Millwood). 1997;16(3):218–228.
20. Agency for Healthcare Research and Quality. HCUPnet, Healthcare Cost and Utilization Project. Available at: http://hcupnet.ahrq.gov. Accessed January 2009.
21. Recovering from cardiac surgery: what patients want you to know. Am J Crit Care. 2002;11(4):333–343.
22. Trip Advisor. Available at: http://www.tripadvisor.com. Accessed January 2009.
23. Less is more in presenting quality information to consumers. Med Care Res Rev. 2007;64(2):169–190.
24. MarketWatch: illness and injury as contributors to bankruptcy. Health Aff (Millwood). 2005;(Suppl Web Exclusives):W5‐63–W5‐73.
25. The Lake Wobegon effect: when all the patients are sicker. Am J Med Qual. 2006;21(6):365–366.
26. Surgical mortality as an indicator of hospital quality: the problem with small sample size. JAMA. 2004;292(7):847–851.
27. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693–1701.
28. Can administrative data be used to compare postoperative complication rates across hospitals? Med Care. 2002;40(10):856–867.
29. Impact of diagnosis‐timing indicators on measures of safety, comorbidity, and case mix groupings from administrative data sources. Med Care. 2007;45(8):781–788.
30. Do the AHRQ patient safety indicators flag conditions that are present at the time of hospital admission? Med Care. 2008;46(5):516–522.
31. The tension between needing to improve care and knowing how to do it. N Engl J Med. 2007;357(6):608–613.
32. Public reporting of antibiotic timing in patients with pneumonia: lessons from a flawed performance measure. Ann Intern Med. 2008;149(1):29–32.
33. The GAAP in quality measurement and reporting. JAMA. 2007;298(15):1800–1802.
34. Consumer competencies and the use of comparative quality information: it isn't just about literacy. Med Care Res Rev. 2007;64(4):379–394.
35. Performance measurement in search of a path. N Engl J Med. 2007;356(9):951–953.
Copyright © 2009 Society of Hospital Medicine