MicroRNA may be therapeutic target for MF

Mycosis fungoides

A Notch-related microRNA may be a therapeutic target for mycosis fungoides (MF), according to research published in the Journal of Investigative Dermatology.

The Notch pathway has been implicated in the progression of cutaneous T-cell lymphomas, but the mechanisms driving Notch activation have been unclear.

Investigators therefore studied a series of skin samples from patients with tumor-phase MF, focusing on the Notch pathway.

“The purpose of this project has been to research the state of the Notch pathway in a series of samples from patients with mycosis fungoides and compare the results to a control group to discover if Notch activation in tumors is influenced by epigenetic modifications,” said Fernando Gallardo, MD, of Hospital del Mar Investigacions Mèdiques in Barcelona, Spain.

Dr Gallardo and his colleagues examined methylation patterns in several components of the Notch pathway and confirmed that Notch1 was activated in samples from patients with MF.

They then identified a microRNA, miR-200C, that was epigenetically repressed in the samples. Further investigation revealed that this repression leads to the activation of the Notch pathway.

“The restoration of miR-200C expression, silenced in the tumor cells, could represent a potential therapeutic target for this subtype of lymphomas,” Dr Gallardo concluded.


Iron chelator tablets may now be crushed

Prescription medications

Photo courtesy of the CDC

The US Food and Drug Administration (FDA) has approved a label change for Jadenu, an oral formulation of the iron chelator Exjade (deferasirox).

Jadenu comes in tablet form, and the previous label stated that Jadenu tablets must be swallowed whole.

Now, the medication can also be crushed to help simplify administration for patients who have difficulty swallowing whole tablets.

Jadenu tablets may be crushed and mixed with soft foods, such as yogurt or applesauce, immediately prior to use.

The label notes that commercial crushers with serrated surfaces should be avoided for crushing a single 90 mg tablet. The dose should be consumed immediately and not stored.

Jadenu was granted accelerated approval from the FDA earlier this year.

It is approved to treat patients 2 years of age and older who have chronic iron overload resulting from blood transfusions, as well as to treat chronic iron overload in patients 10 years of age and older who have non-transfusion-dependent thalassemia.

The full prescribing information for Jadenu can be found at http://www.pharma.us.novartis.com/product/pi/pdf/jadenu.pdf.


Technique enables SCD detection with a smartphone

Doctor using a smartphone

Photo by Daniel Sone

Researchers say they’ve developed a simple technique for diagnosing and monitoring sickle cell disease (SCD) that could be used in regions where advanced medical technology and training are scarce.

The team created a 3D-printed box that can be attached to an Android smartphone and used to test a small blood sample.

The testing method involves magnetic levitation, which allows the user to differentiate sickle cells from normal red blood cells with the naked eye.

Savas Tasoglu, PhD, of the University of Connecticut in Storrs, and his colleagues described this technique in Scientific Reports.

First, a clinician takes a blood sample from a patient and mixes it with a common, salt-based solution that draws oxygen out of sickle cells, making them denser and easier to detect via magnetic levitation. The denser sickle cells will float at a lower height than healthy red blood cells, which are not affected by the solution.

The sample is then loaded into a disposable micro-capillary that is inserted into the tester attached to the smartphone. Inside the testing apparatus, the micro-capillary passes between 2 magnets aligned with like poles facing each other, creating a magnetic field gradient that levitates the cells.

The capillary is then illuminated with an LED that is filtered through a ground glass diffuser and magnified by an internal lens.

The smartphone’s built-in camera captures the resulting image and presents it digitally on the phone’s external display. The blood cells floating inside the capillary—whether higher-floating healthy red blood cells or lower-floating sickle cells—can be easily observed.

The device also provides clinicians with a digital readout that assigns a numerical value to the sample density to assist with the diagnosis. The entire process takes less than 15 minutes.
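
The article does not describe how the device converts a levitation height into its numerical density readout, but the underlying principle can be illustrated. Below is a minimal Python sketch that maps a measured height to a density estimate using the standard magnetic levitation equilibrium relation for diamagnetic cells suspended in a paramagnetic medium; the function name and every numeric parameter (field strength, magnet spacing, susceptibility difference, medium density, example heights) are illustrative assumptions, not values from the published device.

    # Illustrative sketch only: maps a measured levitation height to an
    # estimated cell density using the standard magnetic levitation (MagLev)
    # equilibrium relation for a diamagnetic particle in a paramagnetic medium:
    #   rho_cell = rho_medium + (delta_chi / (mu0 * g)) * (4 * B0**2 / d**2) * (h - d/2)
    # Every numeric value below is an assumed placeholder, not a parameter of
    # the published device.
    import math

    MU0 = 4.0e-7 * math.pi   # vacuum permeability (T*m/A)
    G = 9.81                 # gravitational acceleration (m/s^2)

    def estimate_density(h_m, rho_medium=1100.0, delta_chi=-3.0e-6,
                         b0=0.4, d=5.0e-3):
        """Estimate cell density (kg/m^3) from levitation height h_m (meters,
        measured from the bottom magnet), assuming a linear field gradient
        between two magnets with surface field b0 (T) separated by d (m).
        delta_chi is the (negative) susceptibility difference between the
        diamagnetic cells and the paramagnetic suspending medium."""
        return rho_medium + (delta_chi / (MU0 * G)) * (4 * b0**2 / d**2) * (h_m - d / 2)

    # Denser (deoxygenated, sickled) cells reach equilibrium closer to the
    # bottom magnet, so a lower measured height maps to a higher estimated density.
    print(estimate_density(2.0e-3))  # lower in the capillary -> higher density
    print(estimate_density(3.0e-3))  # higher in the capillary -> lower density

In practice, the readout would rest on a calibration of this kind, but the exact relation and parameters used by the reported device are not given in the article.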

“With this device, you’re getting much more specific information about your cells than some other tests,” said Stephanie Knowlton, a graduate student at the University of Connecticut.

“Rather than sending a sample to a lab and waiting 3 days to find out if you have this disease, with this device, you get on-site and portable results right away. We believe a device like this could be very helpful in third-world countries where laboratory resources may be limited.”

Dr Tasoglu’s lab has filed a provisional patent for the device and is working on expanding its capabilities so it can be applied to other diseases.


Immunosuppressant can treat autoimmune cytopenias

Red and white blood cells

New research suggests the immunosuppressant sirolimus may be a promising treatment option for patients with refractory autoimmune cytopenias.

The drug proved particularly effective in children with autoimmune lymphoproliferative syndrome (ALPS), producing complete responses in all of the ALPS patients studied.

On the other hand, patients with single-lineage autoimmune cytopenias, such as immune thrombocytopenia (ITP), did not fare as well.

David T. Teachey, MD, of The Children’s Hospital of Philadelphia in Pennsylvania, and his colleagues reported these results in Blood.

The group studied sirolimus in 30 patients with refractory autoimmune cytopenias who were 5 to 19 years of age. All of the patients were refractory to or could not tolerate corticosteroids.

Twelve patients had ALPS, 6 had single-lineage autoimmune cytopenias (4 with ITP and 2 with autoimmune hemolytic anemia [AIHA]), and 12 patients had multi-lineage cytopenias secondary to common variable immune deficiency (n=2), Evans syndrome (n=8), or systemic lupus erythematosus (n=2).

The patients received 2 mg/m2 to 2.5 mg/m2 per day of sirolimus in liquid or tablet form for 6 months. After 6 months, those who benefited from the drug were allowed to continue treatment with follow-up appointments to monitor toxicities.
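
For readers unfamiliar with body-surface-area dosing, the short sketch below shows how a 2 to 2.5 mg/m2 daily dose range translates for a hypothetical patient, using the Mosteller formula for body surface area. The example height and weight and the choice of BSA formula are illustrative assumptions; the study's actual dosing calculations and rounding rules are not described in the article.

    # Illustrative only: converting a 2-2.5 mg/m^2/day regimen into a daily
    # dose range for a hypothetical patient. The Mosteller BSA formula and the
    # example height/weight are assumptions, not details from the study.
    from math import sqrt

    def mosteller_bsa(height_cm: float, weight_kg: float) -> float:
        """Body surface area (m^2) estimated with the Mosteller formula."""
        return sqrt(height_cm * weight_kg / 3600.0)

    def daily_dose_range_mg(height_cm: float, weight_kg: float,
                            low_mg_per_m2: float = 2.0,
                            high_mg_per_m2: float = 2.5):
        """Daily dose range (mg) for a BSA-based regimen."""
        bsa = mosteller_bsa(height_cm, weight_kg)
        return bsa * low_mg_per_m2, bsa * high_mg_per_m2

    # Hypothetical 10-year-old: 140 cm, 32 kg -> BSA ~1.1 m^2, so ~2.2-2.8 mg/day
    low, high = daily_dose_range_mg(140, 32)
    print(f"{low:.1f}-{high:.1f} mg/day")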

Of the 12 children with ALPS, 11 had complete responses—normalization of blood cell counts—from 1 to 3 months after receiving sirolimus. The remaining patient achieved a complete response after 18 months.

All ALPS patients were successfully weaned off steroids and discontinued all other medications within 1 week to 1 month after starting sirolimus.

The patients with multi-lineage cytopenias also responded well to sirolimus. Eight of the 12 patients had complete responses, although these occurred later than for most ALPS patients (after 3 months).

The 6 patients with single-lineage cytopenias had less robust results—1 complete response and 2 partial responses. One child with ITP achieved a partial response but had to discontinue therapy.

One of the patients with AIHA had a complete response by 6 months and was able to stop taking other medications within a month. The other child with AIHA achieved a partial response.

For all patients, the median time on sirolimus was 2 years (range, 1–4.5 years).

The most common adverse event observed in this study was grade 1-2 mucositis (n=10). Other toxicities included elevated triglycerides and elevated cholesterol (n=2), acne (n=1), sun sensitivity (n=1), and exacerbation of gastro-esophageal reflux disease (n=1).

One patient developed hypertension 2 years after starting sirolimus, but this was temporally related to starting a new psychiatric medication.

Another patient (with Evans syndrome) developed a headache with associated white matter changes (4 different lesions). The changes were attributed to disease-associated vasculitis, and the lesions resolved over a few months with the addition of steroids. The patient was eventually diagnosed with a primary T-cell immune deficiency and underwent hematopoietic stem cell transplant.

“This study demonstrates that sirolimus is an effective and safe alternative to steroids, providing children with an improved quality of life as they continue treatment into adulthood,” Dr Teachey said. “While further studies are needed, sirolimus should be considered an early therapy option for patients with autoimmune blood disorders requiring ongoing therapy.”


Hospital Evidence‐Based Practice Centers

Evidence synthesis activities of a hospital evidence‐based practice center and impact on hospital decision making

Hospital evidence‐based practice centers (EPCs) are structures with the potential to facilitate the integration of evidence into institutional decision making to close knowing‐doing gaps[1, 2, 3, 4, 5, 6]; in the process, they can support the evolution of their parent institutions into learning healthcare systems.[7] The potential of hospital EPCs stems from their ability to identify and adapt national evidence‐based guidelines and systematic reviews for the local setting,[8] create local evidence‐based guidelines in the absence of national guidelines, use local data to help define problems and assess the impact of solutions,[9] and implement evidence into practice through computerized clinical decision support (CDS) interventions and other quality‐improvement (QI) initiatives.[9, 10] As such, hospital EPCs have the potential to strengthen relationships and understanding between clinicians and administrators[11]; foster a culture of evidence‐based practice; and improve the quality, safety, and value of care provided.[10]

Formal hospital EPCs remain uncommon in the United States,[10, 11, 12] though their numbers have expanded worldwide.[13, 14] This growth is due not to any reduced role for national EPCs, such as the National Institute for Health and Clinical Excellence[15] in the United Kingdom, or the 13 EPCs funded by the Agency for Healthcare Research and Quality (AHRQ)[16, 17] in the United States. Rather, this growth is fueled by the heightened awareness that the value of healthcare interventions often needs to be assessed locally, and that clinical guidelines that consider local context have a greater potential to improve quality and efficiency.[9, 18, 19]

Despite the increasing number of hospital EPCs globally, their impact on administrative and clinical decision making has rarely been examined,[13, 20] especially for hospital EPCs in the United States. The few studies that have assessed the impact of hospital EPCs on institutional decision making have done so in the context of technology acquisition, neglecting the role hospital EPCs may play in the integration of evidence into clinical practice. For example, the Technology Assessment Unit at McGill University Health Center found that of the 27 reviews commissioned in their first 5 years, 25 were implemented, with 6 (24%) recommending investments in new technologies and 19 (76%) recommending rejection, for a reported net hospital savings of $10 million.[21] Understanding the activities and impact of hospital EPCs is particularly critical for hospitalist leaders, who could leverage hospital EPCs to inform efforts to support the quality, safety, and value of care provided, or who may choose to establish or lead such infrastructure. The availability of such opportunities could also support hospitalist recruitment and retention.

In 2006, the University of Pennsylvania Health System (UPHS) created the Center for Evidence‐based Practice (CEP) to support the integration of evidence into practice to strengthen quality, safety, and value.[10] Cofounded by hospitalists with formal training in clinical epidemiology, the CEP performs rapid systematic reviews of the scientific literature to inform local practice and policy. In this article, we describe the first 8 years of the CEP's evidence synthesis activities and examine its impact on decision making across the health system.

METHODS

Setting

The UPHS includes 3 acute care hospitals, and inpatient facilities specializing in acute rehabilitation, skilled nursing, long‐term acute care, and hospice, with a capacity of more than 1800 beds and 75,000 annual admissions, as well as primary care and specialty clinics with more than 2 million annual outpatient visits. The CEP is funded by and organized within the Office of the UPHS Chief Medical Officer, serves all UPHS facilities, has an annual budget of approximately $1 million, and is currently staffed by a hospitalist director, 3 research analysts, 6 physician and nurse liaisons, a health economist, biostatistician, administrator, and librarians, totaling 5.5 full time equivalents.

The mission of the CEP is to support the quality, safety, and value of care at Penn through evidence‐based practice. To accomplish this mission, the CEP performs rapid systematic reviews, translates evidence into practice through the use of CDS interventions and clinical pathways, and offers education in evidence‐based decision making to trainees, staff, and faculty. This study is focused on the CEP's evidence synthesis activities.

Typically, clinical and administrative leaders submit a request to the CEP for an evidence review, the request is discussed and approved at the weekly staff meeting, and a research analyst and clinical liaison are assigned to the request and communicate with the requestor to clearly define the question of interest. Subsequently, the research analyst completes a protocol, a draft search, and a draft report, each reviewed and approved by the clinical liaison and requestor. The final report is posted to the website, disseminated to all key stakeholders across the UPHS as identified by the clinical liaisons, and integrated into decision making through various routes, including in‐person presentations to decision makers, and CDS and QI initiatives.

Study Design

The study included an analysis of an internal database of evidence reviews and a survey of report requestors, and was exempted from institutional review board review. Survey respondents were informed that their responses would be confidential and did not receive incentives.

Internal Database of Reports

Data from the CEP's internal management database were analyzed for its first 8 fiscal years (July 2006–June 2014). Variables included requestor characteristics, report characteristics (eg, technology reviewed, clinical specialty examined, completion time, and performance of meta‐analyses and GRADE [Grading of Recommendations Assessment, Development and Evaluation] analyses[22]), report use (eg, integration of report into CDS interventions) and dissemination beyond the UPHS (eg, submission to Center for Reviews and Dissemination [CRD] Health Technology Assessment [HTA] database[23] and to peer‐reviewed journals). Report completion time was defined as the time between the date work began on the report and the date the final report was sent to the requestor. The technology categorization scheme was adapted from that provided by Goodman (2004)[24] and the UK National Institute for Health Research HTA Programme.[25] We systematically assigned the technology reviewed in each report to 1 of 8 mutually exclusive categories. The clinical specialty examined in each report was determined using an algorithm (see Supporting Information, Appendix 1, in the online version of this article).

We compared the report completion times and the proportions of requestor types, technologies reviewed, and clinical specialties examined in the CEP's first 4 fiscal years (July 2006–June 2010) to those in the CEP's second 4 fiscal years (July 2010–June 2014) using t tests and χ2 tests for continuous and categorical variables, respectively.
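
As a concrete illustration of these categorical comparisons, the minimal Python sketch below reproduces one of them: the proportion of drug reports in the first 4 fiscal years (35 of 109) versus the second 4 fiscal years (25 of 140), using counts reported in Table 1 below. Whether the authors applied a continuity correction or used other software is not stated, so correction=False is an assumption, chosen because it reproduces the published P value of 0.009.

    # Minimal sketch of one categorical comparison described above, using the
    # drug-report counts from Table 1 (FY2007-2010: 35 of 109 reports;
    # FY2011-2014: 25 of 140). correction=False (no Yates correction) is an
    # assumption; it reproduces the P value of 0.009 reported in Table 1.
    from scipy.stats import chi2_contingency

    observed = [[35, 109 - 35],   # FY2007-2010: drug reports vs all other reports
                [25, 140 - 25]]   # FY2011-2014: drug reports vs all other reports
    chi2, p, dof, expected = chi2_contingency(observed, correction=False)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # approximately 6.81 and 0.009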

Survey

We conducted a Web‐based survey (see Supporting Information, Appendix 2, in the online version of this article) of all requestors of the 139 rapid reviews completed in the last 4 fiscal years. Participants who requested multiple reports were surveyed only about the most recent report. Requestors were invited to participate in the survey via e‐mail, and follow‐up e‐mails were sent to nonrespondents at 7, 14, and 16 days. Nonrespondents and respondents were compared with respect to requestor type (physician vs nonphysician) and topic evaluated (traditional HTA topics such as drugs, biologics, and devices vs nontraditional HTA topics such as processes of care). The survey was administered using REDCap[26] electronic data capture tools. The 44‐item questionnaire collected data on the interaction between the requestor and the CEP, report characteristics, report impact, and requestor satisfaction.

Survey results were imported into Microsoft Excel (Microsoft Corp, Redmond, WA) and SPSS (IBM, Armonk, NY) for analysis. Descriptive statistics were generated, and statistical comparisons were conducted using χ2 and Fisher exact tests.

RESULTS

Evidence Synthesis Activity

The CEP has produced several different report products since its inception. Evidence reviews (57%, n = 142) consist of a systematic review and analysis of the primary literature. Evidence advisories (32%, n = 79) are summaries of evidence from secondary sources such as guidelines or systematic reviews. Evidence inventories (3%, n = 7) are literature searches that describe the quantity and focus of available evidence, without analysis or synthesis.[27]

The categories of technologies examined, including their definitions and examples, are provided in Table 1. Drugs (24%, n = 60) and devices/equipment/supplies (19%, n = 48) were most commonly examined. The proportion of reports examining technology types traditionally evaluated by HTA organizations significantly decreased when comparing the first 4 years of CEP activity to the second 4 years (62% vs 38%, P < 0.01), whereas reports examining less traditionally reviewed categories increased (38% vs 62%, P < 0.01). The most common clinical specialties represented by the CEP reports were nursing (11%, n = 28), general surgery (11%, n = 28), critical care (10%, n = 24), and general medicine (9%, n = 22) (see Supporting Information, Appendix 3, in the online version of this article). Clinical departments were the most common requestors (29%, n = 72) (Table 2). The proportion of requests from clinical departments significantly increased when comparing the first 4 years to the second 4 years (20% vs 36%, P < 0.01), with requests from purchasing committees significantly decreasing (25% vs 6%, P < 0.01). The overall report completion time was 70 days, and significantly decreased when comparing the first 4 years to the second 4 years (89 days vs 50 days, P < 0.01).

Table 1. Technology Categories, Definitions, Examples, and Frequencies by Fiscal Years

Category | Definition | Examples | Total | FY2007–2010 | FY2011–2014 | P Value
Total | | | 249 (100%) | 109 (100%) | 140 (100%) |
Drug | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a pharmacologic agent | Celecoxib for pain in joint arthroplasty; colchicine for prevention of pericarditis and atrial fibrillation | 60 (24%) | 35 (32%) | 25 (18%) | 0.009
Device, equipment, and supplies | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including a component part, or accessory that is intended for use in the prevention, diagnosis, or treatment of disease and does not achieve its primary intended purposes through chemical action or metabolism[50] | Thermometers for pediatric use; femoral closure devices for cardiac catheterization | 48 (19%) | 25 (23%) | 23 (16%) | 0.19
Process of care | A report primarily examining a clinical pathway or a clinical practice guideline that significantly involves elements of prevention, diagnosis, and/or treatment or significantly incorporates 2 or more of the other technology categories | Preventing patient falls; prevention and management of delirium | 31 (12%) | 18 (17%) | 13 (9%) | 0.09
Test, scale, or risk factor | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a test intended to screen for, diagnose, classify, or monitor the progression of a disease | Computed tomography for acute chest pain; urine drug screening in chronic pain patients on opioid therapy | 31 (12%) | 8 (7%) | 23 (16%) | 0.03
Medical/surgical procedure | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a medical intervention that is not a drug, device, or test, or of the application or removal of a device | Biliary drainage for chemotherapy patients; cognitive behavioral therapy for insomnia | 26 (10%) | 8 (7%) | 18 (13%) | 0.16
Policy or organizational/managerial system | A report primarily examining laws or regulations; the organization, financing, or delivery of care, including settings of care; or healthcare providers | Medical care costs and productivity changes associated with smoking; physician training and credentialing for robotic surgery in obstetrics and gynecology | 26 (10%) | 4 (4%) | 22 (16%) | 0.002
Support system | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of an intervention designed to provide a new or improved service to patients or healthcare providers that does not fall into 1 of the other categories | Reconciliation of data from differing electronic medical records; social media, text messaging, and postdischarge communication | 14 (6%) | 3 (3%) | 11 (8%) | 0.09
Biologic | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a product manufactured in a living system | Recombinant factor VIIa for cardiovascular surgery; osteobiologics for orthopedic fusions | 13 (5%) | 8 (7%) | 5 (4%) | 0.19

Table 2. Requestor Categories and Frequencies by Fiscal Years

Category | Total | FY2007–2010 | FY2011–2014 | P Value
Total | 249 (100%) | 109 (100%) | 140 (100%) |
Clinical department | 72 (29%) | 22 (20%) | 50 (36%) | 0.007
CMO | 47 (19%) | 21 (19%) | 26 (19%) | 0.92
Purchasing committee | 35 (14%) | 27 (25%) | 8 (6%) | <0.001
Formulary committee | 22 (9%) | 12 (11%) | 10 (7%) | 0.54
Quality committee | 21 (8%) | 11 (10%) | 10 (7%) | 0.42
Administrative department | 19 (8%) | 5 (5%) | 14 (10%) | 0.11
Nursing | 14 (6%) | 4 (4%) | 10 (7%) | 0.23
Other* | 19 (8%) | 7 (6%) | 12 (9%) | 0.55

NOTE: *Other includes ad hoc committees, CEP, Children's Hospital of Philadelphia, IT committees, payers, and the primary care network. Abbreviations: CEP, Center for Evidence‐based Practice; CMO, chief medical officer; IT, information technology.

Thirty‐seven (15%) reports included meta‐analyses conducted by CEP staff. Seventy‐five reports (30%) contained an evaluation of the quality of the evidence base using GRADE analyses.[22] Of these reports, the highest GRADE of evidence available for any comparison of interest was moderate (35%, n = 26) or high (33%, n = 25) in most cases, followed by very low (19%, n = 14) and low (13%, n = 10).

Reports were disseminated in a variety of ways beyond direct dissemination and presentation to requestors and posting on the center website. Thirty reports (12%) informed CDS interventions, 24 (10%) resulted in peer‐reviewed publications, and 204 (82%) were posted to the CRD HTA database.

Evidence Synthesis Impact

A total of 139 reports were completed between July 2010 and June 2014 for 65 individual requestors. Email invitations to participate in the survey were sent to the 64 requestors employed by the UPHS. The response rate was 72% (46/64). The proportions of physician requestors and traditional HTA topics evaluated were similar across respondents and nonrespondents (43% [20/46] vs 39% [7/18], P = 0.74; and 37% [17/46] vs 44% [8/18], P = 0.58, respectively). Aggregated survey responses are presented for items using a Likert scale in Figure 1, and for items using a yes/no or ordinal scale in Table 3.

Table 3. Responses to Yes/No and Ranking Survey Questions

Item | % of Respondents Responding Affirmatively (for the ranking question, % Ranking as First Choice*)

Requestor activity
What factors prompted you to request a report from CEP? (Please select all that apply.)
  My own time constraints | 28% (13/46)
  CEP's ability to identify and synthesize evidence | 89% (41/46)
  CEP's objectivity | 52% (24/46)
  Recommendation from colleague | 30% (14/46)
Did you conduct any of your own literature searches before contacting CEP? | 67% (31/46)
Did you obtain and read any of the articles cited in CEP's report? | 63% (29/46)
Did you read the following sections of CEP's report?
  Evidence summary (at beginning of report) | 100% (45/45)
  Introduction/background | 93% (42/45)
  Methods | 84% (38/45)
  Results | 98% (43/43)
  Conclusion | 100% (43/43)

Report dissemination
Did you share CEP's report with anyone NOT involved in requesting the report or in making the final decision? | 67% (30/45)
Did you share CEP's report with anyone outside of Penn? | 7% (3/45)

Requestor preferences
Would it be helpful for CEP staff to call you after you receive any future CEP reports to answer any questions you might have? | 55% (24/44)
Following any future reports you request from CEP, would you be willing to complete a brief questionnaire? | 100% (44/44)
Please rank how you would prefer to receive reports from CEP in the future. (% ranking as first choice*)
  E‐mail containing the report as a PDF attachment | 77% (34/44)
  E‐mail containing a link to the report on CEP's website | 16% (7/44)
  In‐person presentation by the CEP analyst writing the report | 18% (8/44)
  In‐person presentation by the CEP director involved in the report | 16% (7/44)

NOTE: Abbreviations: CEP, Center for Evidence‐based Practice. *The sum of these percentages is greater than 100 percent because respondents could rank multiple options first.

Figure 1
Requestor responses to Likert survey questions. Abbreviations: CEP, Center for Evidence‐based Practice.

In general, respondents found reports easy to request, easy to use, timely, and relevant, resulting in high requestor satisfaction. In addition, 98% described the scope of content and level of detail as about right. Report impact was rated highly as well, with the evidence summary and conclusions rated as the most critical to decision making. A majority of respondents indicated that reports confirmed their tentative decision (77%, n = 34), whereas some changed their tentative decision (7%, n = 3), and others suggested the report had no effect on their tentative decision (16%, n = 7). Respondents indicated the amount of time that elapsed between receiving reports and making final decisions was 1 to 7 days (5%, n = 2), 8 to 30 days (40%, n = 17), 1 to 3 months (37%, n = 16), 4 to 6 months (9%, n = 4), or greater than 6 months (9%, n = 4). The most common reasons cited for requesting a report were the CEP's evidence synthesis skills and objectivity.

DISCUSSION

To our knowledge, this is the first comprehensive description and assessment of evidence synthesis activity by a hospital EPC in the United States. Our findings suggest that clinical and administrative leaders will request reports from a hospital EPC, and that hospital EPCs can promptly produce reports when requested. Moreover, these syntheses can address a wide range of clinical and policy topics, and can be disseminated through a variety of routes. Lastly, requestors are satisfied by these syntheses, and report that they inform decision making. These results suggest that EPCs may be an effective infrastructure paradigm for promoting evidence‐based decision making within healthcare provider organizations, and are consistent with previous analyses of hospital‐based EPCs.[21, 28, 29]

Over half of report requestors cited CEP's objectivity as a factor in their decision to request a report, underscoring the value of a neutral entity in an environment where clinical departments and hospital committees may have competing interests.[10] This asset was 1 of the primary drivers for establishing our hospital EPC. Concerns by clinical executives about the influence of industry and local politics on institutional decision making, and a desire to have clinical evidence more systematically and objectively integrated into decision making, fueled our center's funding.

The survey results also demonstrate that respondents were satisfied with the reports for many reasons, including readability, concision, timeliness, scope, and content, consistent with the evaluation of the French hospital‐based EPC CEDIT (French Committee for the Assessment and Dissemination of Technological Innovations).[29] Given the importance of readability, concision, and relevance that has been previously described,[16, 28, 30] nearly all CEP reports contain an evidence summary on the first page that highlights key findings in a concise, user‐friendly format.[31] The evidence summaries include bullet points that: (1) reference the most pertinent guideline recommendations along with their strength of recommendation and underlying quality of evidence; (2) organize and summarize study findings using the most critical clinical outcomes, including an assessment of the quality of the underlying evidence for each outcome; and (3) note important limitations of the findings.

Evidence syntheses must be timely to allow decision makers to act on the findings.[28, 32] The primary criticism of CEDIT was the lag between requests and report publication.[29] Rapid reviews, designed to inform urgent decisions, can overcome this challenge.[31, 33, 34] CEP reviews required approximately 2 months to complete on average, consistent with the most rapid timelines reported,[31, 33, 34] and much shorter than standard systematic review timelines, which can take up to 12 to 24 months.[33] Working with requestors to limit the scope of reviews to those issues most critical to a decision, using secondary resources when available, and hiring experienced research analysts help achieve these efficiencies.

The study by Bodeau‐Livinec also argues for the importance of report accessibility to ensure dissemination.[29] This is consistent with the CEP's approach, where all reports are posted on the UPHS internal website. Many also inform QI initiatives, as well as CDS interventions that address topics of general interest to acute care hospitals, such as venous thromboembolism (VTE) prophylaxis,[35] blood product transfusions,[36] sepsis care,[37, 38] and prevention of catheter‐associated urinary tract infections (CAUTI)[39] and hospital readmissions.[40] Most reports are also listed in an international database of rapid reviews,[23] and reports that address topics of general interest, have sufficient evidence to synthesize, and have no prior published systematic reviews are published in the peer‐reviewed literature.[41, 42]

The majority of reports completed by the CEP were evidence reviews, or systematic reviews of primary literature, suggesting that CEP reports often address questions previously unanswered by existing published systematic reviews; however, about a third of reports were evidence advisories, or summaries of evidence from preexisting secondary sources. The relative scarcity of high‐quality evidence bases in those reports where GRADE analyses were conducted might be expected, as requestors may be more likely to seek guidance when the evidence base on a topic is lacking. This was further supported by the small percentage (15%) of reports where adequate data of sufficient homogeneity existed to allow meta‐analyses. The small number of original meta‐analyses performed also reflects our reliance on secondary resources when available.

Only 7% of respondents reported that tentative decisions were changed based on their report. This is not surprising, as evidence reviews infrequently result in clear go or no go recommendations. More commonly, they address or inform complex clinical questions or pathways. In this context, the change/confirm/no effect framework may not completely reflect respondents' use of or benefit from reports. Thus, we included a diverse set of questions in our survey to best estimate the value of our reports. For example, when asked whether the report answered the question posed, informed their final decision, or was consistent with their final decision, 91%, 79%, and 71% agreed or strongly agreed, respectively. When asked whether they would request a report again if they had to do it all over, recommend CEP to their colleagues, and be likely to request reports in the future, at least 95% of survey respondents agreed or strongly agreed. In addition, no respondent indicated that their report was not timely enough to influence their decision. Moreover, only a minority of respondents expressed disappointment that the CEP's report did not provide actionable recommendations due to a lack of published evidence (9%, n = 4). Importantly, the large proportion of requestors indicating that reports confirmed their tentative decisions may be a reflection of hindsight bias.

The most apparent trend in the production of CEP reviews over time is the relative increase in requests by clinical departments, suggesting that the CEP is being increasingly consulted to help define best clinical practices. This is also supported by the relative increase in reports focused on policy or organizational/managerial systems. These findings suggest that hospital EPCs have value beyond the traditional realm of HTA.

This study has a number of limitations. First, not all of the eligible report requestors responded to our survey. Despite this, our response rate of 72% compares favorably with surveys published in medical journals.[43] In addition, nonresponse bias may be less important in physician surveys than surveys of the general population.[44] The similarity in requestor and report characteristics for respondents and nonrespondents supports this. Second, our survey of impact is self‐reported rather than an evaluation of actual decision making or patient outcomes. Thus, the survey relies on the accuracy of the responses. Third, recall bias must be considered, as some respondents were asked to evaluate reports that were greater than 1 year old. To reduce this bias, we asked respondents to consider the most recent report they requested, included that report as an attachment in the survey request, and only surveyed requestors from the most recent 4 of the CEP's 8 fiscal years. Fourth, social desirability bias could have also affected the survey responses, though it was likely minimized by the promise of confidentiality. Fifth, an examination of the impact of the CEP on costs was outside the scope of this evaluation; however, such information may be important to those assessing the sustainability or return on investment of such centers. Simple approaches we have previously used to approximate the value of our activities include: (1) estimating hospital cost savings resulting from decisions supported by our reports, such as the use of technologies like chlorhexidine for surgical site infections[45] or discontinuation of technologies like aprotinin for cardiac surgery[46]; and (2) estimating penalties avoided or rewards attained as a result of center‐led initiatives, such as those to increase VTE prophylaxis,[35] reduce CAUTI rates,[39] and reduce preventable mortality associated with sepsis.[37, 38] Similarly, given the focus of this study on the local evidence synthesis activities of our center, our examination did not include a detailed description of our CDS activities, or teaching activities, including our multidisciplinary workshops for physicians and nurses in evidence‐based QI[47] and our novel evidence‐based practice curriculum for medical students. Our study also did not include a description of our extramural activities, such as those supported by our contract with AHRQ as 1 of their 13 EPCs.[16, 17, 48, 49] A consideration of all of these activities enables a greater appreciation for the potential of such centers. Lastly, we examined a single EPC, which may not be representative of the diversity of hospitals and hospital staff across the United States. However, our EPC serves a diverse array of patient populations, clinical services, and service models throughout our multientity academic healthcare system, which may improve the generalizability of our experience to other settings.

As next steps, we recommend evaluation of other existing hospital EPCs nationally. Such studies could help hospitals and health systems ascertain which of their internal decisions might benefit from locally sourced rapid systematic reviews and determine whether an in‐house EPC could improve the value of care delivered.

In conclusion, our findings suggest that hospital EPCs within academic healthcare systems can efficiently synthesize and disseminate evidence for a variety of stakeholders. Moreover, these syntheses impact decision making in a variety of hospital contexts and clinical specialties. Hospitals and hospitalist leaders seeking to improve the implementation of evidence‐based practice at a systems level might consider establishing such infrastructure locally.

Acknowledgements

The authors thank Fran Barg, PhD (Department of Family Medicine and Community Health, University of Pennsylvania Perelman School of Medicine) and Joel Betesh, MD (University of Pennsylvania Health System) for their contributions to developing the survey. They did not receive any compensation for their contributions.

Disclosures: An earlier version of this work was presented as a poster at the 2014 AMA Research Symposium, November 7, 2014, Dallas, Texas. Mr. Jayakumar reports having received a University of Pennsylvania fellowship as a summer intern at the Center for Evidence‐based Practice. Dr. Umscheid cocreated and directs a hospital evidence‐based practice center, is the Senior Associate Director of an Agency for Healthcare Research and Quality Evidence‐Based Practice Center, and is a past member of the Medicare Evidence Development and Coverage Advisory Committee, which uses evidence reports developed by the Evidence‐based Practice Centers of the Agency for Healthcare Research and Quality. Dr. Umscheid's contribution was supported in part by the National Center for Research Resources, grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, grant UL1TR000003. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. None of the funders had a role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication. Dr. Lavenberg, Dr. Mitchell, and Mr. Leas are employed as research analysts by a hospital evidence‐based practice center. Dr. Doshi is supported in part by a hospital evidence‐based practice center and is an Associate Director of an Agency for Healthcare Research and Quality Evidence‐based Practice Center. Dr. Goldmann is emeritus faculty at Penn, is supported in part by a hospital evidence‐based practice center, and is the Vice President and Chief Quality Assurance Officer in Clinical Solutions, a division of Elsevier, Inc., a global publishing company, and director of the division's Evidence‐based Medicine Center. Dr. Williams cocreated and codirects a hospital evidence‐based practice center. Dr. Brennan has oversight for and helped create a hospital evidence‐based practice center.

References
  1. Avorn J, Fischer M. “Bench to behavior”: translating comparative effectiveness research into improved clinical practice. Health Aff (Millwood). 2010;29(10):1891-1900.
  2. Rajab MH, Villamaria FJ, Rohack JJ. Evaluating the status of “translating research into practice” at a major academic healthcare system. Int J Technol Assess Health Care. 2009;25(1):84-89.
  3. Timbie JW, Fox DS, Busum K, Schneider EC. Five reasons that many comparative effectiveness studies fail to change patient care and clinical practice. Health Aff (Millwood). 2012;31(10):2168-2175.
  4. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.
  5. Grol R, Grimshaw J. From best evidence to best practice: effective implementation of change in patients' care. Lancet. 2003;362(9391):1225-1230.
  6. Umscheid CA, Brennan PJ. Incentivizing “structures” over “outcomes” to bridge the knowing‐doing gap. JAMA Intern Med. 2015;175(3):354.
  7. Olsen L, Aisner D, McGinnis JM, eds. Institute of Medicine (US) Roundtable on Evidence‐Based Medicine. The Learning Healthcare System: Workshop Summary. Washington, DC: National Academies Press; 2007. Available at: http://www.ncbi.nlm.nih.gov/books/NBK53494/. Accessed October 29, 2014.
  8. Harrison MB, Legare F, Graham ID, Fervers B. Adapting clinical practice guidelines to local context and assessing barriers to their use. Can Med Assoc J. 2010;182(2):E78-E84.
  9. Mitchell MD, Williams K, Brennan PJ, Umscheid CA. Integrating local data into hospital‐based healthcare technology assessment: two case studies. Int J Technol Assess Health Care. 2010;26(3):294-300.
  10. Umscheid CA, Williams K, Brennan PJ. Hospital‐based comparative effectiveness centers: translating research into practice to improve the quality, safety and value of patient care. J Gen Intern Med. 2010;25(12):1352-1355.
  11. Gutowski C, Maa J, Hoo KS, Bozic KJ, Bozic K. Health technology assessment at the University of California‐San Francisco. J Healthc Manag Am Coll Healthc Exec. 2011;56(1):15-29; discussion 29-30.
  12. Schottinger J, Odell RM. Kaiser Permanente Southern California regional technology management process: evidence‐based medicine operationalized. Perm J. 2006;10(1):38-41.
  13. Gagnon M‐P. Hospital‐based health technology assessment: developments to date. Pharmacoeconomics. 2014;32(9):819-824.
  14. Cicchetti A, Marchetti M, Dibidino R, Corio M. Hospital based health technology assessment world‐wide survey. Available at: http://www.htai.org/fileadmin/HTAi_Files/ISG/HospitalBasedHTA/2008Files/HospitalBasedHTAISGSurveyReport.pdf. Accessed October 11, 2015.
  15. Stevens AJ, Longson C. At the center of health care policy making: the use of health technology assessment at NICE. Med Decis Making. 2013;33(3):320-324.
  16. Atkins D, Fink K, Slutsky J. Better information for better health care: the Evidence‐based Practice Center program and the Agency for Healthcare Research and Quality. Ann Intern Med. 2005;142(12 part 2):1035-1041.
  17. Slutsky JR, Clancy CM. AHRQ's Effective Health Care Program: why comparative effectiveness matters. Am J Med Qual. 2009;24(1):67-70.
  18. Grimshaw JM, Russell IT. Effect of clinical guidelines on medical practice: a systematic review of rigorous evaluations. Lancet. 1993;342(8883):1317-1322.
  19. Graham ID, Logan J, Harrison MB, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof. 2006;26(1):13-24.
  20. Gagnon M‐P, Desmartis M, Poder T, Witteman W. Effects and repercussions of local/hospital‐based health technology assessment (HTA): a systematic review. Syst Rev. 2014;3:129.
  21. McGregor M, Arnoldo J, Barkun J, et al. Impact of TAU Reports. McGill University Health Centre. Available at: https://francais.mcgill.ca/files/tau/FINAL_TAU_IMPACT_REPORT_FEB_2008.pdf. Published Feb 1, 2008. Accessed August 19, 2014.
  22. Guyatt GH, Oxman AD, Vist GE, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336(7650):924-926.
  23. Booth AM, Wright KE, Outhwaite H. Centre for Reviews and Dissemination databases: value, content, and developments. Int J Technol Assess Health Care. 2010;26(4):470-472.
  24. Goodman C. HTA 101. Introduction to Health Technology Assessment. Available at: https://www.nlm.nih.gov/nichsr/hta101/ta10103.html. Accessed October 11, 2015.
  25. National Institute for Health Research. Remit. NIHR HTA Programme. Available at: http://www.nets.nihr.ac.uk/programmes/hta/remit. Accessed August 20, 2014.
  26. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research Electronic Data Capture (REDCap)—a metadata‐driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377-381.
  27. Mitchell MD, Williams K, Kuntz G, Umscheid CA. When the decision is what to decide: using evidence inventory reports to focus health technology assessments. Int J Technol Assess Health Care. 2011;27(2):127-132.
  28. McGregor M, Brophy JM. End‐user involvement in health technology assessment (HTA) development: a way to increase impact. Int J Technol Assess Health Care. 2005;21(2):263-267.
  29. Bodeau‐Livinec F, Simon E, Montagnier‐Petrissans C, Joël M‐E, Féry‐Lemonnier E. Impact of CEDIT recommendations: an example of health technology assessment in a hospital network. Int J Technol Assess Health Care. 2006;22(2):161-168.
  30. Alexander JA, Hearld LR, Jiang HJ, Fraser I. Increasing the relevance of research to health care managers: hospital CEO imperatives for improving quality and lowering costs. Health Care Manage Rev. 2007;32(2):150-159.
  31. Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev. 2012;1(1):10.
  32. Brown M, Hurwitz J, Brixner D, Malone DC. Health care decision makers' use of comparative effectiveness research: report from a series of focus groups. J Manag Care Pharm. 2013;19(9):745-754.
  33. Watt A, Cameron A, Sturm L, et al. Rapid reviews versus full systematic reviews: an inventory of current methods and practice in health technology assessment. Int J Technol Assess Health Care. 2008;24(2):133-139.
  34. Hartling L, Guise J‐M, Kato E, et al. EPC Methods: An Exploration of Methods and Context for the Production of Rapid Reviews. Rockville, MD: Agency for Healthcare Research and Quality; 2015. Available at: http://www.ncbi.nlm.nih.gov/books/NBK274092. Accessed March 5, 2015.
  35. Umscheid CA, Hanish A, Chittams J, Weiner MG, Hecht TEH. Effectiveness of a novel and scalable clinical decision support intervention to improve venous thromboembolism prophylaxis: a quasi‐experimental study. BMC Med Inform Decis Mak. 2012;12:92.
  36. McGreevey JD. Order sets in electronic health records: principles of good practice. Chest. 2013;143(1):228-235.
  37. Umscheid CA, Betesh J, VanZandbergen C, et al. Development, implementation, and impact of an automated early warning and response system for sepsis. J Hosp Med. 2015;10(1):26-31.
  38. Guidi JL, Clark K, Upton MT, et al. Clinician perception of the effectiveness of an automated early warning and response system for sepsis in an academic medical center. Ann Am Thorac Soc. 2015;12(10):1514-1519.
  39. Baillie CA, Epps M, Hanish A, Fishman NO, French B, Umscheid CA. Usability and impact of a computerized clinical decision support intervention designed to reduce urinary catheter utilization and catheter‐associated urinary tract infections. Infect Control Hosp Epidemiol. 2014;35(9):1147-1155.
  40. Baillie CA, VanZandbergen C, Tait G, et al. The readmission risk flag: using the electronic health record to automatically identify patients at risk for 30‐day readmission. J Hosp Med. 2013;8(12):689-695.
  41. Mitchell MD, Mikkelsen ME, Umscheid CA, Lee I, Fuchs BD, Halpern SD. A systematic review to inform institutional decisions about the use of extracorporeal membrane oxygenation during the H1N1 influenza pandemic. Crit Care Med. 2010;38(6):1398-1404.
  42. Mitchell MD, Anderson BJ, Williams K, Umscheid CA. Heparin flushing and other interventions to maintain patency of central venous catheters: a systematic review. J Adv Nurs. 2009;65(10):2007-2021.
  43. Asch DA, Jedrziewski MK, Christakis NA. Response rates to mail surveys published in medical journals. J Clin Epidemiol. 1997;50(10):1129-1136.
  44. Kellerman SE, Herold J. Physician response to surveys: a review of the literature. Am J Prev Med. 2001;20(1):61-67.
  45. Lee I, Agarwal RK, Lee BY, Fishman NO, Umscheid CA. Systematic review and cost analysis comparing use of chlorhexidine with use of iodine for preoperative skin antisepsis to prevent surgical site infection. Infect Control Hosp Epidemiol. 2010;31(12):1219-1229.
  46. Umscheid CA, Kohl BA, Williams K. Antifibrinolytic use in adult cardiac surgery. Curr Opin Hematol. 2007;14(5):455-467.
  47. Wyer PC, Umscheid CA, Wright S, Silva SA, Lang E. Teaching evidence assimilation for collaborative health care (TEACH) 2009–2014: building evidence‐based capacity within health care provider organizations. EGEMS (Wash DC). 2015;3(2):1165.
  48. Han JH, Sullivan N, Leas BF, Pegues DA, Kaczmarek JL, Umscheid CA. Cleaning hospital room surfaces to prevent health care‐associated infections: a technical brief [published online August 11, 2015]. Ann Intern Med. doi:10.7326/M15‐1192.
  49. Umscheid CA, Agarwal RK, Brennan PJ, Healthcare Infection Control Practices Advisory Committee. Updating the guideline development methodology of the Healthcare Infection Control Practices Advisory Committee (HICPAC). Am J Infect Control. 2010;38(4):264-273.
  50. U.S. Food and Drug Administration. FDA basics—What is a medical device? Available at: http://www.fda.gov/AboutFDA/Transparency/Basics/ucm211822.htm. Accessed November 12, 2014.


Hospital evidence‐based practice centers (EPCs) are structures with the potential to facilitate the integration of evidence into institutional decision making to close knowing‐doing gaps[1, 2, 3, 4, 5, 6]; in the process, they can support the evolution of their parent institutions into learning healthcare systems.[7] The potential of hospital EPCs stems from their ability to identify and adapt national evidence‐based guidelines and systematic reviews for the local setting,[8] create local evidence‐based guidelines in the absence of national guidelines, use local data to help define problems and assess the impact of solutions,[9] and implement evidence into practice through computerized clinical decision support (CDS) interventions and other quality‐improvement (QI) initiatives.[9, 10] As such, hospital EPCs have the potential to strengthen relationships and understanding between clinicians and administrators[11]; foster a culture of evidence‐based practice; and improve the quality, safety, and value of care provided.[10]

Formal hospital EPCs remain uncommon in the United States,[10, 11, 12] though their numbers have expanded worldwide.[13, 14] This growth is due not to any reduced role for national EPCs, such as the National Institute for Health and Clinical Excellence[15] in the United Kingdom, or the 13 EPCs funded by the Agency for Healthcare Research and Quality (AHRQ)[16, 17] in the United States. Rather, this growth is fueled by the heightened awareness that the value of healthcare interventions often needs to be assessed locally, and that clinical guidelines that consider local context have a greater potential to improve quality and efficiency.[9, 18, 19]

Despite the increasing number of hospital EPCs globally, their impact on administrative and clinical decision making has rarely been examined,[13, 20] especially for hospital EPCs in the United States. The few studies that have assessed the impact of hospital EPCs on institutional decision making have done so in the context of technology acquisition, neglecting the role hospital EPCs may play in the integration of evidence into clinical practice. For example, the Technology Assessment Unit at McGill University Health Center found that of the 27 reviews commissioned in their first 5 years, 25 were implemented, with 6 (24%) recommending investments in new technologies and 19 (76%) recommending rejection, for a reported net hospital savings of $10 million.[21] Understanding the activities and impact of hospital EPCs is particularly critical for hospitalist leaders, who could leverage hospital EPCs to inform efforts to support the quality, safety, and value of care provided, or who may choose to establish or lead such infrastructure. The availability of such opportunities could also support hospitalist recruitment and retention.

In 2006, the University of Pennsylvania Health System (UPHS) created the Center for Evidence‐based Practice (CEP) to support the integration of evidence into practice to strengthen quality, safety, and value.[10] Cofounded by hospitalists with formal training in clinical epidemiology, the CEP performs rapid systematic reviews of the scientific literature to inform local practice and policy. In this article, we describe the first 8 years of the CEP's evidence synthesis activities and examine its impact on decision making across the health system.

METHODS

Setting

The UPHS includes 3 acute care hospitals, and inpatient facilities specializing in acute rehabilitation, skilled nursing, long‐term acute care, and hospice, with a capacity of more than 1800 beds and 75,000 annual admissions, as well as primary care and specialty clinics with more than 2 million annual outpatient visits. The CEP is funded by and organized within the Office of the UPHS Chief Medical Officer, serves all UPHS facilities, has an annual budget of approximately $1 million, and is currently staffed by a hospitalist director, 3 research analysts, 6 physician and nurse liaisons, a health economist, biostatistician, administrator, and librarians, totaling 5.5 full time equivalents.

The mission of the CEP is to support the quality, safety, and value of care at Penn through evidence‐based practice. To accomplish this mission, the CEP performs rapid systematic reviews, translates evidence into practice through the use of CDS interventions and clinical pathways, and offers education in evidence‐based decision making to trainees, staff, and faculty. This study is focused on the CEP's evidence synthesis activities.

Typically, clinical and administrative leaders submit a request to the CEP for an evidence review, the request is discussed and approved at the weekly staff meeting, and a research analyst and clinical liaison are assigned to the request and communicate with the requestor to clearly define the question of interest. Subsequently, the research analyst completes a protocol, a draft search, and a draft report, each reviewed and approved by the clinical liaison and requestor. The final report is posted to the website, disseminated to all key stakeholders across the UPHS as identified by the clinical liaisons, and integrated into decision making through various routes, including in‐person presentations to decision makers, and CDS and QI initiatives.

Study Design

The study included an analysis of an internal database of evidence reviews and a survey of report requestors, and was exempted from institutional review board review. Survey respondents were informed that their responses would be confidential and did not receive incentives.

Internal Database of Reports

Data from the CEP's internal management database were analyzed for its first 8 fiscal years (July 2006June 2014). Variables included requestor characteristics, report characteristics (eg, technology reviewed, clinical specialty examined, completion time, and performance of meta‐analyses and GRADE [Grading of Recommendations Assessment, Development and Evaluation] analyses[22]), report use (eg, integration of report into CDS interventions) and dissemination beyond the UPHS (eg, submission to Center for Reviews and Dissemination [CRD] Health Technology Assessment [HTA] database[23] and to peer‐reviewed journals). Report completion time was defined as the time between the date work began on the report and the date the final report was sent to the requestor. The technology categorization scheme was adapted from that provided by Goodman (2004)[24] and the UK National Institute for Health Research HTA Programme.[25] We systematically assigned the technology reviewed in each report to 1 of 8 mutually exclusive categories. The clinical specialty examined in each report was determined using an algorithm (see Supporting Information, Appendix 1, in the online version of this article).

We compared the report completion times and the proportions of requestor types, technologies reviewed, and clinical specialties examined in the CEP's first 4 fiscal years (July 2006June 2010) to those in the CEP's second 4 fiscal years (July 2010June 2014) using t tests and 2 tests for continuous and categorical variables, respectively.

Survey

We conducted a Web‐based survey (see Supporting Information, Appendix 2, in the online version of this article) of all requestors of the 139 rapid reviews completed in the last 4 fiscal years. Participants who requested multiple reports were surveyed only about the most recent report. Requestors were invited to participate in the survey via e‐mail, and follow‐up e‐mails were sent to nonrespondents at 7, 14, and 16 days. Nonrespondents and respondents were compared with respect to requestor type (physician vs nonphysician) and topic evaluated (traditional HTA topics such as drugs, biologics, and devices vs nontraditional HTA topics such as processes of care). The survey was administered using REDCap[26] electronic data capture tools. The 44‐item questionnaire collected data on the interaction between the requestor and the CEP, report characteristics, report impact, and requestor satisfaction.

Survey results were imported into Microsoft Excel (Microsoft Corp, Redmond, WA) and SPSS (IBM, Armonk, NY) for analysis. Descriptive statistics were generated, and statistical comparisons were conducted using 2 and Fisher exact tests.

RESULTS

Evidence Synthesis Activity

The CEP has produced several different report products since its inception. Evidence reviews (57%, n = 142) consist of a systematic review and analysis of the primary literature. Evidence advisories (32%, n = 79) are summaries of evidence from secondary sources such as guidelines or systematic reviews. Evidence inventories (3%, n = 7) are literature searches that describe the quantity and focus of available evidence, without analysis or synthesis.[27]

The categories of technologies examined, including their definitions and examples, are provided in Table 1. Drugs (24%, n = 60) and devices/equipment/supplies (19%, n = 48) were most commonly examined. The proportion of reports examining technology types traditionally evaluated by HTA organizations significantly decreased when comparing the first 4 years of CEP activity to the second 4 years (62% vs 38%, P < 0.01), whereas reports examining less traditionally reviewed categories increased (38% vs 62%, P < 0.01). The most common clinical specialties represented by the CEP reports were nursing (11%, n = 28), general surgery (11%, n = 28), critical care (10%, n = 24), and general medicine (9%, n = 22) (see Supporting Information, Appendix 3, in the online version of this article). Clinical departments were the most common requestors (29%, n = 72) (Table 2). The proportion of requests from clinical departments significantly increased when comparing the first 4 years to the second 4 years (20% vs 36%, P < 0.01), with requests from purchasing committees significantly decreasing (25% vs 6%, P < 0.01). The overall report completion time was 70 days, and significantly decreased when comparing the first 4 years to the second 4 years (89 days vs 50 days, P < 0.01).

Technology Categories, Definitions, Examples, and Frequencies by Fiscal Years
CategoryDefinitionExamplesTotal2007201020112014P Value
Total  249 (100%)109 (100%)140 (100%) 
DrugA report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a pharmacologic agentCelecoxib for pain in joint arthroplasty; colchicine for prevention of pericarditis and atrial fibrillation60 (24%)35 (32%)25 (18%)0.009
Device, equipment, and suppliesA report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including a component part, or accessory that is intended for use in the prevention, diagnosis, or treatment of disease and does not achieve its primary intended purposes though chemical action or metabolism[50]Thermometers for pediatric use; femoral closure devices for cardiac catheterization48 (19%)25 (23%)23 (16%)0.19
Process of careA report primarily examining a clinical pathway or a clinical practice guideline that significantly involves elements of prevention, diagnosis, and/or treatment or significantly incorporates 2 or more of the other technology categoriesPreventing patient falls; prevention and management of delirium31 (12%)18 (17%)13 (9%)0.09
Test, scale, or risk factorA report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a test intended to screen for, diagnose, classify, or monitor the progression of a diseaseComputed tomography for acute chest pain; urine drug screening in chronic pain patients on opioid therapy31 (12%)8 (7%)23 (16%)0.03
Medical/surgical procedureA report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a medical intervention that is not a drug, device, or test or of the application or removal of a deviceBiliary drainage for chemotherapy patients; cognitive behavioral therapy for insomnia26 (10%)8 (7%)18 (13%)0.16
Policy or organizational/managerial systemA report primarily examining laws or regulations; the organization, financing, or delivery of care, including settings of care; or healthcare providersMedical care costs and productivity changes associated with smoking; physician training and credentialing for robotic surgery in obstetrics and gynecology26 (10%)4 (4%)22 (16%)0.002
Support systemA report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of an intervention designed to provide a new or improved service to patients or healthcare providers that does not fall into 1 of the other categoriesReconciliation of data from differing electronic medical records; social media, text messaging, and postdischarge communication14 (6%)3 (3%)11 (8%)0.09
BiologicA report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a product manufactured in a living systemRecombinant factor VIIa for cardiovascular surgery; osteobiologics for orthopedic fusions13 (5%)8 (7%)5 (4%)0.19
Requestor Categories and Frequencies by Fiscal Years
CategoryTotal2007201020112014P Value
  • NOTE: *Other includes ad hoc committees, CEP, Children's Hospital of Philadelphia, IT committees, payers, and the primary care network.. Abbreviations: CEP, Center for Evidence‐based Practice; CMO, chief medical officer; IT, information technology.

Total249 (100%)109 (100%)140 (100%) 
Clinical department72 (29%)22 (20%)50 (36%)0.007
CMO47 (19%)21 (19%)26 (19%)0.92
Purchasing committee35 (14%)27 (25%)8 (6%)<0.001
Formulary committee22 (9%)12 (11%)10 (7%)0.54
Quality committee21 (8%)11 (10%)10 (7%)0.42
Administrative department19 (8%)5 (5%)14 (10%)0.11
Nursing14 (6%)4 (4%)10 (7%)0.23
Other*19 (8%)7 (6%)12 (9%)0.55

Thirty‐seven (15%) reports included meta‐analyses conducted by CEP staff. Seventy‐five reports (30%) contained an evaluation of the quality of the evidence base using GRADE analyses.[22] Of these reports, the highest GRADE of evidence available for any comparison of interest was moderate (35%, n = 26) or high (33%, n = 25) in most cases, followed by very low (19%, n = 14) and low (13%, n = 10).

Reports were disseminated in a variety of ways beyond direct dissemination and presentation to requestors and posting on the center website. Thirty reports (12%) informed CDS interventions, 24 (10%) resulted in peer‐reviewed publications, and 204 (82%) were posted to the CRD HTA database.

Evidence Synthesis Impact

A total of 139 reports were completed between July 2010 and June 2014 for 65 individual requestors. Email invitations to participate in the survey were sent to the 64 requestors employed by the UPHS. The response rate was 72% (46/64). The proportions of physician requestors and traditional HTA topics evaluated were similar across respondents and nonrespondents (43% [20/46] vs 39% [7/18], P = 0.74; and 37% [17/46] vs 44% [8/18], P = 0.58, respectively). Aggregated survey responses are presented for items using a Likert scale in Figure 1, and for items using a yes/no or ordinal scale in Table 3.

Responses to Yes/No and Ranking Survey Questions
Item | % of Respondents Responding Affirmatively (for ranking items, % Ranking the Option as First Choice*)
  • NOTE: Abbreviations: CEP, Center for Evidence‐based Practice. *The sum of these percentages is greater than 100 percent because respondents could rank multiple options first.

Requestor activity
What factors prompted you to request a report from CEP? (Please select all that apply.)
My own time constraints | 28% (13/46)
CEP's ability to identify and synthesize evidence | 89% (41/46)
CEP's objectivity | 52% (24/46)
Recommendation from colleague | 30% (14/46)
Did you conduct any of your own literature searches before contacting CEP? | 67% (31/46)
Did you obtain and read any of the articles cited in CEP's report? | 63% (29/46)
Did you read the following sections of CEP's report?
Evidence summary (at beginning of report) | 100% (45/45)
Introduction/background | 93% (42/45)
Methods | 84% (38/45)
Results | 98% (43/43)
Conclusion | 100% (43/43)
Report dissemination
Did you share CEP's report with anyone NOT involved in requesting the report or in making the final decision? | 67% (30/45)
Did you share CEP's report with anyone outside of Penn? | 7% (3/45)
Requestor preferences
Would it be helpful for CEP staff to call you after you receive any future CEP reports to answer any questions you might have? | 55% (24/44)
Following any future reports you request from CEP, would you be willing to complete a brief questionnaire? | 100% (44/44)
Please rank how you would prefer to receive reports from CEP in the future.
E‐mail containing the report as a PDF attachment | 77% (34/44)
E‐mail containing a link to the report on CEP's website | 16% (7/44)
In‐person presentation by the CEP analyst writing the report | 18% (8/44)
In‐person presentation by the CEP director involved in the report | 16% (7/44)
Figure 1. Requestor responses to Likert survey questions. Abbreviations: CEP, Center for Evidence‐based Practice.

In general, respondents found reports easy to request, easy to use, timely, and relevant, resulting in high requestor satisfaction. In addition, 98% described the scope of content and level of detail as about right. Report impact was rated highly as well, with the evidence summary and conclusions rated as the most critical to decision making. A majority of respondents indicated that reports confirmed their tentative decision (77%, n = 34), whereas some changed their tentative decision (7%, n = 3), and others suggested the report had no effect on their tentative decision (16%, n = 7). Respondents indicated the amount of time that elapsed between receiving reports and making final decisions was 1 to 7 days (5%, n = 2), 8 to 30 days (40%, n = 17), 1 to 3 months (37%, n = 16), 4 to 6 months (9%, n = 4), or greater than 6 months (9%, n = 4). The most common reasons cited for requesting a report were the CEP's evidence synthesis skills and objectivity.

DISCUSSION

To our knowledge, this is the first comprehensive description and assessment of evidence synthesis activity by a hospital EPC in the United States. Our findings suggest that clinical and administrative leaders will request reports from a hospital EPC, and that hospital EPCs can promptly produce reports when requested. Moreover, these syntheses can address a wide range of clinical and policy topics, and can be disseminated through a variety of routes. Lastly, requestors are satisfied by these syntheses, and report that they inform decision making. These results suggest that EPCs may be an effective infrastructure paradigm for promoting evidence‐based decision making within healthcare provider organizations, and are consistent with previous analyses of hospital‐based EPCs.[21, 28, 29]

Over half of report requestors cited CEP's objectivity as a factor in their decision to request a report, underscoring the value of a neutral entity in an environment where clinical departments and hospital committees may have competing interests.[10] This asset was 1 of the primary drivers for establishing our hospital EPC. Concerns by clinical executives about the influence of industry and local politics on institutional decision making, and a desire to have clinical evidence more systematically and objectively integrated into decision making, fueled our center's funding.

The survey results also demonstrate that respondents were satisfied with the reports for many reasons, including readability, concision, timeliness, scope, and content, consistent with the evaluation of the French hospital‐based EPC CEDIT (French Committee for the Assessment and Dissemination of Technological Innovations).[29] Given the importance of readability, concision, and relevance that has been previously described,[16, 28, 30] nearly all CEP reports contain an evidence summary on the first page that highlights key findings in a concise, user‐friendly format.[31] The evidence summaries include bullet points that: (1) reference the most pertinent guideline recommendations along with their strength of recommendation and underlying quality of evidence; (2) organize and summarize study findings using the most critical clinical outcomes, including an assessment of the quality of the underlying evidence for each outcome; and (3) note important limitations of the findings.

Evidence syntheses must be timely to allow decision makers to act on the findings.[28, 32] The primary criticism of CEDIT was the lag between requests and report publication.[29] Rapid reviews, designed to inform urgent decisions, can overcome this challenge.[31, 33, 34] CEP reviews required approximately 2 months to complete on average, consistent with the most rapid timelines reported,[31, 33, 34] and much shorter than standard systematic review timelines, which can take up to 12 to 24 months.[33] Working with requestors to limit the scope of reviews to those issues most critical to a decision, using secondary resources when available, and hiring experienced research analysts help achieve these efficiencies.

The study by Bodeau‐Livinec also argues for the importance of report accessibility to ensure dissemination.[29] This is consistent with the CEP's approach, where all reports are posted on the UPHS internal website. Many also inform QI initiatives, as well as CDS interventions that address topics of general interest to acute care hospitals, such as venous thromboembolism (VTE) prophylaxis,[35] blood product transfusions,[36] sepsis care,[37, 38] and prevention of catheter‐associated urinary tract infections (CAUTI)[39] and hospital readmissions.[40] Most reports are also listed in an international database of rapid reviews,[23] and reports that address topics of general interest, have sufficient evidence to synthesize, and have no prior published systematic reviews are published in the peer‐reviewed literature.[41, 42]

The majority of reports completed by the CEP were evidence reviews, or systematic reviews of primary literature, suggesting that CEP reports often address questions previously unanswered by existing published systematic reviews; however, about a third of reports were evidence advisories, or summaries of evidence from preexisting secondary sources. The relative scarcity of high‐quality evidence bases in those reports where GRADE analyses were conducted might be expected, as requestors may be more likely to seek guidance when the evidence base on a topic is lacking. This was further supported by the small percentage (15%) of reports where adequate data of sufficient homogeneity existed to allow meta‐analyses. The small number of original meta‐analyses performed also reflects our reliance on secondary resources when available.

Only 7% of respondents reported that tentative decisions were changed based on their report. This is not surprising, as evidence reviews infrequently result in clear go or no go recommendations. More commonly, they address or inform complex clinical questions or pathways. In this context, the change/confirm/no effect framework may not completely reflect respondents' use of or benefit from reports. Thus, we included a diverse set of questions in our survey to best estimate the value of our reports. For example, when asked whether the report answered the question posed, informed their final decision, or was consistent with their final decision, 91%, 79%, and 71% agreed or strongly agreed, respectively. When asked whether they would request a report again if they had to do it all over, recommend CEP to their colleagues, and be likely to request reports in the future, at least 95% of survey respondents agreed or strongly agreed. In addition, no respondent indicated that their report was not timely enough to influence their decision. Moreover, only a minority of respondents expressed disappointment that the CEP's report did not provide actionable recommendations due to a lack of published evidence (9%, n = 4). Importantly, the large proportion of requestors indicating that reports confirmed their tentative decisions may be a reflection of hindsight bias.

The most apparent trend in the production of CEP reviews over time is the relative increase in requests by clinical departments, suggesting that the CEP is being increasingly consulted to help define best clinical practices. This is also supported by the relative increase in reports focused on policy or organizational/managerial systems. These findings suggest that hospital EPCs have value beyond the traditional realm of HTA.

This study has a number of limitations. First, not all of the eligible report requestors responded to our survey. Despite this, our response rate of 72% compares favorably with surveys published in medical journals.[43] In addition, nonresponse bias may be less important in physician surveys than surveys of the general population.[44] The similarity in requestor and report characteristics for respondents and nonrespondents supports this. Second, our survey of impact is self‐reported rather than an evaluation of actual decision making or patient outcomes. Thus, the survey relies on the accuracy of the responses. Third, recall bias must be considered, as some respondents were asked to evaluate reports that were greater than 1 year old. To reduce this bias, we asked respondents to consider the most recent report they requested, included that report as an attachment in the survey request, and only surveyed requestors from the most recent 4 of the CEP's 8 fiscal years. Fourth, social desirability bias could have also affected the survey responses, though it was likely minimized by the promise of confidentiality. Fifth, an examination of the impact of the CEP on costs was outside the scope of this evaluation; however, such information may be important to those assessing the sustainability or return on investment of such centers. Simple approaches we have previously used to approximate the value of our activities include: (1) estimating hospital cost savings resulting from decisions supported by our reports, such as the use of technologies like chlorhexidine for surgical site infections[45] or discontinuation of technologies like aprotinin for cardiac surgery[46]; and (2) estimating penalties avoided or rewards attained as a result of center‐led initiatives, such as those to increase VTE prophylaxis,[35] reduce CAUTI rates,[39] and reduce preventable mortality associated with sepsis.[37, 38] Similarly, given the focus of this study on the local evidence synthesis activities of our center, our examination did not include a detailed description of our CDS activities, or teaching activities, including our multidisciplinary workshops for physicians and nurses in evidence‐based QI[47] and our novel evidence‐based practice curriculum for medical students. Our study also did not include a description of our extramural activities, such as those supported by our contract with AHRQ as 1 of their 13 EPCs.[16, 17, 48, 49] A consideration of all of these activities enables a greater appreciation for the potential of such centers. Lastly, we examined a single EPC, which may not be representative of the diversity of hospitals and hospital staff across the United States. However, our EPC serves a diverse array of patient populations, clinical services, and service models throughout our multientity academic healthcare system, which may improve the generalizability of our experience to other settings.

As next steps, we recommend evaluation of other existing hospital EPCs nationally. Such studies could help hospitals and health systems ascertain which of their internal decisions might benefit from locally sourced rapid systematic reviews and determine whether an in‐house EPC could improve the value of care delivered.

In conclusion, our findings suggest that hospital EPCs within academic healthcare systems can efficiently synthesize and disseminate evidence for a variety of stakeholders. Moreover, these syntheses impact decision making in a variety of hospital contexts and clinical specialties. Hospitals and hospitalist leaders seeking to improve the implementation of evidence‐based practice at a systems level might consider establishing such infrastructure locally.

Acknowledgements

The authors thank Fran Barg, PhD (Department of Family Medicine and Community Health, University of Pennsylvania Perelman School of Medicine) and Joel Betesh, MD (University of Pennsylvania Health System) for their contributions to developing the survey. They did not receive any compensation for their contributions.

Disclosures: An earlier version of this work was presented as a poster at the 2014 AMA Research Symposium, November 7, 2014, Dallas, Texas. Mr. Jayakumar reports having received a University of Pennsylvania fellowship as a summer intern at the Center for Evidence‐based Practice. Dr. Umscheid cocreated and directs a hospital evidence‐based practice center, is the Senior Associate Director of an Agency for Healthcare Research and Quality Evidence‐Based Practice Center, and is a past member of the Medicare Evidence Development and Coverage Advisory Committee, which uses evidence reports developed by the Evidence‐based Practice Centers of the Agency for Healthcare Research and Quality. Dr. Umscheid's contribution was supported in part by the National Center for Research Resources, grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, grant UL1TR000003. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. None of the funders had a role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication. Dr. Lavenberg, Dr. Mitchell, and Mr. Leas are employed as research analysts by a hospital evidence‐based practice center. Dr. Doshi is supported in part by a hospital evidence‐based practice center and is an Associate Director of an Agency for Healthcare Research and Quality Evidence‐based Practice Center. Dr. Goldmann is emeritus faculty at Penn, is supported in part by a hospital evidence‐based practice center, and is the Vice President and Chief Quality Assurance Officer in Clinical Solutions, a division of Elsevier, Inc., a global publishing company, and director of the division's Evidence‐based Medicine Center. Dr. Williams cocreated and codirects a hospital evidence‐based practice center. Dr. Brennan has oversight for and helped create a hospital evidence‐based practice center.

References
  1. Avorn J, Fischer M. “Bench to behavior”: translating comparative effectiveness research into improved clinical practice. Health Aff (Millwood). 2010;29(10):18911900.
  2. Rajab MH, Villamaria FJ, Rohack JJ. Evaluating the status of “translating research into practice” at a major academic healthcare system. Int J Technol Assess Health Care. 2009;25(1):8489.
  3. Timbie JW, Fox DS, Busum K, Schneider EC. Five reasons that many comparative effectiveness studies fail to change patient care and clinical practice. Health Aff (Millwood). 2012;31(10):21682175.
  4. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.
  5. Grol R, Grimshaw J. From best evidence to best practice: effective implementation of change in patients' care. Lancet. 2003;362(9391):12251230.
  6. Umscheid CA, Brennan PJ. Incentivizing “structures” over “outcomes” to bridge the knowing‐doing gap. JAMA Intern Med. 2015;175(3):354.
  7. Olsen L, Aisner D, McGinnis JM, eds. Institute of Medicine (US) Roundtable on Evidence‐Based Medicine. The Learning Healthcare System: Workshop Summary. Washington, DC: National Academies Press; 2007. Available at: http://www.ncbi.nlm.nih.gov/books/NBK53494/. Accessed October 29, 2014.
  8. Harrison MB, Legare F, Graham ID, Fervers B. Adapting clinical practice guidelines to local context and assessing barriers to their use. Can Med Assoc J. 2010;182(2):E78E84.
  9. Mitchell MD, Williams K, Brennan PJ, Umscheid CA. Integrating local data into hospital‐based healthcare technology assessment: two case studies. Int J Technol Assess Health Care. 2010;26(3):294300.
  10. Umscheid CA, Williams K, Brennan PJ. Hospital‐based comparative effectiveness centers: translating research into practice to improve the quality, safety and value of patient care. J Gen Intern Med. 2010;25(12):13521355.
  11. Gutowski C, Maa J, Hoo KS, Bozic KJ, Bozic K. Health technology assessment at the University of California‐San Francisco. J Healthc Manag Am Coll Healthc Exec. 2011;56(1):1529; discussion 29–30.
  12. Schottinger J, Odell RM. Kaiser Permanente Southern California regional technology management process: evidence‐based medicine operationalized. Perm J. 2006;10(1):3841.
  13. Gagnon M‐P. Hospital‐based health technology assessment: developments to date. Pharmacoeconomics. 2014;32(9):819824.
  14. Cicchetti A, Marchetti M, Dibidino R, Corio M. Hospital based health technology assessment world‐wide survey. Available at: http://www.htai.org/fileadmin/HTAi_Files/ISG/HospitalBasedHTA/2008Files/HospitalBasedHTAISGSurveyReport.pdf. Accessed October 11, 2015.
  15. Stevens AJ, Longson C. At the center of health care policy making: the use of health technology assessment at NICE. Med Decis Making. 2013;33(3):320324.
  16. Atkins D, Fink K, Slutsky J. Better information for better health care: the Evidence‐based Practice Center program and the Agency for Healthcare Research and Quality. Ann Intern Med. 2005;142(12 part 2):10351041.
  17. Slutsky JR, Clancy CM. AHRQ's Effective Health Care Program: why comparative effectiveness matters. Am J Med Qual. 2009;24(1):6770.
  18. Grimshaw JM, Russell IT. Effect of clinical guidelines on medical practice: a systematic review of rigorous evaluations. Lancet. 1993;342(8883):13171322.
  19. Graham ID, Logan J, Harrison MB, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof. 2006;26(1):1324.
  20. Gagnon M‐P, Desmartis M, Poder T, Witteman W. Effects and repercussions of local/hospital‐based health technology assessment (HTA): a systematic review. Syst Rev. 2014;3:129.
  21. McGregor M, Arnoldo J, Barkun J, et al. Impact of TAU Reports. McGill University Health Centre. Available at: https://francais.mcgill.ca/files/tau/FINAL_TAU_IMPACT_REPORT_FEB_2008.pdf. Published Feb 1, 2008. Accessed August 19, 2014.
  22. Guyatt GH, Oxman AD, Vist GE, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336(7650):924926.
  23. Booth AM, Wright KE, Outhwaite H. Centre for Reviews and Dissemination databases: value, content, and developments. Int J Technol Assess Health Care. 2010;26(4):470472.
  24. Goodman C. HTA 101. Introduction to Health Technology Assessment. Available at: https://www.nlm.nih.gov/nichsr/hta101/ta10103.html. Accessed October 11, 2015.
  25. National Institute for Health Research. Remit. NIHR HTA Programme. Available at: http://www.nets.nihr.ac.uk/programmes/hta/remit. Accessed August 20, 2014.
  26. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research Electronic Data Capture (REDCap)—a metadata‐driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377381.
  27. Mitchell MD, Williams K, Kuntz G, Umscheid CA. When the decision is what to decide: Using evidence inventory reports to focus health technology assessments. Int J Technol Assess Health Care. 2011;27(2):127132.
  28. McGregor M, Brophy JM. End‐user involvement in health technology assessment (HTA) development: a way to increase impact. Int J Technol Assess Health Care. 2005;21(02):263267.
  29. Bodeau‐Livinec F, Simon E, Montagnier‐Petrissans C, Joël M‐E, Féry‐Lemonnier E. Impact of CEDIT recommendations: an example of health technology assessment in a hospital network. Int J Technol Assess Health Care. 2006;22(2):161168.
  30. Alexander JA, Hearld LR, Jiang HJ, Fraser I. Increasing the relevance of research to health care managers: hospital CEO imperatives for improving quality and lowering costs. Health Care Manage Rev. 2007;32(2):150159.
  31. Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev. 2012;1(1):10.
  32. Brown M, Hurwitz J, Brixner D, Malone DC. Health care decision makers' use of comparative effectiveness research: report from a series of focus groups. J Manag Care Pharm. 2013;19(9):745754.
  33. Watt A, Cameron A, Sturm L, et al. Rapid reviews versus full systematic reviews: an inventory of current methods and practice in health technology assessment. Int J Technol Assess Health Care. 2008;24(2):133139.
  34. Hartling L, Guise J‐M, Kato E, et al. EPC Methods: An Exploration of Methods and Context for the Production of Rapid Reviews. Rockville, MD: Agency for Healthcare Research and Quality; 2015. Available at: http://www.ncbi.nlm.nih.gov/books/NBK274092. Accessed March 5, 2015.
  35. Umscheid CA, Hanish A, Chittams J, Weiner MG, Hecht TEH. Effectiveness of a novel and scalable clinical decision support intervention to improve venous thromboembolism prophylaxis: a quasi‐experimental study. BMC Med Inform Decis Mak. 2012;12:92.
  36. McGreevey JD. Order sets in electronic health records: principles of good practice. Chest. 2013;143(1):228235.
  37. Umscheid CA, Betesh J, VanZandbergen C, et al. Development, implementation, and impact of an automated early warning and response system for sepsis. J Hosp Med. 2015;10(1):2631.
  38. Guidi JL, Clark K, Upton MT, et al. Clinician perception of the effectiveness of an automated early warning and response system for sepsis in an academic medical center. Ann Am Thorac Soc. 2015;12(10):15141519.
  39. Baillie CA, Epps M, Hanish A, Fishman NO, French B, Umscheid CA. Usability and impact of a computerized clinical decision support intervention designed to reduce urinary catheter utilization and catheter‐associated urinary tract infections. Infect Control Hosp Epidemiol. 2014;35(9):11471155.
  40. Baillie CA, VanZandbergen C, Tait G, et al. The readmission risk flag: using the electronic health record to automatically identify patients at risk for 30‐day readmission. J Hosp Med. 2013;8(12):689695.
  41. Mitchell MD, Mikkelsen ME, Umscheid CA, Lee I, Fuchs BD, Halpern SD. A systematic review to inform institutional decisions about the use of extracorporeal membrane oxygenation during the H1N1 influenza pandemic. Crit Care Med. 2010;38(6):13981404.
  42. Mitchell MD, Anderson BJ, Williams K, Umscheid CA. Heparin flushing and other interventions to maintain patency of central venous catheters: a systematic review. J Adv Nurs. 2009;65(10):20072021.
  43. Asch DA, Jedrziewski MK, Christakis NA. Response rates to mail surveys published in medical journals. J Clin Epidemiol. 1997;50(10):11291136.
  44. Kellerman SE, Herold J. Physician response to surveys: a review of the literature. Am J Prev Med. 2001;20(1):6167.
  45. Lee I, Agarwal RK, Lee BY, Fishman NO, Umscheid CA. Systematic review and cost analysis comparing use of chlorhexidine with use of iodine for preoperative skin antisepsis to prevent surgical site infection. Infect Control Hosp Epidemiol. 2010;31(12):12191229.
  46. Umscheid CA, Kohl BA, Williams K. Antifibrinolytic use in adult cardiac surgery. Curr Opin Hematol. 2007;14(5):455467.
  47. Wyer PC, Umscheid CA, Wright S, Silva SA, Lang E. Teaching evidence assimilation for collaborative health care (TEACH) 2009–2014: building evidence‐based capacity within health care provider organizations. EGEMS (Wash DC). 2015;3(2):1165.
  48. Han JH, Sullivan N, Leas BF, Pegues DA, Kaczmarek JL, Umscheid CA. Cleaning hospital room surfaces to prevent health care‐associated infections: a technical brief [published online August 11, 2015]. Ann Intern Med. doi:10.7326/M15‐1192.
  49. Umscheid CA, Agarwal RK, Brennan PJ, Healthcare Infection Control Practices Advisory Committee. Updating the guideline development methodology of the Healthcare Infection Control Practices Advisory Committee (HICPAC). Am J Infect Control. 2010;38(4):264273.
  50. U.S. Food and Drug Administration. FDA basics—What is a medical device? Available at: http://www.fda.gov/AboutFDA/Transparency/Basics/ucm211822.htm. Accessed November 12, 2014.
Issue
Journal of Hospital Medicine - 11(3)
Page Number
185-192
Display Headline
Evidence synthesis activities of a hospital evidence‐based practice center and impact on hospital decision making
Article Source
© 2015 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Craig A. Umscheid, MD, University of Pennsylvania Health System, 3535 Market Street Mezzanine, Suite 50, Philadelphia, PA 19104; Telephone: 215‐349‐8098; Fax: 215‐349‐5829; E‐mail: craig.umscheid@uphs.upenn.edu

Changeover of Trainee Doctors

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Glycemic control in inpatients with diabetes following august changeover of trainee doctors in England

In England, the day when trainee doctors start work for the first time in their careers or rotate to a different hospital is the first Wednesday of August. This is often referred to as Black Wednesday in the National Health Service (NHS), as it is widely perceived that inexperience and unfamiliarity with new hospital systems and policies in these first few weeks lead to increased medical errors and mismanagement and may therefore cost lives.[1] However, there is very little evidence in favor of this widely held view in the NHS. A 2009 English study found a small but significant increase of 6% in the odds of death for inpatients admitted in the week following the first Wednesday in August compared with the week following the last Wednesday in July, whereas a previous report did not support this.[2, 3] In the United States, the changeover of resident trainee doctors occurs in July, and its negative impact on patient outcomes is often dubbed the July phenomenon.[4] Given conflicting reports of the effect of the July phenomenon on patient outcomes,[5, 6, 7] Young et al. systematically reviewed 39 studies and concluded that the July phenomenon exists in that there is increased mortality around the changeover period.[4]

It can be hypothesized that glycemic control in inpatients with diabetes would be worse in the immediate period following changeover of trainee doctors for the same reasons mentioned earlier that impact mortality. However, contrary to expectations, a recent single‐hospital study from the United States reported that changeover of resident trainee doctors did not worsen inpatient glycemic control.[8] Although the lack of confidence among trainee doctors in inpatient diabetes management has been clearly demonstrated in England,[9] the impact of August changeover of trainee doctors on inpatient glycemic control is unknown. The aim of this study was to determine whether the August changeover of trainee doctors impacted on glycemic control in inpatients with diabetes in a single English hospital.

MATERIAL AND METHODS

The study setting was a medium‐sized 550‐bed hospital in England that serves a population of approximately 360,000 residents. Capillary blood glucose (CBG) readings for adult inpatients across all wards were downloaded from the Precision Web Point‐of‐Care Data Management System (Abbott Diabetes Care Inc., Alameda, CA), an electronic database where all the CBG readings for inpatients are stored. Patient administration data were used to identify those with diabetes admitted to the hospital for at least 1 day, and only their CBG readings were included in this study. Glucometrics, a term coined by Goldberg et al., refers to standardized glucose performance metrics to assess the quality of inpatient glycemic control.[10] In this study, patient‐day glucometric measures were used, as they are considered the best indicator of inpatient glycemic control compared to other glucometrics.[10] Patient‐day glucometrics were analyzed for 4 weeks before and after Black Wednesday for the years 2012, 2013, and 2014 using Microsoft Excel 2007 (Microsoft Corp., Redmond, WA) and R version 3.1.0 (The R Foundation, Vienna, Austria). Patient‐day glucometrics analyzed were hypoglycemia (any CBG ≤2.2 mmol/L [40 mg/dL], any CBG ≤2.9 mmol/L [52 mg/dL], any CBG ≤3.9 mmol/L [72 mg/dL]), normoglycemia (mean CBG between 4 and 12 mmol/L [73–216 mg/dL]), hyperglycemia (any CBG ≥12.1 mmol/L [218 mg/dL]), and mean CBG. Proportions were compared using the z test, whereas sample means between the groups were compared by nonparametric Mann‐Whitney U tests, as per statistical literature.[11] All P values are 2‐tailed, and <0.05 was considered statistically significant.
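
To make the patient‐day glucometric calculations concrete, the sketch below shows one way such measures could be assembled and compared in R, the software used for the study. It is an illustrative reconstruction rather than the study's actual code: the data frame, column names, and example changeover date are assumptions, and the real export from the point‐of‐care system would need its own cleaning steps.

library(dplyr)

# cbg: one row per CBG reading, with assumed columns
#   patient_id, reading_datetime (POSIXct), glucose_mmol
changeover <- as.Date("2014-08-06")  # first Wednesday of August in an example year

patient_days <- cbg %>%
  mutate(day = as.Date(reading_datetime),
         period = ifelse(day < changeover, "before", "after")) %>%
  filter(day >= changeover - 28, day < changeover + 28) %>%  # 4 weeks either side
  group_by(patient_id, day, period) %>%
  summarise(n_readings = n(),
            any_hypo   = any(glucose_mmol <= 3.9),   # any reading <=3.9 mmol/L
            any_hyper  = any(glucose_mmol >= 12.1),  # any reading >=12.1 mmol/L
            mean_cbg   = mean(glucose_mmol),
            normo      = mean_cbg >= 4 & mean_cbg <= 12,
            .groups = "drop")

# Proportion of patient-days with any hypoglycemic reading, before vs after,
# compared with a z test for two proportions
hypo_tab <- with(patient_days, table(period, any_hypo))
prop.test(hypo_tab[, "TRUE"], rowSums(hypo_tab), correct = FALSE)

# Mean patient-day CBG compared with a Mann-Whitney U (Wilcoxon rank-sum) test
wilcox.test(mean_cbg ~ period, data = patient_days)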

Patient characteristics and healthcare professional's workload were identified as potential causes of variation in CBG readings. Regression analysis of covariance was used to identify and adjust for these factors when comparing mean glucose readings. Binomial logistic regression was used to adjust proportions of patients‐days with readings out of range and patient‐days with mean readings within range. Variables tested were length of stay as a proxy for severity of condition, number of patients whose CBG were measured in the hospital in a day as a proxy for the healthcare professional's workload, and location of the patient to account for variation in patient characteristics as the wards were specialty based. Goodness of fit was tested using the R2 value in the linear model, which indicates the proportion of outcome that is explained by the model. For binomial models, McFadden's pseudo R2 (pseudo‐R2McFadden) was used as advised for logistic models. McFadden's pseudo‐R2 ranges from 0 to 1, but unlike R2 in ordinary linear regression, values tend to be significantly lower: McFadden's pseudo R2 values between 0.2 and 0.4 indicate excellent fit.[12]
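
Continuing the sketch above, the adjustment models described here could be fitted along the following lines. Again this is an assumption‐laden illustration, not the authors' code: the covariate names are invented, ward is assumed to be a factor, and the binomial model is shown with a log link because Table 1 reports factor changes in relative risk (the conventional logit link would instead yield odds ratios).

# patient_days is assumed to also carry, per patient-day, the covariates
# length_of_stay, patients_monitored (hospital-wide count that day), and ward
fit_mean <- lm(mean_cbg ~ length_of_stay + patients_monitored + ward,
               data = patient_days)
summary(fit_mean)$r.squared   # R2 for the linear model

# Binomial model for the probability of any reading <=3.9 mmol/L;
# exp(coefficients) from a log link are interpretable as relative risks.
# (Log-binomial fits occasionally need starting values, e.g. via mustart.)
fit_hypo <- glm(as.integer(any_hypo) ~ length_of_stay + patients_monitored + ward,
                family = binomial(link = "log"), data = patient_days)
exp(coef(fit_hypo))           # factor change in risk per unit of each covariate

# McFadden's pseudo-R2: 1 minus the ratio of model to null-model log-likelihoods
fit_null <- update(fit_hypo, . ~ 1)
1 - as.numeric(logLik(fit_hypo)) / as.numeric(logLik(fit_null))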

RESULTS

A total of 16,870 patient‐day CBG measures in 2730 inpatients with diabetes were analyzed. The results of all regressions are presented in Table 1. The coefficients in the first model represent the effect of each covariate on mean patient‐day CBG. For example, each extra day of hospitalization was associated with a 0.02 mmol/L (0.36 mg/dL) increase in mean patient‐day reading, ceteris paribus. The remaining models indicate the change in relative risk (in this case the proportion of patient‐days) associated with the covariates. For example, in patients who were hospitalized for 3 days, the proportion of patient‐days with at least 1 CBG greater than 12 mmol/L (216 mg/dL) was 1.01 times the comparable proportion for patients who were hospitalized for 2 days. Each additional day in the hospital significantly increased the mean CBG by 0.015 mmol/L (0.27 mg/dL) and increased the risk of having at least 1 reading below 3.9 mmol/L (72 mg/dL) or above 12 mmol/L (216 mg/dL). Monitoring more patients in a day also affected outcomes, although the effect was small. Each additional patient monitored reduced mean patient‐day CBG by 0.011 mmol/L (0.198 mg/dL) and increased the proportion of patient‐days with at least 1 reading below 4 mmol/L (72 mg/dL) by a factor of 1.01. Location of the patient also significantly affected CBG readings. This could have been due to either ward or patient characteristics, but lack of data on each ward's healthcare personnel and individual patient characteristics prevented further analysis of this effect, and therefore the results were used for adjustment only. All models have relatively low predictive power, as demonstrated by the low R2 and pseudo‐R2McFadden values. In the linear model that estimated the effect of covariates on mean patient‐day CBG, the R2 was 0.0270, indicating that the covariates explained only 2.70% of the variation in the outcome. The pseudo‐R2McFadden varied between 0.0146 and 0.0540, as presented in Table 1. Although the pseudo‐R2McFadden generally had lower values than the R2 for the linear model, values of 0.0540 and below are considered to be relatively low.[12]

Effect of Three Covariates on Blood Glucose Levels
Covariate | Change in Mean CBG for Each Patient‐Day, mmol/L (mg/dL) | Change in % of Patient‐Days With Any CBG ≤2.2 mmol/L (40 mg/dL) | Change in % of Patient‐Days With Any CBG ≤2.9 mmol/L (52 mg/dL) | Change in % of Patient‐Days With Any CBG ≤3.9 mmol/L (72 mg/dL) | Change in % of Patient‐Days With Mean CBG Between 4 and 12 mmol/L (73–216 mg/dL) | Change in % of Patient‐Days With Any CBG >12 mmol/L (218 mg/dL)
  • Each column presents results for 1 outcome (model). Coefficients for mean patient‐day glucose (model 1) represent the unit change in mean patient‐day glucose associated with the corresponding covariate. Negative values indicate a reduction in mean patient‐day CBG, and vice versa. The remaining 5 outcomes indicate the factor change in relative risk, in this case proportion of patient‐days, associated with the corresponding covariate. Values between 0 and 1 indicate a reduction in relative risk, whereas values greater than 1 indicate increased relative risk. Additional days in the hospital are the effect of each additional day of hospitalization on outcomes. For example, in patients who stay in the hospital for a total of 5 days, the proportion of patient‐days with at least 1 reading over 12 mmol/L (218 mg/dL) is 1.04 (1.01^4) times the proportion of patients who stay in the hospital for 1 day only. Similarly, additional patients monitored indicate the effect of monitoring each additional patient in the hospital on the day the patient‐day reading was calculated. Ward represents the effect of staying on a particular ward. There were 31 wards in total where at least 1 patient was monitored during the study. Figures represent the range (minimum and maximum change) in outcome associated with any ward, in comparison to the baseline ward, which was chosen at random and kept constant for all 6 models. Goodness of fit for the first linear model was estimated using R2. Goodness of fit for the remaining 5 logistic models was calculated using pseudo‐R2McFadden. See text for interpretation. Abbreviations: CBG, capillary blood glucose. *Very highly significant. Highly significant. Significant.

Additional day in the hospital | 0.015 (0.27), P < 0.001* | 1.00, P = 0.605 | 1.00, P = 0.986 | 1.005, P = 0.004 | 0.99, P < 0.001* | 1.01, P < 0.001*
Additional patients monitored | −0.011 (−0.198), P < 0.001* | 1.01, P = 0.132 | 1.01, P = 0.084 | 1.01, P = 0.021 | 1.00, P = 0.128 | 0.997, P = 0.011
Ward (range) | 0.59–13.68 (10.62–246.24) | 0.37–22.71 | 0–3.62 | 0–3.10 | 0–47,124.14 | 0–4,094,900
R2/pseudo‐R2McFadden | 0.0247 | 0.0503 | 0.0363 | 0.0270 | 0.0140 | 0.0243
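
As a small arithmetic aside on reading Table 1, the per‐day coefficient compounds over a longer stay, which is how the multi‐day comparisons in the table footnote arise (an illustration using the published factor of 1.01 per additional day):

rr_per_day <- 1.01   # factor change in risk per additional hospital day (Table 1)
rr_per_day^4         # 5-day vs 1-day stay: 1.01^4, approximately 1.04
rr_per_day^(3 - 2)   # 3-day vs 2-day stay, as in the worked example: 1.01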

Table 2 summarizes outcomes for the 3 years individually. The results suggest that all indices of inpatient glycemic control that were analyzed (hypoglycemia, normoglycemia, hyperglycemia, and mean CBG) did not worsen in August compared to July of each year. The results are presented after adjustment for variation in the length of stay, number of patients monitored in a day, and location of the patient. The effect of these covariates on the difference in proportions of patient‐days with at least 1 reading out of range and patient‐days with mean readings within range was not statistically significant. However, their effect on mean patient‐day CBG measures was statistically significant, although the effect was only a small decrease (0.4 mmol/L or 7.2 mg/dL) in the mean CBG (see Supporting Table 1 in the online version of this article for unadjusted readings).

Adjusted Patient‐Day Glucometric Data for Four Weeks Before and After the August Changeover for the Years 2012, 2013, and 2014
Measure | 2012 Before Changeover | 2012 After Changeover | 2013 Before Changeover | 2013 After Changeover | 2014 Before Changeover | 2014 After Changeover
  • NOTE: Abbreviations: CBG, capillary blood glucose. *Highly significant. Significant.

No. of inpatients with diabetes whose CBG readings were analyzed | 470 | 482 | 464 | 427 | 440 | 447
No. of patient‐day CBG readings analyzed | 2917 | 3159 | 3097 | 2588 | 2484 | 2625
Mean no. of CBG readings per patient‐day (range) | 2.5 (1–27) | 2.5 (1–23), P = 0.676 | 2.6 (1–21) | 2.4 (1–18), P = 0.009* | 2.5 (1–20) | 2.4 (1–20), P = 0.028
Mean no. of CBG readings per patient‐day (range) in those where at least 1 reading was CBG ≤3.9 mmol/L (72 mg/dL) or CBG ≥12.1 mmol/L (218 mg/dL) | 3.8 (1–27) | 3.8 (1–23) | 3.7 (1–21) | 3.5 (1–18) | 3.2 (1–20) | 3.5 (1–20)
Mean no. of CBG readings per patient‐day (range) in those where all CBG readings were between 4 and 12 mmol/L (73–216 mg/dL) | 1.8 (1–27) | 1.8 (1–12) | 1.8 (1–12) | 1.8 (1–17) | 1.7 (1–11) | 1.7 (1–15)
% of patient‐days with any CBG ≤2.2 mmol/L (40 mg/dL) | 0.99% | 1.09%, P = 0.703 | 1.03% | 0.88%, P = 0.544 | 0.84% | 0.87%, P = 0.927
% of patient‐days with any CBG ≤2.9 mmol/L (52 mg/dL) | 2.53% | 2.68%, P = 0.708 | 2.63% | 1.35%, P = 0.490 | 2.24% | 2.31%, P = 0.874
% of patient‐days with any CBG ≤3.9 mmol/L (72 mg/dL) | 7.25% | 7.42%, P = 0.792 | 7.56% | 6.93%, P = 0.361 | 6.55% | 6.70%, P = 0.858
% of patient‐days with mean CBG between 4 and 12 mmol/L (73–216 mg/dL) | 79.10% | 79.89%, P = 0.446 | 78.69% | 78.58%, P = 0.924 | 78.65% | 78.61%, P = 0.973
% of patient‐days with any CBG ≥12.1 mmol/L (218 mg/dL) | 32.32% | 31.40%, P = 0.443 | 32.29% | 32.88%, P = 0.634 | 32.78% | 32.66%, P = 0.928
Median of mean CBG for each patient‐day in mmol/L (mg/dL) | 8.0 (144.6) | 7.8 (140.0) | 8.4 (151.5) | 8.3 (150.2) | 8.9 (159.8) | 8.8 (157.8)
Mean of mean CBG for each patient‐day in mmol/L (standard deviation) | 9.1 (4.0) | 8.8 (4.1), P = 0.033 | 9.4 (4.1) | 9.2 (4.0), P = 0.075 | 9.8 (4.1) | 9.6 (3.8), P = 0.189

DISCUSSION

This study shows that contrary to expectation, inpatient glycemic control did not worsen in the 4 weeks following the August changeover of trainee doctors for the years 2012, 2013, and 2014. In fact, inpatient glycemic control was marginally better in the first 4 weeks after changeover each year compared to the preceding 4 weeks before changeover. There may be several reasons for the findings in this study. First, since 2010 in this hospital and since 2012 nationally (further to direction from NHS England Medical Director Sir Bruce Keogh), it has become established practice that newly qualified trainee doctors shadow their colleagues at work for a week prior to Black Wednesday.[13, 14] The purpose of this practice, called the preparation for professional practice, is to familiarize trainee doctors with the hospital protocols and systems, improve their confidence, and potentially reduce medical errors when starting work. Second, since 2012, this hospital has also implemented the Joint British Diabetes Societies' national guidelines in managing inpatients with diabetes.[15] These guidelines are widely publicized on the changeover day during the trainee doctors' induction program. Finally, since 2012, a diabetes‐specific interactive 1‐hour educational program for trainee doctors devised by this hospital has been delivered during the changeover period; it takes them through practical and problem‐solving case scenarios related to inpatient glycemic management, in particular prevention of hypoglycemia and hospital‐acquired diabetic ketoacidosis.[16] Attendance was mandatory, and informal feedback from trainee doctors about the educational program was extremely positive.

There are several limitations in this study. It could be argued that trainee doctors have very little impact on glycemic control in inpatients with diabetes. In NHS hospitals, however, trainee doctors are often the first port of call for managing glycemic issues in inpatients both in and out of hours, and they in turn may or may not call the inpatient diabetes team where available. Therefore, trainee doctors' impact on glycemic control in inpatients with diabetes should not be underestimated. However, it is acknowledged that in this study, a number of other factors that influence inpatient glycemic control, such as individual patient characteristics, medication errors, and the knowledge and confidence levels of individual trainee doctors, were not accounted for. Nevertheless, such factors are unlikely to have been significantly different over the 3‐year period. A further limitation was the unavailability of hospital‐wide electronic CBG data prior to 2012 to determine whether changeover impacted on inpatient glycemic control before this period. Another limitation was the dependence on patient administration data to identify those with diabetes, as it is well recognized that coded data in hospital data management systems can be inaccurate, though this has significantly improved over the years.[17] Finally, the most important limitation is that this is a single‐hospital study, and so the results may not be applicable to other English hospitals. Nevertheless, the finding of this study is similar to that of the single‐hospital study from the United States.[8]

The finding that glycemic control in inpatients with diabetes did not worsen in the 4 weeks following changeover of trainee doctors compared to the 4 weeks before changeover each year suggests that appropriate forethought and planning by the deanery foundation school and the inpatient diabetes team has prevented the anticipated deterioration of glycemic control during the August changeover of trainee doctors in this English hospital.

Disclosures: R.R. and G.R. conceived and designed the study. R.R. collected data and drafted the manuscript. R.R., D.J., and G.R. analyzed and interpreted the data. D.J. provided statistical input for analysis of the data. R.R., D.J., and G.R. critically revised the manuscript for intellectual content. All authors have approved the final version. The authors report no conflicts of interest.

Files
References
  1. Innes E. Black Wednesday: today junior doctors will start work—and cause chaos.
  2. Jen MH, Bottle A, Majeed A, Bell D, Aylin P. Early in‐hospital mortality following trainee doctors' first day at work. PLoS One. 2009;4(9):e7103.
  3. Aylin P, Majeed FA. The killing season—fact or fiction? BMJ. 1994;309(6970):1690.
  4. Young JQ, Ranji SR, Wachter RM, Lee CM, Niehaus B, Auerbach AD. “July effect”: impact of the academic year‐end changeover on patient outcomes: a systematic review. Ann Intern Med. 2011;155(5):309315.
  5. Phillips DP, Barker GE. A July spike in fatal medication errors: a possible effect of new medical residents. J Gen Intern Med. 2010;25(8):774779.
  6. Inaba K, Recinos G, Teixeira PG, et al. Complications and death at the start of the new academic year: is there a July phenomenon? J Trauma. 2010;68(1):1922.
  7. Borenstein SH, Choi M, Gerstle JT, Langer JC. Errors and adverse outcomes on a surgical service: what is the role of residents? J Surg Res. 2004;122(2):162166.
  8. Nicolas K, Raroque S, Rowland DY, Chaiban JT. Is There a “July Effect” for inpatient glycemic control? Endocr Pract. 2014;20(19):919924.
  9. George JT, Warriner D, McGrane DJ, et al.; TOPDOC Diabetes Study Team. Lack of confidence among trainee doctors in the management of diabetes: the Trainees Own Perception of Delivery of Care (TOPDOC) Diabetes Study. QJM. 2011;104(9):761766.
  10. Goldberg PA, Bozzo JE, Thomas PG, et al. “Glucometrics”—assessing the quality of inpatient glucose management. Diabetes Technol Ther. 2006;8(5):560569.
  11. Newbold P, Carlson WL, Thorne B. Statistics for Business and Economics. 5th ed. Upper Saddle River, NJ: Prentice Hall; 2002.
  12. Louviere JJ, Hensher AD, Swait DJ. Stated choice methods. New York, NY: Cambridge University Press; 2000.
  13. Health Education East of England. Preparing for professional practice. Available at: https://heeoe.hee.nhs.uk/foundation_faq. Accessed October 07, 2015.
  14. Department of Health. Lives will be saved as junior doctors shadow new role 2012. Available at: https://www.gov.uk/government/news/lives‐will‐be‐saved‐as‐junior‐doctors‐shadow‐new‐role. Accessed October 29, 2014.
  15. Association of British Clinical Diabetologists. Joint British Diabetes Societies for Inpatient Care. Available at: http://www.diabetologists‐abcd.org.uk/JBDS/JBDS.htm. Accessed October 8, 2014.
  16. Taylor CG, Morris C, Rayman G. An interactive 1‐h educational programme for junior doctors, increases their confidence and improves inpatient diabetes care. Diabet Med. 2012;29(12):15741578.
  17. Burns EM, Rigby E, Mamidanna R, et al. Systematic review of discharge coding accuracy. J Public Health (Oxf). 2012;34(1):138148.
Article PDF
Issue
Journal of Hospital Medicine - 11(3)
Page Number
206-209
Sections
Files
Files
Article PDF
Article PDF

In England, the day when trainee doctors start work for the first time in their careers or rotate to a different hospital is the first Wednesday of August. This is often referred to as the Black Wednesday in the National Health Service (NHS), as it is widely perceived that inexperience and nonfamiliarity with the new hospital systems and policies in these first few weeks lead to increased medical errors and mismanagement and may therefore cost lives.[1] However, there is very little evidence in favor of this widely held view in the NHS. A 2009 English study found a small but significant increase of 6% in the odds of death for inpatients admitted in the week following the first Wednesday in August than in the week following the last Wednesday in July, whereas a previous report did not support this.[2, 3] In the United States, the resident trainee doctor's changeover occurs in July, and its negative impact on patient outcomes is often dubbed the July phenomenon.[4] With conflicting reports of the July phenomenon on patient outcomes,[5, 6, 7] Young et al. systematically reviewed 39 studies and concluded that the July phenomenon exists in that there is increased mortality around the changeover period.[4]

It can be hypothesized that glycemic control in inpatients with diabetes would be worse in the immediate period following changeover of trainee doctors for the same reasons mentioned earlier that impact mortality. However, contrary to expectations, a recent single‐hospital study from the United States reported that changeover of resident trainee doctors did not worsen inpatient glycemic control.[8] Although the lack of confidence among trainee doctors in inpatient diabetes management has been clearly demonstrated in England,[9] the impact of August changeover of trainee doctors on inpatient glycemic control is unknown. The aim of this study was to determine whether the August changeover of trainee doctors impacted on glycemic control in inpatients with diabetes in a single English hospital.

MATERIAL AND METHODS

The study setting was a medium‐sized 550‐bed hospital in England that serves a population of approximately 360,000 residents. Capillary blood glucose (CBG) readings for adult inpatients across all wards were downloaded from the Precision Web Point‐of‐Care Data Management System (Abbott Diabetes Care Inc., Alameda, CA), an electronic database where all the CBG readings for inpatients are stored. Patient administration data were used to identify those with diabetes admitted to the hospital for at least 1 day, and only their CBG readings were included in this study. Glucometrics, a term coined by Goldberg et al., refers to standardized glucose performance metrics to assess the quality of inpatient glycemic control.[10] In this study, patient‐day glucometric measures were used, as they are considered the best indicator of inpatient glycemic control compared to other glucometrics.[10] Patient‐day glucometrics were analyzed for 4 weeks before and after Black Wednesday for the years 2012, 2013, and 2014 using Microsoft Excel 2007 (Microsoft Corp., Redmond, WA) and R version 3.1.0 (The R Foundation, Vienna, Austria). Patient‐day glucometrics analyzed were hypoglycemia (any CBG 2.2 mmol/L [40 mg/dL], any CBG 2.9 mmol/L [52 mg/dL], any CBG 3.9 mmol/L [72 mg/dL]), normoglycemia (mean CBGs between 4 and 12 mmol/L [73‐216 mg/dL]), hyperglycemia (any CBG 12.1 mmol/L [218 mg/dL]), and mean CBG. Proportions were compared using the z test, whereas sample means between the groups were compared by nonparametric Mann‐Whitney U tests, as per statistical literature.[11] All P values are 2‐tailed, and <0.05 was considered statistically significant.

Patient characteristics and healthcare professionals' workload were identified as potential causes of variation in CBG readings. Regression analysis of covariance was used to identify and adjust for these factors when comparing mean glucose readings. Binomial logistic regression was used to adjust the proportions of patient-days with readings out of range and of patient-days with mean readings within range. The variables tested were length of stay as a proxy for severity of condition, the number of patients whose CBG was measured in the hospital in a day as a proxy for healthcare professionals' workload, and the location of the patient to account for variation in patient characteristics, as the wards were specialty based. Goodness of fit was tested using the R2 value in the linear model, which indicates the proportion of variation in the outcome that is explained by the model. For the binomial models, McFadden's pseudo-R2 was used, as advised for logistic models. McFadden's pseudo-R2 ranges from 0 to 1, but unlike R2 in ordinary linear regression, its values tend to be substantially lower: McFadden's pseudo-R2 values between 0.2 and 0.4 indicate excellent fit.[12]
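
McFadden's pseudo-R2 is defined as 1 minus the ratio of the fitted model's log-likelihood to that of an intercept-only model. A minimal sketch of one such adjusted logistic model and this statistic is shown below; it is an illustration rather than the authors' implementation, and the covariate names (length_of_stay, patients_monitored, ward) are assumed columns of the patient-day table from the previous sketch.

```python
# Sketch of an adjusted logistic model for one patient-day flag and its
# McFadden pseudo-R2 (illustrative covariate names, not the authors' code).
import statsmodels.formula.api as smf


def fit_flag_model(patient_days):
    """Model a patient-day flag (here: any CBG >= 12.1 mmol/L), adjusted for
    length of stay, daily monitoring workload, and ward."""
    df = patient_days.assign(flag=patient_days["hyper_12_1"].astype(int))
    model = smf.logit("flag ~ length_of_stay + patients_monitored + C(ward)", data=df)
    result = model.fit(disp=False)
    # McFadden's pseudo-R2: 1 - LL(fitted model) / LL(intercept-only model)
    pseudo_r2 = 1 - result.llf / result.llnull
    # statsmodels also exposes the same quantity directly as result.prsquared
    return result, pseudo_r2
```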

RESULTS

A total of 16,870 patient-day CBG measures in 2730 inpatients with diabetes were analyzed. The results of all regressions are presented in Table 1. The coefficients in the first model represent the effect of each covariate on mean patient-day CBG; for example, each extra day of hospitalization was associated with a 0.02 mmol/L (0.36 mg/dL) increase in mean patient-day reading, ceteris paribus. The remaining models indicate the change in relative risk (in this case, the proportion of patient-days) associated with the covariates; for example, in patients who were hospitalized for 3 days, the proportion of patient-days with at least 1 CBG greater than 12 mmol/L (216 mg/dL) was 1.01 times the comparable proportion in patients who were hospitalized for 2 days. Each additional day in the hospital significantly increased the mean CBG, by 0.015 mmol/L (0.27 mg/dL), and increased the risk of having at least 1 reading below 3.9 mmol/L (72 mg/dL) or above 12 mmol/L (216 mg/dL). Monitoring more patients in a day also affected outcomes, although the effect was small: each additional patient monitored reduced mean patient-day CBG by 0.011 mmol/L (0.198 mg/dL) and increased the proportion of patient-days with at least 1 reading below 4 mmol/L (72 mg/dL) by a factor of 1.01. Location of the patient also significantly affected CBG readings. This could have been due to either ward or patient characteristics, but the lack of data on each ward's healthcare personnel and on individual patient characteristics prevented further analysis of this effect, and the results were therefore used for adjustment only. All models had relatively low predictive power, as demonstrated by the low R2 and McFadden pseudo-R2 values. In the linear model that estimated the effect of covariates on mean patient-day CBG, the R2 was 0.0270, indicating that only 2.70% of the variation in outcome was explained by the covariates in the model. McFadden's pseudo-R2 varied between 0.0146 and 0.0540, as presented in Table 1. Although McFadden's pseudo-R2 generally takes lower values than the R2 of a linear model, values of 0.0540 and below are considered relatively low.[12]
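
Because the logistic effects are multiplicative, the per-day factors compound over additional days of stay. As a rough worked illustration, using the rounded per-day factor of about 1.01 for hyperglycemic patient-days quoted above (and echoed in the note to Table 1):

```latex
% Approximate relative risk after k additional hospital days, assuming a
% constant per-day factor of 1.01 for any CBG above 12 mmol/L:
\mathrm{RR}(k) \approx 1.01^{k}, \qquad \mathrm{RR}(4) \approx 1.01^{4} \approx 1.04
```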

Table 1. Effect of Three Covariates on Blood Glucose Levels

Outcomes (1 model per column): (1) change in mean CBG for each patient-day, mmol/L (mg/dL); (2) change in % of patient-days with any CBG ≤2.2 mmol/L (40 mg/dL); (3) change in % of patient-days with any CBG ≤2.9 mmol/L (52 mg/dL); (4) change in % of patient-days with any CBG ≤3.9 mmol/L (72 mg/dL); (5) change in % of patient-days with mean CBG between 4 and 12 mmol/L (73-216 mg/dL); (6) change in % of patient-days with any CBG >12 mmol/L (218 mg/dL).

Additional day in the hospital: (1) 0.015 (0.27), P < 0.001*; (2) 1.00, P = 0.605; (3) 1.00, P = 0.986; (4) 1.005, P = 0.004; (5) 0.99, P < 0.001*; (6) 1.01, P < 0.001*
Additional patient monitored: (1) -0.011 (-0.198), P < 0.001*; (2) 1.01, P = 0.132; (3) 1.01, P = 0.084; (4) 1.01, P = 0.021; (5) 1.00, P = 0.128; (6) 0.997, P = 0.011
Ward (range): (1) 0.59-13.68 (10.62-246.24); (2) 0.37-22.71; (3) 0-3.62; (4) 0-3.10; (5) 0-47,124.14; (6) 0-4,094,900
R2 / McFadden's pseudo-R2: (1) 0.0247; (2) 0.0503; (3) 0.0363; (4) 0.0270; (5) 0.0140; (6) 0.0243

NOTE: Each column presents results for 1 outcome (model). Coefficients for mean patient-day glucose (model 1) represent the unit change in mean patient-day glucose associated with the corresponding covariate; negative values indicate a reduction in mean patient-day CBG, and vice versa. The remaining 5 outcomes indicate the factor change in relative risk (in this case, the proportion of patient-days) associated with the corresponding covariate; values between 0 and 1 indicate a reduction in relative risk, whereas values greater than 1 indicate increased relative risk. Additional day in the hospital is the effect of each additional day of hospitalization on outcomes; for example, in patients who stay in the hospital for a total of 5 days, the proportion of patient-days with at least 1 reading over 12 mmol/L (218 mg/dL) is 1.04 (1.01^4) times the proportion in patients who stay in the hospital for 1 day only. Similarly, additional patient monitored is the effect of monitoring each additional patient in the hospital on the day the patient-day reading was calculated. Ward represents the effect of staying on a particular ward; there were 31 wards in total where at least 1 patient was monitored during the study, and figures represent the range (minimum and maximum change) in outcome associated with any ward, in comparison to the baseline ward, which was chosen at random and kept constant for all 6 models. Goodness of fit for the first (linear) model was estimated using R2; goodness of fit for the remaining 5 logistic models was calculated using McFadden's pseudo-R2. See text for interpretation. Abbreviations: CBG, capillary blood glucose. *Very highly significant. Highly significant. Significant.

Table 2 summarizes outcomes for the 3 years individually. The results suggest that all of the indices of inpatient glycemic control that were analyzed (hypoglycemia, normoglycemia, hyperglycemia, and mean CBG) did not worsen in August compared to July of the same year. The results are presented after adjustment for variation in the length of stay, the number of patients monitored in a day, and the location of the patient. Adjustment had no statistically significant effect on the differences in the proportions of patient-days with at least 1 reading out of range or with a mean reading within range; however, its effect on mean patient-day CBG measures was statistically significant, although this amounted to only a small decrease (0.4 mmol/L or 7.2 mg/dL) in the mean CBG (see Supporting Table 1 in the online version of this article for unadjusted readings).

Table 2. Adjusted Patient-Day Glucometric Data for Four Weeks Before and After the August Changeover for the Years 2012, 2013, and 2014
2012 2013 2014
Before Changeover After Changeover Before Changeover After Changeover Before Changeover After Changeover
  • NOTE: Abbreviations: CBG, capillary blood glucose. *Highly significant. Significant.

No. of inpatients with diabetes whose CBG readings were analyzed 470 482 464 427 440 447
No. of patient‐day CBG readings analyzed 2917 3159 3097 2588 2484 2625
Mean no. of CBG readings per patient-day (range) 2.5 (1-27) 2.5 (1-23), P = 0.676 2.6 (1-21) 2.4 (1-18), P = 0.009* 2.5 (1-20) 2.4 (1-20), P = 0.028
Mean no. of CBG readings per patient-day (range) in those where at least 1 reading was CBG ≤3.9 mmol/L (72 mg/dL) or CBG ≥12.1 mmol/L (218 mg/dL) 3.8 (1-27) 3.8 (1-23) 3.7 (1-21) 3.5 (1-18) 3.2 (1-20) 3.5 (1-20)
Mean no. of CBG readings per patient-day (range) in those where all CBG readings were between 4 and 12 mmol/L (73-216 mg/dL) 1.8 (1-27) 1.8 (1-12) 1.8 (1-12) 1.8 (1-17) 1.7 (1-11) 1.7 (1-15)
% of patient-days with any CBG ≤2.2 mmol/L (40 mg/dL) 0.99% 1.09%, P = 0.703 1.03% 0.88%, P = 0.544 0.84% 0.87%, P = 0.927
% of patient-days with any CBG ≤2.9 mmol/L (52 mg/dL) 2.53% 2.68%, P = 0.708 2.63% 1.35%, P = 0.490 2.24% 2.31%, P = 0.874
% of patient-days with any CBG ≤3.9 mmol/L (72 mg/dL) 7.25% 7.42%, P = 0.792 7.56% 6.93%, P = 0.361 6.55% 6.70%, P = 0.858
% of patient-days with mean CBG between 4 and 12 mmol/L (73-216 mg/dL) 79.10% 79.89%, P = 0.446 78.69% 78.58%, P = 0.924 78.65% 78.61%, P = 0.973
% of patient-days with any CBG ≥12.1 mmol/L (218 mg/dL) 32.32% 31.40%, P = 0.443 32.29% 32.88%, P = 0.634 32.78% 32.66%, P = 0.928
Median of mean CBG for each patient-day in mmol/L (mg/dL) 8.0 (144.6) 7.8 (140.0) 8.4 (151.5) 8.3 (150.2) 8.9 (159.8) 8.8 (157.8)
Mean of mean CBG for each patient-day in mmol/L (standard deviation) 9.1 (4.0) 8.8 (4.1), P = 0.033 9.4 (4.1) 9.2 (4.0), P = 0.075 9.8 (4.1) 9.6 (3.8), P = 0.189

DISCUSSION

This study shows that, contrary to expectation, inpatient glycemic control did not worsen in the 4 weeks following the August changeover of trainee doctors for the years 2012, 2013, and 2014. In fact, inpatient glycemic control was marginally better in the first 4 weeks after changeover each year compared to the preceding 4 weeks before changeover. There may be several reasons for the findings in this study. First, since 2010 in this hospital, and since 2012 nationally (further to direction from NHS England Medical Director Sir Bruce Keogh), it has become established practice that newly qualified trainee doctors shadow their colleagues at work for a week prior to Black Wednesday.[13, 14] The purpose of this practice, called "preparation for professional practice," is to familiarize trainee doctors with the hospital's protocols and systems, improve their confidence, and potentially reduce medical errors when they start work. Second, since 2012, this hospital has also implemented the Joint British Diabetes Societies' national guidelines on managing inpatients with diabetes.[15] These guidelines are widely publicized on the changeover day during the trainee doctors' induction program. Finally, since 2012, a diabetes-specific interactive 1-hour educational program for trainee doctors, devised by this hospital, has been delivered during the changeover period; it takes them through practical and problem-solving case scenarios related to inpatient glycemic management, in particular the prevention of hypoglycemia and hospital-acquired diabetic ketoacidosis.[16] Attendance was mandatory, and informal feedback from trainee doctors about the educational program was extremely positive.

There are several limitations to this study. It could be argued that trainee doctors have very little impact on glycemic control in inpatients with diabetes. However, in NHS hospitals, trainee doctors are often the first port of call for managing glycemic issues in inpatients both in and out of hours, and they in turn may or may not call the inpatient diabetes team where available. Therefore, trainee doctors' impact on glycemic control in inpatients with diabetes should not be underestimated. However, it is acknowledged that in this study a number of other factors that influence inpatient glycemic control, such as individual patient characteristics, medication errors, and the knowledge and confidence levels of individual trainee doctors, were not accounted for. Nevertheless, such factors are unlikely to have been significantly different over the 3-year period. A further limitation was the unavailability of hospital-wide electronic CBG data prior to 2012 to determine whether changeover impacted on inpatient glycemic control before this period. Another limitation was the dependence on patient administration data to identify those with diabetes, as it is well recognized that coded data in hospital data management systems can be inaccurate, though this has significantly improved over the years.[17] Finally, the most important limitation is that this is a single-hospital study, and so the results may not be applicable to other English hospitals. Nevertheless, the finding of this study is similar to that of the single-hospital study from the United States.[8]

The finding that glycemic control in inpatients with diabetes did not worsen in the 4 weeks following the changeover of trainee doctors compared to the 4 weeks before changeover each year suggests that appropriate forethought and planning by the deanery foundation school and the inpatient diabetes team have prevented the anticipated deterioration of glycemic control during the August changeover of trainee doctors in this English hospital.

Disclosures: R.R. and G.R. conceived and designed the study. R.R. collected data and drafted the manuscript. R.R., D.J., and G.R. analyzed and interpreted the data. D.J. provided statistical input for analysis of the data. R.R., D.J., and G.R. critically revised the manuscript for intellectual content. All authors have approved the final version. The authors report no conflicts of interest.

References
  1. Innes E. Black Wednesday: today junior doctors will start work—and cause …
  2. Jen MH, Bottle A, Majeed A, Bell D, Aylin P. Early in-hospital mortality following trainee doctors' first day at work. PLoS One. 2009;4(9):e7103.
  3. Aylin P, Majeed FA. The killing season—fact or fiction? BMJ. 1994;309(6970):1690.
  4. Young JQ, Ranji SR, Wachter RM, Lee CM, Niehaus B, Auerbach AD. "July effect": impact of the academic year-end changeover on patient outcomes: a systematic review. Ann Intern Med. 2011;155(5):309-315.
  5. Phillips DP, Barker GE. A July spike in fatal medication errors: a possible effect of new medical residents. J Gen Intern Med. 2010;25(8):774-779.
  6. Inaba K, Recinos G, Teixeira PG, et al. Complications and death at the start of the new academic year: is there a July phenomenon? J Trauma. 2010;68(1):19-22.
  7. Borenstein SH, Choi M, Gerstle JT, Langer JC. Errors and adverse outcomes on a surgical service: what is the role of residents? J Surg Res. 2004;122(2):162-166.
  8. Nicolas K, Raroque S, Rowland DY, Chaiban JT. Is there a "July effect" for inpatient glycemic control? Endocr Pract. 2014;20(9):919-924.
  9. George JT, Warriner D, McGrane DJ, et al.; TOPDOC Diabetes Study Team. Lack of confidence among trainee doctors in the management of diabetes: the Trainees Own Perception of Delivery of Care (TOPDOC) Diabetes Study. QJM. 2011;104(9):761-766.
  10. Goldberg PA, Bozzo JE, Thomas PG, et al. "Glucometrics"—assessing the quality of inpatient glucose management. Diabetes Technol Ther. 2006;8(5):560-569.
  11. Newbold P, Carlson WL, Thorne B. Statistics for Business and Economics. 5th ed. Upper Saddle River, NJ: Prentice Hall; 2002.
  12. Louviere JJ, Hensher DA, Swait JD. Stated Choice Methods. New York, NY: Cambridge University Press; 2000.
  13. Health Education East of England. Preparing for professional practice. Available at: https://heeoe.hee.nhs.uk/foundation_faq. Accessed October 7, 2015.
  14. Department of Health. Lives will be saved as junior doctors shadow new role. 2012. Available at: https://www.gov.uk/government/news/lives-will-be-saved-as-junior-doctors-shadow-new-role. Accessed October 29, 2014.
  15. Association of British Clinical Diabetologists. Joint British Diabetes Societies for Inpatient Care. Available at: http://www.diabetologists-abcd.org.uk/JBDS/JBDS.htm. Accessed October 8, 2014.
  16. Taylor CG, Morris C, Rayman G. An interactive 1-h educational programme for junior doctors, increases their confidence and improves inpatient diabetes care. Diabet Med. 2012;29(12):1574-1578.
  17. Burns EM, Rigby E, Mamidanna R, et al. Systematic review of discharge coding accuracy. J Public Health (Oxf). 2012;34(1):138-148.
Issue
Journal of Hospital Medicine - 11(3)
Page Number
206-209
Article Type
Display Headline
Glycemic control in inpatients with diabetes following August changeover of trainee doctors in England
Sections
Article Source
© 2015 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Gerry Rayman, MD, Consultant Physician and Lead for the National Inpatient Diabetes Audit, Diabetes Centre, The Ipswich Hospital NHS Trust, Heath Road, Ipswich, IP4 5PD, United Kingdom; Telephone: 0044‐1473704183; Fax: 0044‐1473704197; E‐mail: gerry.rayman@ipswichhospital.nhs.uk

Letter to the Editor

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
The authors reply “Changes in patient satisfaction related to hospital renovation: The experience with a new clinical building”

We thank Mr. Zilm and colleagues for their interest in our work.[1] Certainly, we did not intend to imply that well-designed buildings have little value in the efficient and patient-centered delivery of healthcare. Our main goal was to highlight (1) that patients can distinguish between facility features and actual care delivery, and poor facilities alone should not be an excuse for poor patient satisfaction; and (2) that global evaluations are more dependent on perceived quality of care than on facility features. Furthermore, we agree with many of the points raised. Certainly, patient satisfaction is but 1 measure of successful facility design, and the delivery of modern healthcare requires updated facilities. However, based on our results, we think that healthcare administrators and designers should consider the return on investment for costly features incorporated purely to improve patient satisfaction rather than for safety and staff effectiveness.

Referral patterns and patient expectations are likely very different for a tertiary care hospital like ours, and a different relationship between facility design and patient satisfaction may indeed exist for community hospitals. However, we would caution against making this assumption without supportive evidence. Furthermore, it is difficult to attribute the lack of improvement in physician scores in our study to a ceiling effect; the baseline scores were certainly not exemplary, and there was plenty of room for improvement.

We agree that there is a need for high-quality research to better understand the broader impact of healthcare design on meaningful outcomes. However, we are not impressed with the quality of much of the existing research tying physical facilities to reduced patient stress or shorter length of stay, as mentioned by Mr. Zilm and colleagues. Evidence supporting investment in expensive facilities should be evaluated with the same high standards and rigor as other healthcare decisions.

References
  1. Siddiqui ZK, Zuccarelli R, Durkin N, Wu AW, Brotman DJ. Changes in patient satisfaction related to hospital renovation: experience with a new clinical building. J Hosp Med. 2015;10(3):165-171.
Issue
Journal of Hospital Medicine - 10(11)
Page Number
764-765
Article Type
Display Headline
The authors reply “Changes in patient satisfaction related to hospital renovation: The experience with a new clinical building”
Sections
Article Source
© 2015 Society of Hospital Medicine

Letter to the Editor

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
In reference to “Changes in patient satisfaction related to hospital renovation: The experience with a new clinical building”

We compliment Dr. Siddiqui et al. on their article published in the Journal of Hospital Medicine.[1] Analysis of the role of new physical environments in care and patient satisfaction is sparse and desperately needed for this high-cost resource in healthcare delivery. A review of the original article led us to several observations and suggestions.

The focus of the study is on perceived patient satisfaction based on 2 survey tools. As noted by the authors, there are multiple factors that must be considered in relation to facilities: their potential contribution to patient infections and falls, the ability to accommodate new technology and procedures, and shifting practice models such as the move from inpatient to ambulatory care. Patient-focused care concepts are only 1 element of the design challenge and its costs.

The reputation of Johns Hopkins as a major tertiary referral center is well known internationally, and it would seem reasonable to assume that many of the patients were selected or referred to the institution based on its physicians. It does not seem unreasonable to assume that facilities would play a secondary role, and that perceived satisfaction would be high regardless of the physical environment. As noted by the authors, the transferability of this finding to community hospitals and other settings is unknown.

Patient satisfaction is an important element in design, but staff satisfaction and efficiency are also significant elements in maintaining a high‐quality healthcare system. We need tools to assess the relationship between staff retention, stress levels, and medical errors and the physical environment.

The focus of the article is on the transferability of perceived satisfaction with the environment to satisfaction with physician care. Previously published studies have shown an association between environments and views from patients' rooms and reduced patient stress levels and shorter lengths of stay. Physical space should not be disregarded as a component of effective patient care.[2]

We are committed to seeking designs that are effective, safe, and adaptable to long-term needs. We support additional research in this and other related design issues. We hope that the improvements in patient and family environments labeled as "patient focused" will continue to evolve to respond to real healthcare needs. It would be unfortunate if progress were diverted by misinterpretation of the article's findings.

References
  1. Siddiqui ZK, Zuccarelli R, Durkin N, Wu AW, Brotman DJ. Changes in patient satisfaction related to hospital renovation: experience with a new clinical building. J Hosp Med. 2015;10(3):165-171.
  2. Ulrich RS, Zimring CP, Zhu X, et al. A review of the research literature on evidence-based healthcare design. HERD. 2008;1(3):61-125.
Issue
Journal of Hospital Medicine - 10(11)
Page Number
764-764
Article Type
Display Headline
In reference to “Changes in patient satisfaction related to hospital renovation: The experience with a new clinical building”
Sections
Article Source
© 2015 Society of Hospital Medicine

ED Observation

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Emergency department observation units: Less than we bargained for?

Over the past 3 decades, emergency department observation units (EDOUs) have been increasingly implemented in the United States to supplement emergency department (ED) care in a time of increasing patient volume and hospital crowding. Given the limited availability of hospital resources, EDOUs provide emergency clinicians an extended period of time to evaluate and risk‐stratify patients without necessitating difficult‐to‐obtain outpatient follow‐up or a short‐stay hospitalization. Changes in Medicare and insurer reimbursement policies have incentivized the adoption of EDOUs, and now, over one‐third of EDs nationally offer an observation unit.[1]

Much of the observation-science literature has been condition and institution specific, showing benefits with respect to cost, quality of care, safety, and patient satisfaction.[2, 3, 4, 5] Until now, there had not been a national study of the impact of EDOUs on an important outcome: hospital admission rates. Capp and colleagues, using the National Hospital Ambulatory Medical Care Survey (NHAMCS), attempt to answer a very important question: Do EDs with observation units have lower hospital admission rates?[6] To do so, they first standardize admission rates to the sociodemographic and clinical features of the patients while adjusting for hospital-level factors. They then compare the risk-standardized hospital admission rate between EDs with and without an observation unit as reported in the NHAMCS. The authors make creative and elegant use of this publicly available national dataset to suggest that EDOUs do not decrease hospital admissions.

The authors appropriately identify some limitations of using such data to answer questions where nuanced, countervailing forces drive the outcome of interest. It is important to note the basic statistical premise that failing to disprove the null hypothesis is not the same as proving that the null hypothesis is true. In other words, although this study was not able to detect a difference between admission rates for hospitals with EDOUs and those without, it cannot be taken to mean that no relationship exists. The authors clearly state that this study was underpowered, given that the difference in ED risk-standardized hospital admission rates was small, and it is therefore at risk of type II error. In addition, unmeasured confounding may hide a true association between EDOUs and admission rates. Both static and dynamic measures of ED volume, crowding, and boarding, as well as changes in case mix or acuity, may drive adoption of EDOUs[7] while simultaneously being associated with the risk of hospitalization. Without balance between the EDs with and without observation units, or longitudinal measures of EDs over time as observation units are implemented, we are left with potentially biased estimates.

It is also important to highlight that not all EDOUs are created equal.[8] EDs may admit patients to the observation unit based on prespecified conditions or include all comers at physician discretion. Once placed in observation status, patients may or may not be managed by specific protocols to provide guidance on timing, order, and scope of testing and decision making.

Finally, care in EDOUs may be provided by emergency physicians, hospitalists, or other clinicians such as advanced practice providers (eg, physician assistants, nurse practitioners), a distinction that likely impacts the ultimate patient disposition. In fact, the NHAMCS asks, "What type of physicians make decisions for patients in this observation or clinical decision unit?" Capp et al., however, did not include this variable to further stratify the data. Although we do not know whether inclusion of this factor would ultimately have changed the results, it could have implications for how distinctions in who manages EDOUs affect admission rates.

Still, the negative findings of this study seem to raise a number of questions, which should spark a broader discussion on EDOUs. The current analysis provides an important first step toward a national understanding of EDOUs and their role in acute care. Future inquiries should account for variation in observation units and the hospitals in which they are housed as well as inclusion of meaningful outcomes beyond admission rates. A number of methodological approaches can be considered to achieve this; propensity score matching within observational data may provide better balance between facilities with and without EDOUs, whereas multicenter impact analyses using controlled before‐and‐after or cluster‐randomized trials should be considered the gold standard for studying observation unit implementation. Outcomes in these studies should include long‐term changes in health, aggregate healthcare utilization, overuse of resources that do not provide high‐value care, and impacts on how care and costs may be redistributed when patients receive more care in observation units.

Although cost containment is often touted as a cornerstone of EDOUs, it is critical to know how the costs are measured and who is paying. For example, when an option to place a patient in observation exists, might clinicians use it for some patients who do not require further evaluation and testing and could have been safely discharged?[9] This "observation creep" may arise because clinicians can use EDOUs, not because they should. Motivations may include delaying difficult disposition decisions, avoiding uncertainty or liability when discharging patients, limited access to outpatient follow-up, or a desire to use observation status to justify the existence of EDOUs within the institution. In this way, EDOUs may, in fact, provide low-value care at a time of soaring healthcare costs.

Perhaps even more perplexing is the question of how costs are shifted through the use of EDOUs.[10, 11] Much of the literature advertising their cost savings adopts only the insurers' or hospitals' perspective,[12] with 1 study estimating a potential annual cost savings of $4.6 million for each hospital, or $3 billion nationally, associated with the implementation of observation care.[5] But are medical centers just passing costs on to patients to avoid penalties and disincentives associated with short-stay hospitalizations? Both private insurers and the Centers for Medicare and Medicaid Services may deny payments for admissions deemed unnecessary. Further, under the Affordable Care Act, avoiding hospitalizations may mean fewer penalties when Medicare patients later require admission for certain conditions. As such, hospitals may find substantial incentives and cost savings associated with observation units. However, using EDOUs to avoid the Medicare readmission penalty may backfire: when less-sick patients requiring care beyond the ED are treated and discharged from observation, the more medically complex and ill patients who remain are hospitalized, and this group, potentially more likely to be rehospitalized within 30 days, may make readmission rates appear higher.

Nonetheless, because services provided during observation status are billed as an outpatient visit, patients may be liable for a proportion of the overall visit. In contrast to inpatient stays, where patients generally owe a single copay for most or all services rendered, outpatient visits typically involve à la carte billing. When accounting for costs related to professional and facility fees, medications, laboratory tests, and advanced diagnostics and procedures, patient bills may be markedly higher when patients are placed in observation status. This is especially true for patients covered by Medicare, for whom observation stays are not covered under Part A.

Research will need to simultaneously identify best practices for how EDOUs are implemented and administered while appraising their impact on patient-centered outcomes and true costs from multiple perspectives, including those of the patient, the hospital, and the healthcare system. There is reason to be optimistic about EDOUs as potentially high-value components of the acute care delivery system. However, widespread implementation of observation units on the assumption that they save hospitals and insurers money, without high-quality population studies to inform their broader impact, may undermine acceptance by patients and health-policy experts.

Disclosure

Nothing to report.

References
  1. Wiler JL, Ross MA, Ginde AA. National study of emergency department observation services. Acad Emerg Med. 2011;18(9):959-965.
  2. Baugh CW, Venkatesh AK, Bohan JS. Emergency department observation units: a clinical and financial benefit for hospitals. Health Care Manag Rev. 2011;36(1):28-37.
  3. Goodacre S, Nicholl J, Dixon S, et al. Randomised controlled trial and economic evaluation of a chest pain observation unit compared with routine care. BMJ. 2004;328(7434):254.
  4. Rydman RJ, Roberts RR, Albrecht GL, Zalenski RJ, McDermott M. Patient satisfaction with an emergency department asthma observation unit. Acad Emerg Med. 1999;6(3):178-183.
  5. Baugh CW, Venkatesh AK, Hilton JA, Samuel PA, Schuur JD, Bohan JS. Making greater use of dedicated hospital observation units for many short-stay patients could save $3.1 billion a year. Health Aff (Millwood). 2012;31(10):2314-2323.
  6. Capp R, Sun B, Boatright D, Gross C. The impact of emergency department observation units on U.S. emergency department admission rates. J Hosp Med. 2015;10(11):738-742.
  7. Hoot NR, Aronsky D. Systematic review of emergency department crowding: causes, effects, and solutions. Ann Emerg Med. 2008;52(2):126-136.
  8. Mace SE, Graff L, Mikhail M, Ross M. A national survey of observation units in the United States. Am J Emerg Med. 2003;21(7):529-533.
  9. Crenshaw LA, Lindsell CJ, Storrow AB, Lyons MS. An evaluation of emergency physician selection of observation unit patients. Am J Emerg Med. 2006;24(3):271-279.
  10. Ross EA, Bellamy FB. Reducing patient financial liability for hospitalizations: the physician role. J Hosp Med. 2010;5(3):160-162.
  11. Feng Z, Wright B, Mor V. Sharp rise in Medicare enrollees being held in hospitals for observation raises concerns about causes and consequences. Health Aff (Millwood). 2012;31(6):1251-1259.
  12. Abbass IM, Krause TM, Virani SS, Swint JM, Chan W, Franzini L. Revisiting the economic efficiencies of observation units. Manag Care. 2015;24(3):46-52.
Issue
Journal of Hospital Medicine - 10(11)
Page Number
762-763
Display Headline
Emergency department observation units: Less than we bargained for?
Article Source
© 2015 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Jahan Fahimi, MD, 505 Parnassus Avenue, L126, Box 0209, San Francisco, CA 94143; Telephone: 415-353-1684; Fax: 415-353-3531; E-mail: jahan.fahimi@ucsf.edu

ED Observation Units and Admission Rates

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
The impact of emergency department observation units on United States emergency department admission rates

Today, more than one-third of emergency departments (EDs) in the United States have affiliated observation units, where patients can stay 24 to 48 hours without being admitted to the hospital.[1] Observation units experienced significant growth in the United States from 2005 to 2007, secondary to policy changes by the Centers for Medicare and Medicaid Services (CMS) that expanded reimbursement for observation services to include any clinical condition. Furthermore, CMS implemented the Recovery Audit Contractor process, which could fine providers and facilities for inappropriate claims, with the principal method for charge recovery being inappropriate charges for short inpatient stays.

ED observation units (EDOUs) vary in the number of beds but are often located adjacent to the emergency department.[2] It is estimated that EDOUs have the capacity to care for 5% to 10% of any given ED's volume.[2] Almost half of EDOUs are protocol driven, allowing these units to discharge up to 80% of all patients within 24 hours.[1, 2] Some studies have suggested that EDOUs are associated with a decrease in overall hospitalization rates, leading to cost savings.[1] However, these studies were limited by their single-center design or their reliance on simulation. In addition, other studies show that EDOUs decrease inpatient admissions, length of stay, and costs related to specific clinical conditions such as chest pain, transient ischemic attack, and syncope.[3]

To evaluate the association between observation units and ED hospital admission rates nationally, we analyzed the largest ED-based survey, the 2010 National Hospital Ambulatory Medical Care Survey (NHAMCS). We hypothesized that observation units decrease overall hospital admissions from the ED.

METHODS

Study Design and Population

We performed a retrospective cross-sectional analysis of ED visits from 2010. This study was deemed exempt by the University of Colorado and Yale University institutional review boards. The NHAMCS is an annual, national probability sample of ambulatory visits made to nonfederal, general, and short-stay hospitals, conducted by the Centers for Disease Control and Prevention (CDC), National Center for Health Statistics. The multistage sample design has been described elsewhere.[4] The 2010 NHAMCS dataset included 350 participating hospitals (unweighted sampling rate of 90%) and a total of 34,936 patient visits.[4]

Exclusions

We excluded patients who were less than 18 years old (n = 8015; 23%); who left without being seen, left before examination completion, or left against medical advice (n = 813; 2%); who were transferred to another institution (n = 626; 1.7%); who died on arrival or died in the ED (n = 60; 0.2%); and who had missing data on discharge disposition (n = 100; 0.3%). We also excluded hospitals with fewer than 30 visits per year (n = 307; 0.9%) to maintain reliable relative standard errors, as recommended by the CDC; after these exclusions, 325 hospitals remained. Finally, we excluded hospitals with missing information on EDOUs (n = 783; 2.2%), leaving 315 hospitals in the final dataset.
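
For readers replicating this kind of work with NHAMCS public-use files, the exclusion cascade amounts to a series of visit-level filters followed by hospital-level filters. The sketch below uses made-up column names (AGE, DISPOSITION, HOSPCODE, EDOU) rather than the actual NHAMCS variable layout and is only meant to show the order of operations.

import pandas as pd

def apply_exclusions(visits: pd.DataFrame) -> pd.DataFrame:
    """Schematic version of the visit- and hospital-level exclusions."""
    excluded_dispositions = {
        "left_without_being_seen", "left_before_completion", "left_ama",
        "transferred", "dead_on_arrival", "died_in_ed",
    }
    # Visit-level exclusions: adults with a usable, non-excluded disposition.
    visits = visits[visits["AGE"] >= 18]
    visits = visits[visits["DISPOSITION"].notna()]
    visits = visits[~visits["DISPOSITION"].isin(excluded_dispositions)]

    # Hospital-level exclusions: at least 30 sampled visits and known EDOU status.
    visits_per_hospital = visits.groupby("HOSPCODE")["HOSPCODE"].transform("size")
    visits = visits[visits_per_hospital >= 30]
    visits = visits[visits["EDOU"].notna()]
    return visits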

Outcomes

The primary outcome was hospital admission, either directly from the ED or via an observation unit stay with subsequent hospital admission, expressed as the ED risk-standardized hospital admission rate (ED RSHAR).[5] This methodology allows for risk adjustment of case mix (ie, disease severity) for each hospital's ED admission rates and has been previously described in an evaluation of varying ED hospital admission rates using the same dataset.[5] To determine which hospitals had observation units, we used the following hospital survey question: "Does your ED have an observation or clinical decision unit?"

Identification of Variables

ED hospitalization rates were risk standardized for each hospital to account for the hospital's case mix and for factors such as socioeconomic status, clinical severity, and hospital characteristics. This methodology and dataset use have been previously described in detail.[5]

To account for common chief complaints leading to hospitalization and the case-mix distribution of these complaints among different hospitals, we analyzed all chief complaints and their relationship to hospital admission. We identified those associated with an admission rate that exceeded 30% and that were present in 1% or more of patient visits. The study team of researchers and clinicians determined these cutoffs to be clinically meaningful. Eight chief complaints met both criteria: chest pain and related symptoms, shortness of breath, other symptoms/probably related to psychological, general weakness, labored or difficult breathing, fainting (syncope), unconscious on arrival, and other symptoms referable to the nervous system. Chronic diseases, such as congestive heart failure, diabetes mellitus, renal disease on dialysis, and human immunodeficiency virus, were also included in the model.
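
The two screening thresholds above (admission rate greater than 30% and at least 1% of all visits) reduce to a straightforward group-by computation. The sketch below is illustrative only and uses hypothetical column names (CHIEF_COMPLAINT, ADMITTED) rather than the survey's actual variables.

import pandas as pd

def select_chief_complaints(visits: pd.DataFrame) -> list:
    """Return chief complaints with admission rate > 30% seen in >= 1% of visits."""
    summary = visits.groupby("CHIEF_COMPLAINT").agg(
        admission_rate=("ADMITTED", "mean"),
        n_visits=("ADMITTED", "size"),
    )
    summary["visit_share"] = summary["n_visits"] / len(visits)
    keep = summary[(summary["admission_rate"] > 0.30) & (summary["visit_share"] >= 0.01)]
    return keep.index.tolist()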

Hospital factors included metropolitan status, geographic region of the country (limited to Northeast, Midwest, South, and West), teaching status, and urban or rural status.[6] Based on a previous study, we derived a new variable, teaching status, by combining nonprivate hospital status with having at least 1 ED visit evaluated by a resident.

Statistical Analyses

We used SAS version 9.2 (SAS Institute, Cary, NC) for all statistical analyses. Frequencies of all variables in the model were calculated to assess the distribution of data and to quantify missing data. To avoid including highly collinear variables in the model, we calculated Spearman correlation coefficients between independent variables; high collinearity was defined as r > 0.6. No variables included in the model had high collinearity.
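
A collinearity screen of this kind can be reproduced with a Spearman correlation matrix; the sketch below is a generic Python version that works on any data frame of candidate predictors, not the authors' SAS code.

import pandas as pd

def flag_collinear_pairs(predictors: pd.DataFrame, threshold: float = 0.6):
    """List pairs of predictors whose absolute Spearman correlation exceeds the threshold."""
    corr = predictors.corr(method="spearman")
    cols = list(corr.columns)
    flagged = []
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            r = corr.iloc[i, j]
            if abs(r) > threshold:
                flagged.append((cols[i], cols[j], float(r)))
    return flagged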

To investigate the association of the candidate variables with hospitalization, we used survey logistic regression. Although some variables did not show an association with hospitalization, we felt they were clinically relevant and did not remove them from the model. Hierarchical logistic regression modeling (explained below) was used to calculate ED RSHAR based on the aforementioned selected variables associated with hospital admission.
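
As a rough analogue of this screening step, a visit-weighted logistic regression can be fit with statsmodels; the sketch below treats the survey weights as frequency weights, which is a simplification (a full NHAMCS analysis would also account for strata and primary sampling units), and the column names (ADMITTED, VISIT_WEIGHT) are hypothetical.

import pandas as pd
import statsmodels.api as sm

def screen_predictor(visits: pd.DataFrame, predictor: str) -> float:
    """Fit a weighted logistic model of admission on one candidate predictor
    and return that predictor's p-value (simplified survey analysis)."""
    X = sm.add_constant(visits[[predictor]].astype(float))
    model = sm.GLM(
        visits["ADMITTED"].astype(float),
        X,
        family=sm.families.Binomial(),
        freq_weights=visits["VISIT_WEIGHT"].astype(float),
    )
    return float(model.fit().pvalues[predictor])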

Hierarchical logistic regression models (HLRM) were used to estimate RSHAR for each hospital. This approach reflects the assumption that a hospital‐specific component exists, and that it will affect the outcomes of patients at a particular institution. This method takes into consideration the hierarchical structure of the data to account for patient clustering within hospitals, and has been used by the CMS to publicly report hospital risk‐standardized rates of mortality and readmission for acute myocardial infarction, heart failure, and pneumonia.

We used a methodology similar to that previously published.[5] In summary, each hospital's RSHAR was calculated as the ratio of the number of predicted hospital admissions to the number of expected hospital admissions for that hospital, multiplied by the national unadjusted rate of hospital admissions. We calculated the C statistic of the HLRM to assess the overall adequacy of risk prediction. To analyze the association between ED RSHAR and EDOUs, we used analysis of variance, where the dependent variable was the ED RSHAR and the independent variable of interest was the presence of an EDOU.
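
In formula terms, for hospital h, RSHAR_h equals (predicted admissions_h / expected admissions_h) multiplied by the national unadjusted admission rate, where "predicted" uses the hospital's own estimated random effect and "expected" uses the average hospital effect. The sketch below assumes those two sets of per-visit probabilities have already been obtained from the hierarchical model and simply performs the arithmetic; it is an illustration, not the authors' estimation code.

import numpy as np
import pandas as pd

def risk_standardized_admission_rate(hospital_ids, p_predicted, p_expected, admitted) -> pd.Series:
    """RSHAR per hospital = (sum of predicted probs / sum of expected probs) * national rate."""
    df = pd.DataFrame({
        "hospital": hospital_ids,
        "predicted": p_predicted,   # per-visit probability including the hospital effect
        "expected": p_expected,     # per-visit probability at the average hospital effect
    })
    national_rate = float(np.mean(admitted))  # national unadjusted admission rate
    sums = df.groupby("hospital")[["predicted", "expected"]].sum()
    return (sums["predicted"] / sums["expected"]) * national_rate

Comparing the mean RSHAR between hospitals with and without EDOUs then reduces to a one-way analysis of variance on the resulting per-hospital values.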

RESULTS

There were 24,232 ED visits from 315 hospitals in the United States in our study. Of these hospitals, 82 (20.6%) had an observation unit physically separate from the ED. Hospitals with and without observation units did not differ in hospital-level characteristics: hospital ownership, teaching status, region, and urban or rural location were not associated with the presence of an observation unit (Table 1).

Table 1. Comparison of Hospital Characteristics and the Presence of an Observation Unit
Columns: With Observation Units, W% (N = 82) / Without Observation Units, W% (N = 233)
NOTE: Abbreviation: W%, weighted.

Region of country (P = 0.54)
  Northeast 10.01 / 15.46
  Midwest 32.06 / 28.35
  South 41.84 / 36.33
  West 16.08 / 19.85
Ownership of hospitals (P = 0.4)
  Voluntary, nonprofit 77.28 / 72.35
  Government, nonfederal 18.78 / 16.11
  Private 3.94 / 11.55
Urban or rural location (P = 0.43)
  Urban 68.28 / 60.19
  Rural 31.72 / 39.81
Teaching hospital status (P = 0.56)
  Teaching hospital 63.22 / 68.28
  Nonteaching hospital 36.78 / 31.71

In addition, patient characteristics at the ED visit level did not differ between hospitals with and without observation units (Table 2). The average ED risk-standardized hospital admission rate was 13.7% (95% confidence interval [CI]: 11.3 to 16.0) for hospitals with observation units, compared with 16.0% (95% CI: 14.1 to 17.7) for hospitals without observation units (Figure 1). This difference of 2.3% (95% CI: -0.1 to 4.7) was not statistically significant.

Figure 1. Emergency department standardized admission rates for hospitals with and without observation units.
Table 2. Emergency Department Patient-Level Characteristics in Hospitals With and Without Observation Units
Columns: With Observation Units, W% (N = 6,067) / Without Observation Units, W% (N = 18,165)
NOTE: Abbreviations: HIV, human immunodeficiency virus; W%, weighted.

Sex, female 58.75 / 58.35 (P = 0.96)
Age, y 45.17 / 46.08 (P = 0.32)
Race (P = 0.75)
  Non-Hispanic white 63.54 / 66.41
  Non-Hispanic black 23.67 / 18.77
  Hispanic 9.77 / 12.47
  Other 3.02 / 2.35
Source of payment (P = 0.87)
  Private 21.90 / 21.46
  Medicare 32.73 / 30.55
  Medicaid 22.15 / 23.23
  Uninsured 18.61 / 20.25
  Unknown/missing 4.61 / 4.51
Poverty level (P = 0.50)
  <5% 13.87 / 15.31
  5%-9.9% 32.57 / 23.38
  10%-19.9% 29.81 / 36.29
  >20% 20.32 / 20.18
  Missing 3.44 / 4.83
Arrival by ambulance (P = 0.06)
  Yes 20.01 / 18.61
  No 76.12 / 76.34
  Unknown 3.87 / 5.05
Severity of illness (P = 0.58)
  Emergent 16.58 / 16.62
  Nonemergent 44.09 / 43.85
  Indeterminate 1.18 / 1.17
  Mental health, alcohol, unclassified 38.15 / 38.37
Vital signs
  Temperature (P = 0.91)
    90-95°F 0.31 / 0.36
    95.1-100.4°F 93.94 / 93.19
    100.4-107°F 1.81 / 2.11
    Missing 3.94 / 4.35
  Pulse (P = 0.60)
    10-59 bpm 3.39 / 3.93
    60-100 bpm 72.86 / 75.94
    >101 bpm 19.60 / 21.37
    Missing 4.16 / 7.67
  Systolic blood pressure (P = 0.92)
    50-90 mm Hg 0.90 / 1.02
    91-160 mm Hg 85.49 / 84.03
    161-260 mm Hg 11.90 / 12.94
    Missing 1.71 / 2.01
  Respiratory rate (P = 0.68)
    4-11 breaths/min 0.24 / 0.19
    12-20 breaths/min 87.88 / 86.40
    21-60 breaths/min 8.90 / 10.09
    Missing 2.98 / 3.32
Chief complaint associated with hospitalization
  Chest pain and related symptoms 7.37 / 6.40 (P = 0.48)
  Shortness of breath 3.24 / 3.19 (P = 0.80)
  Other symptoms/probably related to psychological 1.28 / 0.97 (P = 0.19)
  General weakness 1.19 / 1.14 (P = 0.26)
  Labored or difficult breathing 0.56 / 0.88 (P = 0.93)
  Fainting (syncope) 0.44 / 0.42 (P = 0.09)
  Unconscious on arrival 0.35 / 0.38 (P = 0.17)
  Other symptoms referable to the nervous system 0.38 / 0.35 (P = 0.81)
Chronic diseases
  Congestive heart failure 4.13 / 4.05 (P = 0.05)
  Cerebrovascular disease 4.03 / 3.33 (P = 0.04)
  Diabetes 11.15 / 11.44 (P = 0.69)
  HIV 0.51 / 0.44 (P = 0.99)
  On dialysis 1.14 / 0.96 (P = 0.25)

DISCUSSION

In this national study of hospital admissions from the ED, we did not find that hospitals with observation units had a significantly lower ED risk-standardized admission rate than hospitals without observation units. However, the difference in ED risk-standardized hospital admission rates between hospitals with and without observation units was small, and we were likely underpowered to detect a statistically significant difference.

Recently, EDOUs have received much attention, in part because of increases in their numbers and frequency of use.[7] Prior studies, which did not report admission rates that were risk standardized, have also demonstrated no difference in the admission rates among hospitals with and without observation units.[6, 8] Although this result seems counterintuitive, several possible explanations exist.

One reason that there may not be a relation between the rate of inpatient admission and the presence of an observation unit is that the introduction of an EDOU appears to change physician behavior. When the option to admit to an observation unit is present, ED physicians are 2 times more likely to disposition patients to observation status without a statistically significant change in the rate of inpatient admission.[6] Studies have demonstrated that after the introduction of an observation unit, ED physicians tend to overutilize observation among patients who previously would have been discharged, while continuing to admit patients as inpatients who meet observation criteria, which could result in an increase in cost for payers and patients.[7, 9]

Observation units that are protocol driven have been associated with the best patient outcomes including shorter length of stay, lower likelihood of subsequent inpatient admission, and decreased cost.[10] Furthermore, studies evaluating EDOUs suggest increased patient satisfaction and improved patient safety, especially for protocol‐driven EDOUs.[2] However, currently, only half of dedicated observation units are protocol driven. It is also possible that the ED inpatient admission rate does not capture the full impact of an observation unit on care delivery and quality. Observation units are more likely to be present in EDs with a higher overall patient census, longer patient lengths of stay, and higher rates of ambulance diversion.[6, 8] Unfortunately, NHAMCS does not distinguish protocol‐driven versus nonprotocol‐driven observation units. From a policy standpoint, as EDOUs continue to emerge, there is an opportunity to standardize how EDOUs function by using best practices.

This study should be evaluated in the context of its limitations, including heterogeneity in the management of EDOUs, the limited hospital-level variables available that may influence hospital admissions, and the small number of sampled visits per hospital. Because we were not able to determine which EDs used protocol-driven observation units, we could not assess the impact of having a protocol-driven observation unit on inpatient hospital admission rates. Additionally, the study may suffer from selection bias, as EDs with observation units have been shown to have higher patient volume, longer patient lengths of stay, and greater rates of ED diversion. Despite the small sample size, our risk-standardized model accounted for case mix and hospital factors associated with hospital admission rates and had a high C statistic, which indicates that the predicted probability of being admitted from the ED correlates highly with the actual outcome of being admitted from the ED. We were unable to track hospitals longitudinally to determine whether high volume drives the creation of EDOUs as a means to offset demand; however, in our analysis we did control for overall patient volume when calculating the RSHAR. Finally, we were not able to limit the dataset to conditions commonly treated in observation units because of the limited number of visits provided per hospital by NHAMCS. A power analysis (80% power, P value of 0.05) indicated that 920 hospitals would be required to detect a statistically significant difference of this size, confirming that we were underpowered.
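
As a rough check on the power statement above, a two-sample comparison of mean risk-standardized rates can be run through a standard sample-size calculation. The standard deviation below is an assumed value chosen only to make the example concrete, so the result will not reproduce the authors' figure of 920 hospitals.

from statsmodels.stats.power import TTestIndPower

# Assumed inputs (illustrative only): a 2.3-point difference in mean RSHAR and
# an assumed between-hospital standard deviation of 10 percentage points.
difference = 0.023
assumed_sd = 0.10
effect_size = difference / assumed_sd  # standardized difference (Cohen's d)

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, ratio=1.0
)
print(f"Hospitals needed per group under these assumptions: {n_per_group:.0f}")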

In this preliminary study, we did not find an association between the presence of EDOUs and ED hospital admissions. Our study was limited by an inability to analyze administrative differences and to adjust for certain hospital factors that are likely to influence inpatient admissions via the ED. Nonetheless, our findings suggest that EDOUs merit further evaluation of their potential cost savings and the quality of the care they provide. An evaluation of ED observation departmental management is also needed to assess differences in care at observation units managed by emergency physicians versus nonemergency physicians.

Acknowledgments

Disclosures: R.C., B.S., and C.G. conceived the study. R.C. conducted the statistical analysis and was supervised by B.S. and C.G. All authors analyzed the results and interpreted findings. R.C. and D.B. drafted the manuscript, and all authors contributed substantially to its revision. All authors listed have contributed sufficiently to the project to be included as authors, and all those who are qualified to be authors are listed in the author byline. This work was previously presented at the 2013 Society for Academic Emergency Medicine Annual Meeting, Dallas, Texas. Dr. Capp is funded by a translational K award: KL2 TR001080. Dr. Gross reports grants from Johnson & Johnson, Medtronic Inc., and 21st Century Oncology during the conduct of this study. In addition, he received payment from Fair Health Inc. and ASTRO outside the submitted work. Dr. Sun receives National Institutes of Health funding. No conflicts of interest, financial or other, exist. This applies to all authors.

References
  1. Wiler JL, Ross MA, Ginde AA. National study of emergency department observation services. Acad Emerg Med. 2011;18(9):959-965.
  2. Baugh CW, Venkatesh AK, Bohan JS. Emergency department observation units: a clinical and financial benefit for hospitals. Health Care Manag Rev. 2011;36(1):28-37.
  3. Roberts RR, Zalenski RJ, Mensah EK, et al. Costs of an emergency department-based accelerated diagnostic protocol vs hospitalization in patients with chest pain: a randomized controlled trial. JAMA. 1997;278(20):1670-1676.
  4. Centers for Disease Control and Prevention. National Hospital Ambulatory Medical Care Survey. Ambulatory health care data. Questionnaires, datasets, and related documentation. 2009. Available at: http://www.cdc.gov/nchs/ahcd/ahcd_questionnaires.htm. Accessed November 1, 2011.
  5. Capp R, Ross JS, Fox JP, et al. Hospital variation in risk-standardized hospital admission rates from US EDs among adults. Am J Emerg Med. 2014;32(8):837-843.
  6. Venkatesh AK, Geisler BP, Gibson Chambers JJ, Baugh CW, Bohan JS, Schuur JD. Use of observation care in US emergency departments, 2001 to 2008. PLoS One. 2011;6(9):e24326.
  7. Baugh CW, Venkatesh AK, Hilton JA, Samuel PA, Schuur JD, Bohan JS. Making greater use of dedicated hospital observation units for many short-stay patients could save $3.1 billion a year. Health Aff (Millwood). 2012;31(10):2314-2323.
  8. Mace SE, Graff L, Mikhail M, Ross M. A national survey of observation units in the United States. Am J Emerg Med. 2003;21(7):529-533.
  9. Crenshaw LA, Lindsell CJ, Storrow AB, Lyons MS. An evaluation of emergency physician selection of observation unit patients. Am J Emerg Med. 2006;24(3):271-279.
  10. Ross MA, Hockenberry JM, Mutter R, Barrett M, Wheatley M, Pitts SR. Protocol-driven emergency department observation units offer savings, shorter stays, and reduced admissions. Health Aff (Millwood). 2013;32(12):2149-2156.
Issue
Journal of Hospital Medicine - 10(11)
Page Number
738-742

This study should be evaluated in the context of limitations such as heterogeneity in the management of EDOUs, limited hospital factor variables that may influence hospital admissions, and small sample size associated with each hospital. Because we were not able to determine which EDs used protocol‐driven observation units, we were not able to determine the impact of having a protocol‐driven observation unit on inpatient hospital admission rates. Additionally, the study may suffer from a selection bias, as EDs with observation units have been shown to have higher patient volume, longer patient lengths of stay, and greater rates of ED diversion. Despite the small sample size, our risk‐standardized model accounted for case mix and hospital factors associated with hospital admission rates and had a high C statistic value, which indicates that the predicted probability of being admitted from the ED highly correlates with the actual outcome of being admitted from the ED. We were unable to track hospitals longitudinally to determine if a hospital's high volume is associated with the creation of EDOUs as a means to offset its demand. However, in our analysis, we did control for overall patient volume when calculating the RHSAR. Finally, we were not able to limit the dataset to observation unit admission conditions because of the limited number of visits provided per hospital by NHAMCS. We conducted an analysis using 80% power and a P value of 0.05 to determine the sample size needed to have statistically significant results. We would require 920 hospitals to have statistically significant results, which suggests we were underpowered to detect a statistically significant difference.

In this preliminary study, we did not find an association between the presence of EDOUs and ED hospital admissions. Our study was limited by an inability to analyze administrative differences and to adjust for certain hospital factors that are likely to influence inpatient admissions via the ED. Nonetheless, our findings suggest that EDOUs merit further evaluation of their potential cost savings and the quality of the care they provide. An evaluation of ED observation departmental management is also needed to assess differences in care at observation units managed by emergency physicians versus nonemergency physicians.

Acknowledgments

Disclosures: R.C., B.S., and C.G. conceived the study. R.C. conducted the statistical analysis and was supervised by B.S. and C.G. All authors analyzed the results and interpreted findings. R.C. and D.B. drafted the manuscript, and all authors contributed substantially to its revision. All authors listed have contributed sufficiently to the project to be included as authors, and all those who are qualified to be authors are listed in the author byline. This work was previously presented at the 2013 Society for Academic Emergency Medicine Annual Meeting, Dallas, Texas. Dr. Capp is funded by a translational K award: KL2 TR001080. Dr. Gross reports grants from Johnson & Johnson, Medtronic Inc., and 21st Century Oncology during the conduct of this study. In addition, he received payment from Fair Health Inc. and ASTRO outside the submitted work. Dr. Sun receives National Institutes of Health funding. No conflicts of interest, financial or other, exist. This applies to all authors.

Today more than one-third of emergency departments (EDs) in the United States have affiliated observation units, where patients can stay 24 to 48 hours without being admitted to the hospital.[1] Observation units grew substantially in the United States from 2005 to 2007, secondary to policy changes by the Centers for Medicare and Medicaid Services (CMS) that expanded reimbursement for observation services to include any clinical condition. Furthermore, CMS implemented the Recovery Audit Contractor process, which can fine providers and facilities for inappropriate claims, with the principal method of charge recovery being inappropriate charges for short inpatient stays.

ED observation units (EDOUs) vary in the number of beds but are often located adjacent to the ED.[2] It is estimated that EDOUs have the capacity to care for 5% to 10% of a given ED's volume.[2] Almost half of EDOUs are protocol driven, allowing these units to discharge up to 80% of all patients within 24 hours.[1, 2] Some studies have suggested that EDOUs are associated with a decrease in overall hospitalization rates, leading to cost savings.[1] However, these studies were limited by their single-center design or their reliance on simulation. Other studies show that EDOUs decrease inpatient admissions, length of stay, and costs for specific clinical conditions such as chest pain, transient ischemic attack, and syncope.[3]

To evaluate the association between observation units and hospital admission rates from the ED nationally, we analyzed the largest ED-based survey, the 2010 National Hospital Ambulatory Medical Care Survey (NHAMCS). We hypothesized that observation units decrease overall hospital admissions from the ED.

METHODS

Study Design and Population

We performed a retrospective cross-sectional analysis of ED visits from 2010. This study was exempted from review by the University of Colorado and Yale University institutional review boards. The NHAMCS is an annual, national probability sample of ambulatory visits made to nonfederal, general, and short-stay hospitals, conducted by the Centers for Disease Control and Prevention (CDC), National Center for Health Statistics. The multistage sample design has been described elsewhere.[4] The 2010 NHAMCS dataset included 350 participating hospitals (unweighted sampling rate of 90%) and a total of 34,936 patient visits.[4]

Exclusions

We excluded patients who were less than 18 years old (n = 8,015; 23%); who left without being seen, left before examination completion, or left against medical advice (n = 813; 2%); who were transferred to another institution (n = 626; 1.7%); who died on arrival or died in the ED (n = 60; 0.2%); and who had missing data on discharge disposition (n = 100; 0.3%). We then excluded visits from hospitals with fewer than 30 visits per year (n = 307; 0.9%) to maintain reliable relative standard errors, as recommended by the CDC; 325 hospitals remained after these exclusions. Finally, we excluded visits from hospitals with missing information on EDOUs (n = 783; 2.2%), leaving 315 hospitals in the final dataset.
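For readers who work with the NHAMCS files programmatically, a minimal sketch of these exclusion steps is shown below. The column names (age_years, disposition, hospital_id, has_edou) are hypothetical placeholders, not the variable names in the public-use file, and the disposition codes are illustrative.

```python
import pandas as pd

def apply_exclusions(visits: pd.DataFrame) -> pd.DataFrame:
    """Apply the visit-level exclusions described above (illustrative column names)."""
    excluded_dispositions = {
        "left_without_being_seen", "left_before_exam_complete",
        "left_against_medical_advice", "transferred_to_another_institution",
        "dead_on_arrival", "died_in_ed",
    }
    out = visits[visits["age_years"] >= 18]                      # adults only
    out = out[out["disposition"].notna()]                        # drop missing disposition
    out = out[~out["disposition"].isin(excluded_dispositions)]   # drop non-informative dispositions

    # Drop visits from hospitals contributing fewer than 30 sampled visits
    # (reliability rule), then from hospitals missing the observation-unit item.
    per_hospital = out.groupby("hospital_id")["hospital_id"].transform("size")
    out = out[per_hospital >= 30]
    out = out[out["has_edou"].notna()]
    return out
```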

Outcomes

The primary outcome was hospital admission, defined as admission from the ED or admission to an observation unit with subsequent hospital admission, summarized for each hospital as the ED risk-standardized hospital admission rate (ED RSHAR).[5] This methodology allows for risk adjustment of case mix (ie, disease severity) for each hospital's ED admission rate and has been described previously in an evaluation of varying ED hospital admission rates using the same dataset.[5] To determine which hospitals had observation units, we used the following hospital survey question: "Does your ED have an observation or clinical decision unit?"

Identification of Variables

ED hospitalization rates were risk standardized for each hospital to account for case mix and hospital factors such as socioeconomic status, clinical severity, and hospital characteristics. This methodology and the use of this dataset have been described in detail previously.[5]

To account for common chief complaints leading to hospitalization and the case-mix distribution of these complaints across hospitals, we analyzed all chief complaints and their relationship to hospital admission. We first identified chief complaints associated with an admission rate exceeding 30% that were present in 1% or more of patient visits; the study team of researchers and clinicians judged these cutoffs to be clinically meaningful. Eight chief complaints met both criteria: chest pain and related symptoms, shortness of breath, other symptoms/probably related to psychological, general weakness, labored or difficult breathing, fainting (syncope), unconscious on arrival, and other symptoms referable to the nervous system. Chronic diseases, such as congestive heart failure, diabetes mellitus, renal disease on dialysis, and human immunodeficiency virus, were also included in the model.
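A short sketch may make the screening cutoffs concrete. It assumes an analytic data frame with one row per visit, a chief_complaint column, and a binary admitted indicator; those names are illustrative, and the sketch uses unweighted counts rather than the NHAMCS survey weights.

```python
import pandas as pd

def select_chief_complaints(visits: pd.DataFrame,
                            min_admission_rate: float = 0.30,
                            min_prevalence: float = 0.01) -> list:
    """Return chief complaints with an admission rate above 30% that appear
    in at least 1% of visits (unweighted illustration)."""
    stats = visits.groupby("chief_complaint").agg(
        n_visits=("admitted", "size"),
        admission_rate=("admitted", "mean"),
    )
    stats["prevalence"] = stats["n_visits"] / len(visits)
    selected = stats[(stats["admission_rate"] > min_admission_rate) &
                     (stats["prevalence"] >= min_prevalence)]
    return selected.index.tolist()
```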

Hospital factors included metropolitan status, geographic region of the country (Northeast, Midwest, South, and West), teaching status, and urban or rural status.[6] Based on a previous study, we derived a new variable, teaching status, defined as nonprivate hospital status combined with having at least 1 ED visit evaluated by a resident.

Statistical Analyses

We used SAS version 9.2 (SAS Institute, Cary, NC) for all statistical analyses. Frequencies of all variables in the model were calculated to assess the distribution of the data and to quantify missing data. To avoid including highly collinear variables in the model, we calculated Spearman correlation coefficients between the independent variables, defining high collinearity as r > 0.6. No variables included in the model were highly collinear.
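As an illustration of this collinearity screen, the sketch below computes pairwise Spearman correlations among candidate predictors and flags any pair exceeding the 0.6 threshold; the predictors data frame is a placeholder for the model's independent variables.

```python
from itertools import combinations

import pandas as pd

def flag_collinear_pairs(predictors: pd.DataFrame, threshold: float = 0.6):
    """Return (variable_a, variable_b, spearman_r) for pairs with |r| above the threshold."""
    corr = predictors.corr(method="spearman")
    flagged = []
    for a, b in combinations(corr.columns, 2):
        r = corr.loc[a, b]
        if abs(r) > threshold:
            flagged.append((a, b, r))
    return flagged
```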

To investigate the association of the candidate variables with hospitalization, we used survey logistic regression. Some variables did not show an association with hospitalization, but we retained them in the model because we judged them to be clinically relevant. Hierarchical logistic regression modeling (explained below) was then used to calculate the ED RSHAR from the selected variables associated with hospital admission.
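The screening regressions were survey logistic regressions run in SAS; a rough Python stand-in is sketched below using a weighted binomial GLM with the NHAMCS visit weights treated as frequency weights. This ignores the survey's strata and primary sampling units, so standard errors are only approximate, and the inputs are assumed to be already-prepared arrays.

```python
import statsmodels.api as sm

def screen_candidate_variables(y, X, visit_weights):
    """Weighted logistic regression of admission (y, 0/1) on candidate predictors (X).

    visit_weights are the NHAMCS visit weights, used here as frequency
    weights; this is a simplification of the full survey design.
    """
    X = sm.add_constant(X)
    model = sm.GLM(y, X, family=sm.families.Binomial(), freq_weights=visit_weights)
    return model.fit()
```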

Hierarchical logistic regression models (HLRM) were used to estimate RSHAR for each hospital. This approach reflects the assumption that a hospital‐specific component exists, and that it will affect the outcomes of patients at a particular institution. This method takes into consideration the hierarchical structure of the data to account for patient clustering within hospitals, and has been used by the CMS to publicly report hospital risk‐standardized rates of mortality and readmission for acute myocardial infarction, heart failure, and pneumonia.

We used a methodology similar to one previously published.[5] In summary, each hospital's RSHAR was calculated as the ratio of the number of predicted hospital admissions to the number of expected hospital admissions at that hospital, multiplied by the national unadjusted rate of hospital admissions. We calculated the C statistic of the HLRM to assess the overall adequacy of risk prediction. To analyze the association between ED RSHAR and EDOUs, we used analysis of variance, with ED RSHAR as the dependent variable and the presence of an EDOU as the independent variable of interest.
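A minimal sketch of these standardization and comparison steps follows. It assumes the hierarchical model has already produced, for each visit, a predicted admission probability that includes the hospital-specific intercept (p_predicted) and an expected probability based on patient-level effects only (p_expected); those column names, along with hospital_id, admitted, and has_edou, are illustrative. The RSHAR is then the per-hospital ratio of summed predicted to summed expected admissions times the national unadjusted admission rate, discrimination is summarized with the C statistic, and hospitals with and without EDOUs are compared by one-way ANOVA.

```python
import pandas as pd
from scipy.stats import f_oneway
from sklearn.metrics import roc_auc_score

def compute_rshar(visits: pd.DataFrame) -> pd.DataFrame:
    """Per-hospital ratio of predicted to expected admissions, scaled by the
    national unadjusted admission rate."""
    national_rate = visits["admitted"].mean()
    per_hospital = visits.groupby("hospital_id").agg(
        predicted=("p_predicted", "sum"),
        expected=("p_expected", "sum"),
        has_edou=("has_edou", "first"),
    )
    per_hospital["rshar"] = (per_hospital["predicted"] / per_hospital["expected"]) * national_rate
    return per_hospital

def model_c_statistic(visits: pd.DataFrame) -> float:
    """C statistic (area under the ROC curve) of the model's predicted probabilities."""
    return roc_auc_score(visits["admitted"], visits["p_predicted"])

def compare_rshar_by_edou(per_hospital: pd.DataFrame):
    """One-way ANOVA of hospital RSHAR by presence of an EDOU."""
    with_ou = per_hospital.loc[per_hospital["has_edou"] == 1, "rshar"]
    without_ou = per_hospital.loc[per_hospital["has_edou"] == 0, "rshar"]
    return f_oneway(with_ou, without_ou)
```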

RESULTS

There were 24,232 ED visits from 315 hospitals in the United States in our study. Of these hospitals, 82 (20.6%) had an observation unit physically separate from the ED. Hospitals with and without observation units did not differ in hospital-level characteristics: there was no association between hospital ownership, teaching status, geographic region, or urban or rural location and the presence of an observation unit (Table 1).

Comparison of Hospital Characteristics and the Presence of an Observation Unit
Hospitals With Observation Units, W% (N = 82) Hospitals Without Observation Units, W% (N = 233) P Value
  • NOTE: Abbreviation: W%, weighted.

Region of country 0.54
Northeast 10.01 15.46
Midwest 32.06 28.35
South 41.84 36.33
West 16.08 19.85
Ownership of hospitals 0.4
Voluntary, nonprofit 77.28 72.35
Government, nonfederal 18.78 16.11
Private 3.94 11.55
Urban or rural location 0.43
Urban 68.28 60.19
Rural 31.72 39.81
Teaching hospital status 0.56
Teaching hospital 63.22 68.28
Nonteaching hospital 36.78 31.71

In addition, patient characteristics at the ED visit level did not differ between hospitals with and without observation units (Table 2). The average ED risk-standardized hospital admission rate for hospitals with observation units was 13.7% (95% confidence interval [CI]: 11.3 to 16.0), compared with 16.0% (95% CI: 14.1 to 17.7) for hospitals without observation units (Figure 1). This difference of 2.3% (95% CI: −0.1 to 4.7) was not statistically significant.

Figure 1. Emergency department standardized admission rates for hospitals with and without observation units.
Emergency Department Patient-Level Characteristics in Hospitals With and Without Observation Units
Hospitals With Observation Units, W% (N = 6,067) Hospitals Without Observation Units, W% (N = 18,165) P Value
  • NOTE: Abbreviations: HIV, human immunodeficiency virus; W%, weighted.

Sex, female 58.75 58.35 0.96
Age, y 45.17 46.08 0.32
Race 0.75
Non‐Hispanic white 63.54 66.41
Non‐Hispanic black 23.67 18.77
Hispanic 9.77 12.47
Other 3.02 2.35
Source of payment 0.87
Private 21.90 21.46
Medicare 32.73 30.55
Medicaid 22.15 23.23
Uninsured 18.61 20.25
Unknown/missing 4.61 4.51
Poverty level 0.50
<5% 13.87 15.31
5%–9.9% 32.57 23.38
10%–19.9% 29.81 36.29
>20% 20.32 20.18
Missing 3.44 4.83
Arrival by ambulance 0.06
Yes 20.01 18.61
No 76.12 76.34
Unknown 3.87 5.05
Severity of illness 0.58
Emergent 16.58 16.62
Nonemergent 44.09 43.85
Indeterminate 1.18 1.17
Mental health, alcohol, unclassified 38.15 38.37
Vital signs
Temperature 0.91
90–95°F 0.31 0.36
95.1–100.4°F 93.94 93.19
100.4–107°F 1.81 2.11
Missing 3.94 4.35
Pulse 0.60
10–59 bpm 3.39 3.93
60–100 bpm 72.86 75.94
>101 bpm 19.60 21.37
Missing 4.16 7.67
Systolic blood pressure 0.92
50–90 mm Hg 0.90 1.02
91–160 mm Hg 85.49 84.03
161–260 mm Hg 11.90 12.94
Missing 1.71 2.01
Respiratory rate 0.68
4–11 breaths/min 0.24 0.19
12–20 breaths/min 87.88 86.40
21–60 breaths/min 8.90 10.09
Missing 2.98 3.32
Chief complaint associated with hospitalization
Chest pain and related symptoms 7.37 6.40 0.48
Shortness of breath 3.24 3.19 0.80
Other symptoms/probably related to psychological 1.28 0.97 0.19
General weakness 1.19 1.14 0.26
Labored or difficult breathing 0.56 0.88 0.93
Fainting (syncope) 0.44 0.42 0.09
Unconscious on arrival 0.35 0.38 0.17
Other symptoms referable to the nervous system 0.38 0.35 0.81
Chronic diseases
Congestive heart failure 4.13 4.05 0.05
Cerebrovascular disease 4.03 3.33 0.04
Diabetes 11.15 11.44 0.69
HIV 0.51 0.44 0.99
On dialysis 1.14 0.96 0.25

DISCUSSION

In this national study of hospital admissions from the ED, hospitals with observation units did not have a statistically significantly lower ED risk-standardized admission rate than hospitals without observation units. The difference in ED risk-standardized hospital admission rates between the two groups was small, however, and we were likely underpowered to detect a statistically significant difference.

EDOUs have recently received much attention, in part because of increases in their numbers and frequency of use.[7] Prior studies, which did not report risk-standardized admission rates, have likewise demonstrated no difference in admission rates between hospitals with and without observation units.[6, 8] Although this result seems counterintuitive, several possible explanations exist.

One reason that there may be no relation between the rate of inpatient admission and the presence of an observation unit is that introducing an EDOU appears to change physician behavior. When the option to admit to an observation unit is available, ED physicians are twice as likely to assign patients to observation status, without a statistically significant change in the rate of inpatient admission.[6] Studies have demonstrated that, after the introduction of an observation unit, ED physicians tend to overuse observation for patients who previously would have been discharged while continuing to admit patients who meet observation criteria as inpatients, which could increase costs for payers and patients.[7, 9]

Protocol-driven observation units have been associated with the best patient outcomes, including shorter length of stay, lower likelihood of subsequent inpatient admission, and decreased cost.[10] Furthermore, studies evaluating EDOUs suggest increased patient satisfaction and improved patient safety, especially for protocol-driven EDOUs.[2] However, only half of dedicated observation units are currently protocol driven, and NHAMCS does not distinguish protocol-driven from nonprotocol-driven observation units. It is also possible that the ED inpatient admission rate does not capture the full impact of an observation unit on care delivery and quality: observation units are more likely to be present in EDs with a higher overall patient census, longer patient lengths of stay, and higher rates of ambulance diversion.[6, 8] From a policy standpoint, as EDOUs continue to emerge, there is an opportunity to standardize how they function by adopting best practices.

This study should be evaluated in the context of its limitations, which include heterogeneity in the management of EDOUs, the limited set of hospital-level variables that may influence hospital admissions, and the small number of sampled visits per hospital. Because we could not determine which EDs used protocol-driven observation units, we could not assess the impact of a protocol-driven unit on inpatient hospital admission rates. Additionally, the study may suffer from selection bias, as EDs with observation units have been shown to have higher patient volume, longer patient lengths of stay, and greater rates of ED diversion. Despite the small sample size, our risk-standardized model accounted for case mix and hospital factors associated with hospital admission rates and had a high C statistic, indicating that the predicted probability of admission from the ED correlates highly with the actual outcome. We were unable to track hospitals longitudinally to determine whether high patient volume prompts the creation of EDOUs as a means to offset demand; however, we did control for overall patient volume when calculating the RSHAR. Finally, we could not limit the dataset to conditions typically managed in observation units because of the limited number of visits provided per hospital by NHAMCS. A power analysis using 80% power and a significance level of 0.05 indicated that 920 hospitals would be required to detect a statistically significant difference, which confirms that our study was underpowered.
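For illustration, the general form of such a sample-size calculation is sketched below with statsmodels. The standardized effect size is a placeholder because the standard deviation underlying the authors' calculation is not reported, so these numbers will not reproduce the 920-hospital figure.

```python
from statsmodels.stats.power import TTestIndPower

# Illustrative inputs: the observed RSHAR difference (percentage points) and a
# hypothetical pooled standard deviation; only the form of the calculation is shown.
difference = 2.3
pooled_sd = 10.0
effect_size = difference / pooled_sd

analysis = TTestIndPower()
# ratio is the allocation ratio of hospitals without vs with an EDOU,
# set here to the observed 233/82 split as an assumption.
n_smaller_group = analysis.solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, ratio=233 / 82
)
print(f"Hospitals with an EDOU needed: {n_smaller_group:.0f}")
```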

In this preliminary study, we did not find an association between the presence of EDOUs and hospital admissions from the ED. Our study was limited by an inability to analyze administrative differences and to adjust for certain hospital factors that are likely to influence inpatient admissions via the ED. Nonetheless, our findings suggest that EDOUs merit further evaluation of their potential cost savings and the quality of the care they provide. An evaluation of ED observation unit management is also needed to assess differences in care between observation units managed by emergency physicians and those managed by nonemergency physicians.

Acknowledgments

Disclosures: R.C., B.S., and C.G. conceived the study. R.C. conducted the statistical analysis and was supervised by B.S. and C.G. All authors analyzed the results and interpreted findings. R.C. and D.B. drafted the manuscript, and all authors contributed substantially to its revision. All authors listed have contributed sufficiently to the project to be included as authors, and all those who are qualified to be authors are listed in the author byline. This work was previously presented at the 2013 Society for Academic Emergency Medicine Annual Meeting, Dallas, Texas. Dr. Capp is funded by a translational K award: KL2 TR001080. Dr. Gross reports grants from Johnson & Johnson, Medtronic Inc., and 21st Century Oncology during the conduct of this study. In addition, he received payment from Fair Health Inc. and ASTRO outside the submitted work. Dr. Sun receives National Institutes of Health funding. No conflicts of interest, financial or other, exist. This applies to all authors.

References
1. Wiler JL, Ross MA, Ginde AA. National study of emergency department observation services. Acad Emerg Med. 2011;18(9):959–965.
2. Baugh CW, Venkatesh AK, Bohan JS. Emergency department observation units: a clinical and financial benefit for hospitals. Health Care Manage Rev. 2011;36(1):28–37.
3. Roberts RR, Zalenski RJ, Mensah EK, et al. Costs of an emergency department-based accelerated diagnostic protocol vs hospitalization in patients with chest pain: a randomized controlled trial. JAMA. 1997;278(20):1670–1676.
4. Centers for Disease Control and Prevention. National Hospital Ambulatory Medical Care Survey. Ambulatory health care data. Questionnaires, datasets, and related documentation. 2009. Available at: http://www.cdc.gov/nchs/ahcd/ahcd_questionnaires.htm. Accessed November 1, 2011.
5. Capp R, Ross JS, Fox JP, et al. Hospital variation in risk-standardized hospital admission rates from US EDs among adults. Am J Emerg Med. 2014;32(8):837–843.
6. Venkatesh AK, Geisler BP, Gibson Chambers JJ, Baugh CW, Bohan JS, Schuur JD. Use of observation care in US emergency departments, 2001 to 2008. PLoS One. 2011;6(9):e24326.
7. Baugh CW, Venkatesh AK, Hilton JA, Samuel PA, Schuur JD, Bohan JS. Making greater use of dedicated hospital observation units for many short-stay patients could save $3.1 billion a year. Health Aff (Millwood). 2012;31(10):2314–2323.
8. Mace SE, Graff L, Mikhail M, Ross M. A national survey of observation units in the United States. Am J Emerg Med. 2003;21(7):529–533.
9. Crenshaw LA, Lindsell CJ, Storrow AB, Lyons MS. An evaluation of emergency physician selection of observation unit patients. Am J Emerg Med. 2006;24(3):271–279.
10. Ross MA, Hockenberry JM, Mutter R, Barrett M, Wheatley M, Pitts SR. Protocol-driven emergency department observation units offer savings, shorter stays, and reduced admissions. Health Aff (Millwood). 2013;32(12):2149–2156.
Issue
Journal of Hospital Medicine - 10(11)
Page Number
738-742
Display Headline
The impact of emergency department observation units on United States emergency department admission rates
Article Source
© 2015 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Roberta Capp, MD, Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, CO 80045; Telephone: 720-848-4270; Fax: 720-848-7374; E-mail: Roberta.Capp@ucdenver.edu