ACA exchanges limiting for patients with blood cancers, report suggests
Credit: CDC
A new report suggests that many health plans in the insurance exchanges mandated by the Affordable Care Act (ACA) will impose high out-of-pocket costs for patients with hematologic malignancies and provide limited access to specialty treatment centers.
Furthermore, although the plans analyzed appear to provide adequate coverage of hematology/oncology drugs, most require prior authorization.
In other words, the insurer must be notified before the drug is dispensed and may decline to approve it on the basis of medical evidence or other criteria.
This report, “2014 Individual Exchange Policies in Four States: An Early Look for Patients with Blood Cancers,” was commissioned by the Leukemia & Lymphoma Society and prepared by Milliman, Inc.
It provides a look at the 2014 individual benefit designs, coverage benefits, and premiums for policies sold on 4 state health insurance exchanges—California, New York, Florida, and Texas—with a focus on items of interest for patients with hematologic malignancies.
“[W]hile many new rules under ACA make obtaining insurance easier for people with blood cancers, such as prohibiting companies from turning away patients with pre-existing conditions and eliminating lifetime coverage limitations, the Milliman report identifies several areas of concern that we want cancer patients to be aware of and policymakers to address,” said Mark Velleca, MD, PhD, chief policy and advocacy officer of the Leukemia & Lymphoma Society.
Premium costs
To compare monthly premium rates, the report’s authors captured rates for a 50-year-old non-smoker with an annual income of $90,000 residing in Houston, Los Angeles, Miami, or New York City.
They found considerable variation according to plan type and location, but overall, plans were cheapest in Houston. Monthly premiums for Houston ranged from $234 to $520. The range was $274 to $566 for Los Angeles, $277 to $635 for Miami, and $307 to $896 for New York.
The ranges reflect the costs according to plan tier. Each insurer offers 4 types of health plans: Platinum (roughly 10% cost-sharing), Gold (roughly 20%), Silver (roughly 30%), and Bronze (roughly 40%).
Cost-sharing
The authors noted that the lower-tier Bronze and Silver plans require significant cost-sharing for patients. The report revealed high deductibles in the health plans, sometimes nearly as high as the out-of-pocket ceiling.
Deductibles for the Silver and Bronze plans are often at least $2,000 and $4,000, respectively, for individuals. The maximum out-of-pocket limits set for 2014 are $6,350 for an individual policy and $12,700 for a family policy.
Some insurers offer plans in some states with lower out-of-pocket limits. However, the out-of-pocket limit does not apply to non-covered drugs or non-covered treatment centers.
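To see how these pieces interact, the sketch below works through a hypothetical example (the plan parameters are assumed for illustration and are not figures from the report): a patient pays the full deductible, then coinsurance, until covered in-network spending hits the out-of-pocket ceiling, while non-covered items fall entirely outside that cap.

```python
# Hypothetical illustration of how a deductible, coinsurance, and the
# out-of-pocket (OOP) maximum interact for covered, in-network care.
# The plan parameters below are assumed for the example and are not
# taken from the Milliman report.

def patient_cost(allowed_charges: float,
                 deductible: float = 4000.0,   # assumed Bronze-level deductible
                 coinsurance: float = 0.40,    # assumed 40% cost-sharing
                 oop_max: float = 6350.0) -> float:
    """Estimate a patient's annual spending on covered, in-network care."""
    # The patient pays everything up to the deductible.
    paid = min(allowed_charges, deductible)
    # Above the deductible, the patient pays the coinsurance share.
    remaining = max(allowed_charges - deductible, 0.0)
    paid += remaining * coinsurance
    # Covered, in-network spending is capped at the OOP maximum.
    return min(paid, oop_max)

# A patient with $60,000 in covered charges hits the cap...
print(patient_cost(60_000))  # 6350.0
# ...but spending on non-covered drugs or out-of-network centers
# would be paid on top of that cap.
```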
Drug coverage
When analyzing drug coverage, the authors decided to look at 3 drugs used to treat chronic myeloid leukemia—imatinib (Gleevec), nilotinib (Tasigna), and dasatinib (Sprycel)—and 5 drugs used to treat multiple myeloma—thalidomide (Thalomid), lenalidomide (Revlimid), pomalidomide (Pomalyst), cyclophosphamide (Cytoxan), and melphalan (Alkeran).
Most of the insurers require prior authorization for these drugs, but most cover all 3 chronic myeloid leukemia drugs and a majority of the myeloma drugs. Pomalyst and brand-name Cytoxan are often not covered, although most insurers do cover generic cyclophosphamide.
Network adequacy
Most of the insurers studied do not cover all NCI-designated cancer and transplant centers, and a few do not cover any of these centers. The authors said this could discourage patient enrollment in these plans or mean that a patient’s recommended treatment is not covered.
Because out-of-network expenses are unlikely to count toward a patient’s out-of-pocket maximum, cancer patients could accumulate thousands of dollars in medical expenses without ever reaching that limit.
The authors did note, however, that satisfactory cancer care can be provided outside of NCI-designated cancer and transplant centers.
For more details, see the full report.
Should you communicate with patients online?
A lot of mythology regarding the new Health Insurance Portability and Accountability Act rules (which I discussed in detail a few months ago) continues to circulate. One of the biggest myths is that e-mail communication with patients is now forbidden, so let’s debunk that one right now.
Here is a statement lifted verbatim from the official HIPAA web site (FAQ section):
"Patients may initiate communications with a provider using e-mail. If this situation occurs, the health care provider can assume (unless the patient has explicitly stated otherwise) that e-mail communications are acceptable to the individual.
"If the provider feels the patient may not be aware of the possible risks of using unencrypted e-mail, or has concerns about potential liability, the provider can alert the patient of those risks, and let the patient decide whether to continue e-mail communications."
Okay, so it’s permissible – but is it a good idea? Aside from the obvious privacy issues, many physicians balk at taking on one more unreimbursed demand on their time. While no one denies that these concerns are real, there also are real benefits to be gained from properly managed online communication – among them increased practice efficiency, and increased quality of care and satisfaction for patients.
I started giving one of my e-mail addresses to selected patients several years ago as an experiment, hoping to take some pressure off of our overloaded telephone system. The patients were grateful for simplified and more direct access, and I appreciated the decrease in phone messages and interruptions while I was seeing patients. I also noticed a decrease in those frustrating, unnecessary office visits – you know, "The rash is completely gone, but you told me to come back ..."
In general, I have found that the advantages for everyone involved (not least my nurses and receptionists) far outweigh the problems. And now, newer technologies such as encryption, web-based messaging, and integrated online communication should go a long way toward assuaging privacy concerns.
Encryption software is now inexpensive, readily available, and easily added to most e-mail systems. Packages are available from companies such as EMC, Hilgraeve, Kryptiq, Proofpoint, Axway, and ZixCorp, among many others. (As always, I have no financial interest in any company mentioned in this column.)
Rather than simply encrypting their e-mail, increasing numbers of physicians are opting for the route taken by most online banking and shopping sites: a secure website. Patients sign onto it and send a message to your office. Physicians or staffers are notified in their regular e-mail of messages on the website, and then they post a reply to the patient on the site that can only be accessed by the patient. The patient is notified of the practice’s reply in his or her regular e-mail. Web-based messaging services can be incorporated into existing practice sites or can stand on their own. Medfusion, MyDocOnline, and RelayHealth are among the many vendors that offer secure cloud-based messaging services.
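To make that workflow concrete, here is a minimal sketch of the pattern (all names and structures are assumptions for illustration, not any vendor’s actual API). The essential design choice is that notification e-mails carry no message content, so protected health information never travels over ordinary e-mail.

```python
# Minimal sketch of the secure-messaging pattern described above.
# All names and structures are assumptions for illustration; this is
# not any vendor's actual API.

from dataclasses import dataclass, field

@dataclass
class SecureMessage:
    sender: str           # "patient" or "practice"
    recipient_email: str  # where the content-free notification goes
    body: str             # stored only on the secure site

@dataclass
class SecurePortal:
    messages: list = field(default_factory=list)

    def post(self, msg: SecureMessage) -> None:
        self.messages.append(msg)         # the message stays on the portal
        self.notify(msg.recipient_email)  # only a notification leaves it

    @staticmethod
    def notify(email: str) -> None:
        # The e-mail says only "you have a new message" -- no PHI.
        print(f"To {email}: You have a new message. Sign in to view it.")

portal = SecurePortal()
portal.post(SecureMessage("patient", "office@example-practice.com",
                          "My rash has cleared; do I still need to come in?"))
portal.post(SecureMessage("practice", "patient@example.com",
                          "No visit needed. Call if it recurs."))
```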
A big advantage of using such a service is that you’re partnering with a vendor who has to stay on top of HIPAA and other privacy requirements. Another is the option of using electronic forms, or templates. Templates ensure that patients’ messages include the information needed to process prescription refill requests, or to adequately describe their problems and provide some clinical assessment data for the physician or nurse. They also can be designed to triage messages to the front- and back-office staff, so that time is not wasted bouncing messages around the office until the proper responder is found.
Many electronic health record systems now allow you to integrate a web-based messaging system. Advantages here include the ability to view the patient’s medical record from home or anywhere else before answering the communication, and the fact that all messages automatically become a part of the patient’s record. Electronic health record vendors that provide this type of system include Allscripts, CompuGroup Medical, Cerner, Epic, GE Medical Systems, NextGen, McKesson, and Siemens.
As with any cloud-based service, insist on multiple layers of security, uninterruptible power sources, instant switchover to backup hardware in the event of a crash, and frequent, reliable backups.
Dr. Eastern practices dermatology and dermatologic surgery in Belleville, N.J. He is a clinical associate professor of dermatology at Seton Hall University School of Graduate Medical Education in South Orange, N.J. Dr. Eastern is a two-time past president of the Dermatological Society of New Jersey, and currently serves on its executive board. He holds teaching positions at several hospitals and has delivered more than 500 academic speaking presentations. He is the author of numerous articles and textbook chapters, and is a long-time monthly columnist for Skin & Allergy News.
CTLs prove effective against EBV lymphomas
Cytotoxic T lymphocytes (CTLs) targeting Epstein-Barr virus (EBV) proteins appear to be a promising treatment option for patients with aggressive lymphomas.
Researchers tested the autologous CTLs in a cohort of 50 patients with Hodgkin or non-Hodgkin lymphoma.
The treatment produced responses in about 62% of patients with relapsed or refractory disease.
And it sustained remissions in roughly 93% of patients who were at a high risk of relapse.
Catherine Bollard, MD, of the Children’s National Medical Center in Washington, DC, and her colleagues reported these results in the Journal of Clinical Oncology.
The investigators noted that about 40% of lymphoma patients have tumor cells expressing the type II latency EBV antigens latent membrane protein 1 (LMP1) and LMP2. But T cells specific for these antigens are present in low numbers and may not “recognize” the tumors they should attack.
So Dr Bollard and her colleagues decided to test the effects of infusing LMP-directed CTLs into 50 patients with EBV-positive lymphomas.
The researchers used adenoviral vector-transduced dendritic cells and EBV-transformed B-lymphoblastoid cell lines as antigen-presenting cells to activate and expand LMP-specific T cells.
For some patients, the team used an adenoviral vector encoding the LMP2 antigen alone (n=17). And for others, they used a vector encoding both LMP1 and LMP2 (n=33).
Twenty-nine of the patients were in remission when they received CTL infusions, but they were at a high risk of relapse. The remaining 21 patients had relapsed or refractory disease at the time of CTL infusion.
Twenty-seven of the 29 patients who received CTLs as an adjuvant treatment remained in remission from their disease at 3.1 years after treatment.
However, the 2-year event-free survival rate was 82% for this group of patients. None of them died of lymphoma, but 9 died from complications associated with the chemotherapy and radiation they had received.
“That’s why this research is important,” Dr Bollard said. “Patients with lymphomas traditionally have a good cure rate with chemotherapy and radiation. What kills them is the side effects of those treatments—second cancers, lung disease, and heart disease.”
Of the 21 patients with relapsed or refractory disease, 13 responded to CTL infusions. And 11 patients achieved a complete response.
In this group, the 2-year event-free survival rate was about 50%, regardless of whether patients received CTLs directed against LMP1/2 or LMP2 alone.
The investigators found that responses were associated with effector and central memory LMP1-specific CTLs but not with the patient’s type of lymphoma or lymphopenic status.
Even those patients with limited in vivo expansion of LMP-directed CTLs achieved complete responses. And this effect was associated with epitope spreading.
“This is a targeted therapeutic approach that we hope can be used early in the disease to treat relapse,” Dr Bollard said. “We saw good outcomes here. Eventually, it could be a front-line therapy.”
The researchers noted that the difficulty of tailoring CTLs for each patient has been cited as a barrier to this type of treatment. But currently available treatments can be expensive and induce severe side effects that require hospitalization.
“Although we spend some time making the cells, patients go home with few side effects and few associated hospital costs,” said study author Cliona Rooney, PhD, of the Baylor College of Medicine in Houston, Texas. “It can be less costly than chemotherapy.”
In this study, the investigators did not see any toxicities attributable to CTL infusion. One patient did have CNS deterioration 2 weeks after infusion, but this was attributed to disease progression.
And another patient developed respiratory complications about 4 weeks after a second CTL infusion. But this was attributed to an intercurrent infection, and the patient made a complete recovery.
Structured Peer Observation of Teaching
Hospitalists are increasingly responsible for educating students and housestaff in internal medicine.[1] Because the quality of teaching is an important factor in learning,[2, 3, 4] leaders in medical education have expressed concern over the rapid shift of teaching responsibilities to this new group of educators.[5, 6, 7, 8] Moreover, recent changes in duty hour restrictions have strained both student and resident education,[9, 10] necessitating the optimization of inpatient teaching.[11, 12] Many hospitalists have recently finished residency and have not had formal training in clinical teaching. Collectively, most hospital medicine groups are early in their careers, have significant clinical obligations,[13] and may not have the bandwidth or expertise to provide faculty development for improving clinical teaching.
Rationally designed and theoretically sound faculty development to improve inpatient clinical teaching is required to meet this challenge. Few reports describe faculty development focused on strengthening the teaching of hospitalists; only 3 utilized direct observation and feedback, 1 of which involved peer observation in the clinical setting.[14, 15, 16] This 2011 report described a narrative method of peer observation and feedback but did not assess the efficacy of the program.[16] To our knowledge, no study of structured peer observation and feedback to optimize hospitalist attendings' teaching has evaluated the efficacy of the intervention.
We developed a faculty development program built on peer observation of actual teaching practices, with structured feedback anchored in validated, observable measures of effective teaching. We hypothesized that participation in the program would increase confidence in key teaching skills, increase confidence in the ability to give and receive peer feedback, and strengthen attitudes toward peer observation and feedback.
METHODS
Subjects and Setting
The study was conducted at a 570‐bed academic, tertiary care medical center affiliated with an internal medicine residency program of 180 housestaff. Internal medicine ward attendings rotate during 2‐week blocks, and are asked to give formal teaching rounds 3 or 4 times a week (these sessions are distinct from teaching which may happen while rounding on patients). Ward teams are composed of 1 senior resident, 2 interns, and 1 to 2 medical students. The majority of internal medicine ward attendings are hospitalist faculty, hospital medicine fellows, or medicine chief residents. Because outpatient general internists and subspecialists only occasionally attend on the wards, we refer to ward attendings as attending hospitalists in this article. All attending hospitalists were eligible to participate if they attended on the wards at least twice during the academic year. The institutional review board at the University of California, San Francisco approved this study.
Theoretical Framework
We reviewed the literature to optimize our program in 3 conceptual domains: (1) the overall structure of the program, (2) the definition of effective teaching, and (3) the effective delivery of feedback.
Over‐reliance on didactics that are disconnected from the work environment is a weakness of traditional faculty development. Individuals may attempt to apply what they have learned, but receiving feedback on their actual workplace practices may be difficult. A recent perspective responds to this fragmentation by conceptualizing faculty development as embedded in both a faculty development community and a workplace community. This model emphasizes translating what faculty have learned in the classroom into practice, and highlights the importance of coaching in the workplace.[17] In accordance with this framework, we designed our program to reach beyond isolated workshops to effectively penetrate the workplace community.
We selected the Stanford Faculty Development Program (SFDP) framework for optimal clinical teaching as our model for recognizing and improving teaching skills. The SFDP was developed as a theory‐based intensive feedback method to improve teaching skills,[18, 19] and has been shown to improve teaching in the ambulatory[20] and inpatient settings.[21, 22] In this widely disseminated framework,[23, 24] excellent clinical teaching is grounded in optimizing observable behaviors organized around 7 domains.[18] A 26‐item instrument to evaluate clinical teaching (SFDP‐26) has been developed based on this framework[25] and has been validated in multiple settings.[26, 27] High‐quality teaching, as defined by the SFDP framework, has been correlated with improved educational outcomes in internal medicine clerkship students.[4]
Feedback is crucial to optimizing teaching,[28, 29, 30] particularly when it incorporates consultation[31] and narrative comments.[32] Peer feedback has several advantages over feedback from learners or from other non‐peer observers (such as supervisors or other evaluators). First, the observers benefit by gaining insight into their own weaknesses and potential areas for growth as teachers.[33, 34] Additionally, collegial observation and feedback may promote supportive teaching relationships between faculty.[35] Furthermore, peer review overcomes the biases that may be present in learner evaluations.[36] We established a 3‐stage feedback technique based on a previously described method.[37] In the first step, the observer elicits self‐appraisal from the speaker. Next, the observer provides specific, behaviorally anchored feedback in the form of 3 reinforcing comments and 2 constructive comments. Finally, the observer elicits a reflection on the feedback and helps develop a plan to improve teaching in future opportunities. We used a dyad model (paired participants repeatedly observe and give feedback to each other) to support mutual benefit and reciprocity between attendings.
Intervention
Using a modified Delphi approach, 5 medical education experts selected the 10 items that are most easily observable and salient to formal attending teaching rounds from the SFDP‐26 teaching assessment tool. A structured observation form was created, which included a checklist of the 10 selected items, space for note taking, and a template for narrative feedback (Figure 1).

We introduced the SFDP framework during a 2‐hour initial training session. Participants watched videos of teaching, learned to identify the 10 selected teaching behaviors, developed appropriate constructive and reinforcing comments, and practiced giving and receiving peer feedback.
Dyads were created on the basis of predetermined attending schedules. Participants were asked to observe and be observed twice during attending teaching rounds over the course of the academic year. Attending teaching rounds were defined as any preplanned didactic activity for ward teams. The structured observation forms were returned to the study coordinators after the observer had given feedback to the presenter. A copy of the feedback without the observer's notes was also given to each speaker. At the midpoint of the academic year, a refresher session was offered to reinforce those teaching behaviors that were the least frequently performed to date. All participants received a $50 incentive for participating.
Measurements and Data Collection
Participants were given a pre‐ and post‐program survey. The surveys included questions assessing confidence in ability to give feedback, receive feedback without feeling defensive, and teach effectively, as well as attitudes toward peer observation. The postprogram survey was administered at the end of the year and additionally assessed the self‐rated performance of the 10 selected teaching behaviors. A retrospective pre‐ and post‐program assessment was used for this outcome, because this method can be more reliable when participants initially may not have sufficient insight to accurately assess their own competence in specific measures.[21] The post‐program survey also included 4 questions assessing satisfaction with aspects of the program. All questions were structured as statements to which the respondent indicated degree of agreement using a 5‐point Likert scale, where 1=strongly disagree and 5=strongly agree. Structured observation forms used by participants were collected throughout the year to assess frequency of performance of the 10 selected teaching behaviors.
Statistical Analysis
We only analyzed the pre‐ and post‐program surveys that could be matched using anonymous identifiers provided by participants. For both prospective and retrospective measures, mean values and standard deviations were calculated. Wilcoxon signed rank tests for nonparametric data were performed to obtain P values. For all comparisons, a P value of <0.05 was considered significant. All comparisons were performed using Stata version 10 (StataCorp, College Station, TX).
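For readers who want to reproduce this type of analysis, here is a minimal sketch of a paired Wilcoxon signed-rank test on matched pre- and post-program Likert ratings; the study itself used Stata 10, and the SciPy call and invented ratings below are for illustration only.

```python
# Illustrative sketch of the analysis described above: a Wilcoxon
# signed-rank test on matched pre- and post-program Likert ratings.
# The study used Stata 10; this sketch uses SciPy, and the ratings
# below are invented for illustration only.

import numpy as np
from scipy.stats import wilcoxon

# Matched 5-point Likert responses (1 = strongly disagree, 5 = strongly agree)
# from the same participants before and after the program.
pre  = np.array([3, 3, 4, 2, 3, 4, 3, 3, 4, 3, 2, 3, 4, 3, 3])
post = np.array([4, 4, 4, 3, 4, 5, 4, 4, 4, 4, 3, 4, 5, 4, 4])

# Wilcoxon signed-rank test for paired, nonparametric data.
stat, p_value = wilcoxon(pre, post)
print(f"W = {stat:.1f}, P = {p_value:.3f}")  # significant if P < 0.05

# Descriptive statistics reported alongside the test.
print(f"pre:  mean = {pre.mean():.2f}, SD = {pre.std(ddof=1):.2f}")
print(f"post: mean = {post.mean():.2f}, SD = {post.std(ddof=1):.2f}")
```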
RESULTS
Participant Characteristics and Participation in Program
Of the 37 eligible attending hospitalists, 22 (59%) enrolled. Fourteen were hospital medicine faculty, 6 were hospital medicine fellows, and 2 were internal medicine chief residents. The average ± standard deviation (SD) number of years as a ward attending was 2.2 ± 2.1. Seventeen (77%) reported previously having been observed and given feedback by a colleague, and 9 (41%) reported previously observing a colleague for the purpose of giving feedback.
All 22 participants attended 1 of two 2-hour training sessions. Ten participants attended an hour-long midyear refresher session. A total of 19 observation and feedback sessions took place; 15 of them occurred in the first half of the academic year. Fifteen attending hospitalists participated in at least 1 observed teaching session. Of the 11 dyads, 6 completed at least 1 observation of each other. Two dyads performed 2 observations of each other.
Fifteen participants (68% of those enrolled) completed both the pre- and post-program surveys. Among these respondents, the average number of years attending was 2.9 ± 2.2 years. Eight (53%) reported previously having been observed and given feedback by a colleague, and 7 (47%) reported previously observing a colleague for the purpose of giving feedback. For this subset of participants, the average ± SD frequency of being observed during the program was 1.3 ± 0.7, and of observing a colleague was 1.1 ± 0.8.
Confidence in Ability to Give Feedback, Receive Feedback, and Teach Effectively
In comparison of pre‐ and post‐intervention measures, participants indicated increased confidence in their ability to evaluate their colleagues and provide feedback in all domains queried. Participants also indicated increased confidence in the efficacy of their feedback to improve their colleagues' teaching skills. Participating in the program did not significantly change pre‐intervention levels of confidence in ability to receive feedback without being defensive or confidence in ability to use feedback to improve teaching skills (Table 1).
| Statement | Mean Pre | SD | Mean Post | SD | P |
| --- | --- | --- | --- | --- | --- |
| I can accurately assess my colleagues’ teaching skills. | 3.20 | 0.86 | 4.07 | 0.59 | 0.004 |
| I can give accurate feedback to my colleagues regarding their teaching skills. | 3.40 | 0.63 | 4.20 | 0.56 | 0.002 |
| I can give feedback in a way that my colleague will not feel defensive about their teaching skills. | 3.60 | 0.63 | 4.20 | 0.56 | 0.046 |
| My feedback will improve my colleagues’ teaching skills. | 3.40 | 0.51 | 3.93 | 0.59 | 0.011 |
| I can receive feedback from a colleague without being defensive about my teaching skills. | 3.87 | 0.92 | 4.27 | 0.59 | 0.156 |
| I can use feedback from a colleague to improve my teaching skills. | 4.33 | 0.82 | 4.47 | 0.64 | 0.607 |
| I am confident in my ability to teach students and residents during attending rounds.ᵃ | 3.21 | 0.89 | 3.71 | 0.83 | 0.026 |
| I am confident in my knowledge of components of effective teaching.ᵃ | 3.21 | 0.89 | 3.71 | 0.99 | 0.035 |
| Learners regard me as an effective teacher.ᵃ | 3.14 | 0.66 | 3.64 | 0.74 | 0.033 |
Self‐Rated Performance of 10 Selected Teaching Behaviors
In retrospective assessment, participants felt that their performance had improved in all 10 teaching behaviors after the intervention. This perceived improvement reached statistical significance in 8 of the 10 selected behaviors (Table 2).
| SFDP Framework Category From Skeff et al.[18] | When I Give Attending Rounds, I Generally… | Mean Pre | SD | Mean Post | SD | P |
| --- | --- | --- | --- | --- | --- | --- |
| 1. Establishing a positive learning climate | Listen to learners | 4.27 | 0.59 | 4.53 | 0.52 | 0.046 |
| | Encourage learners to participate actively in the discussion | 4.07 | 0.70 | 4.60 | 0.51 | 0.009 |
| 2. Controlling the teaching session | Call attention to time | 3.33 | 0.98 | 4.27 | 0.59 | 0.004 |
| 3. Communicating goals | State goals clearly and concisely | 3.40 | 0.63 | 4.27 | 0.59 | 0.001 |
| | State relevance of goals to learners | 3.40 | 0.74 | 4.20 | 0.68 | 0.002 |
| 4. Promoting understanding and retention | Present well-organized material | 3.87 | 0.64 | 4.07 | 0.70 | 0.083 |
| | Use blackboard or other visual aids | 4.27 | 0.88 | 4.47 | 0.74 | 0.158 |
| 5. Evaluating the learners | Evaluate learners’ ability to apply medical knowledge to specific patients | 3.33 | 0.98 | 4.00 | 0.76 | 0.005 |
| 6. Providing feedback to the learners | Explain to learners why he/she was correct or incorrect | 3.47 | 1.13 | 4.13 | 0.64 | 0.009 |
| 7. Promoting self-directed learning | Motivate learners to learn on their own | 3.20 | 0.86 | 3.73 | 0.70 | 0.005 |
Attitudes Toward Peer Observation and Feedback
There were no significant changes in attitudes toward observation and feedback on teaching. A strong preprogram belief that observation and feedback can improve teaching skills increased slightly, but not significantly, after the program. Participants remained largely neutral in expectation of discomfort with giving or receiving peer feedback. Prior to the program, there was a slight tendency to believe that observation and feedback is more effective when done by more skilled and experienced colleagues; this belief diminished, but not significantly (Table 3).
| Statement | Mean Pre | SD | Mean Post | SD | P |
| --- | --- | --- | --- | --- | --- |
| Being observed and receiving feedback can improve my teaching skills. | 4.47 | 1.06 | 4.60 | 0.51 | 0.941 |
| My teaching skills cannot improve without observation with feedback. | 2.93 | 1.39 | 3.47 | 1.30 | 0.188 |
| Observation with feedback is most effective when done by colleagues who are expert educators. | 3.53 | 0.83 | 3.33 | 0.98 | 0.180 |
| Observation with feedback is most effective when done by colleagues who have been teaching many years. | 3.40 | 0.91 | 3.07 | 1.03 | 0.143 |
| The thought of observing and giving feedback to my colleagues makes me uncomfortable. | 3.13 | 0.92 | 3.00 | 1.13 | 0.565 |
| The thought of being observed by a colleague and receiving feedback makes me uncomfortable. | 3.20 | 0.94 | 3.27 | 1.22 | 0.747 |
Program Evaluation
The number of responses to the program evaluation questions varied. The majority of participants found the program to be very beneficial (1=strongly disagree, 5=strongly agree [n, mean ± SD]): "My teaching has improved as a result of this program" (n=14, 4.9 ± 0.3). Both giving (n=11, 4.2 ± 1.6) and receiving (n=13, 4.6 ± 1.1) feedback were felt to have improved teaching skills. There was strong agreement from respondents that they would participate in the program in the future: "I am likely to participate in this program in the future" (n=12, 4.6 ± 0.9).
DISCUSSION
Previous studies have shown that teaching skills are unlikely to improve without feedback,[28, 29, 30] yet feedback for hospitalists is usually limited to summative, end‐rotation evaluations from learners, disconnected from the teaching encounter. Our theory‐based, rationally designed peer observation and feedback program resulted in increased confidence in the ability to give feedback, receive feedback, and teach effectively. Participation did not result in negative attitudes toward giving and receiving feedback from colleagues. Participants self‐reported increased performance of important teaching behaviors. Most participants rated the program very highly, and endorsed improved teaching skills as a result of the program.
Our experience provides several lessons for other groups considering the implementation of peer feedback to strengthen teaching. First, we suggest that hospitalist groups may expect variable degrees of participation in a voluntary peer feedback program. In our program, 41% of eligible attendings did not participate. We did not specifically investigate why; we speculate that they may not have had the time, may have believed that their teaching skills were already strong, or may have been daunted by the idea of peer review. It is also possible that participants were a self-selected group who were the most motivated to strengthen their teaching. Second, we note the steep decline in the number of observations in the second half of the year. Informal assessment of the reasons for the drop-off suggested that, after initial enthusiasm for the program, navigating the logistics of observing the same peer in the second half of the year proved prohibitive for many participants. Therefore, future versions of peer feedback programs may benefit from removing the dyad requirement and encouraging all participants to observe one another whenever possible.
With these lessons in mind, we believe that a peer observation program could be implemented by other hospital medicine groups. The program does not require extensive content expertise or senior faculty but does require engaged leadership and interested and motivated faculty. Groups could identify an individual in their group with an interest in clinical teaching who could then be responsible for creating the training session (materials available upon request). We believe that with only a small upfront investment, most hospital medicine groups could use this as a model to build a peer observation program aimed at improving clinical teaching.
Our study has several limitations. As noted above, our participation rate was 59%, and the number of participating attendings declined through the year. We did not examine whether our program resulted in advances in the knowledge, skills, or attitudes of the learners; because each attending teaching session was unique, it was not possible to measure changes in learner knowledge. Our primary outcome measures relied on self‐assessment rather than higher order and more objective measures of teaching efficacy. Furthermore, our results may not be generalizable to other programs, given the heterogeneity in service structures and teaching practices across the country. This was an uncontrolled study; some of the outcomes may have naturally occurred independent of the intervention due to the natural evolution of clinical teaching. As with any educational intervention that integrates multiple strategies, we are not able to discern if the improved outcomes were the result of the initial didactic sessions, the refresher sessions, or the peer feedback itself. Serial assessments of frequency of teaching behaviors were not done due to the low number of observations in the second half of the program. Finally, our 10‐item tool derived from the validated SFDP‐26 tool is not itself a validated assessment of teaching.
We acknowledge that the increased confidence seen in our participants does not necessarily predict improved performance. Although increased confidence in core skills is a necessary step that can lead to changes in behavior, further studies are needed to determine whether the increase in faculty confidence that results from peer observation and feedback translates into improved educational outcomes.
The pressure on hospitalists to be excellent teachers is here to stay. Resources to train these faculty are scarce, yet we must prioritize faculty development in teaching to optimize the training of future physicians. Our data illustrate the benefits of peer observation and feedback. Hospitalist programs should consider this option in addressing the professional development needs of their faculty.
Acknowledgements
The authors thank Zachary Martin for administrative support for the program; Gurpreet Dhaliwal, MD, and Patricia O'Sullivan, PhD, for aid in program development; and John Amory, MD, MPH, for critical review of the manuscript. The authors thank the University of California, San Francisco Office of Medical Education for funding this work with an Educational Research Grant.
Disclosures: Funding: UCSF Office of Medical Education Educational Research Grant. Ethics approval: approved by UCSF Committee on Human Research. Previous presentations: Previous versions of this work were presented as an oral presentation at the University of California, San Francisco Medical Education Day, San Francisco, California, April 27, 2012, and as a poster presentation at the Society for General Internal Medicine 35th Annual Meeting, Orlando, Florida, May 9–12, 2012. The authors report no conflicts of interest.
1. Hospitalist involvement in internal medicine residencies. J Hosp Med. 2009;4(8):471–475.
2. Is there a relationship between attending physicians' and residents' teaching skills and students' examination scores? Acad Med. 2000;75(11):1144–1146.
3. Six‐year documentation of the association between excellent clinical teaching and improved students' examination performances. Acad Med. 2000;75(10 suppl):S62–S64.
4. Effect of clinical teaching on student performance during a medicine clerkship. Am J Med. 2001;110(3):205–209.
5. Implications of the hospitalist model for medical students' education. Acad Med. 2001;76(4):324–330.
6. On educating and being a physician in the hospitalist era. Am J Med. 2001;111(9B):45S–47S.
7. The role of hospitalists in medical education. Am J Med. 1999;107(4):305–309.
8. Challenges and opportunities in academic hospital medicine: report from the academic hospital medicine summit. J Gen Intern Med. 2009;24(5):636–641.
9. Impact of duty hour regulations on medical students' education: views of key clinical faculty. J Gen Intern Med. 2008;23(7):1084–1089.
10. The impact of resident duty hours reform on the internal medicine core clerkship: results from the clerkship directors in internal medicine survey. Acad Med. 2006;81(12):1038–1044.
11. Effects of resident work hour limitations on faculty professional lives. J Gen Intern Med. 2008;23(7):1077–1083.
12. Teaching internal medicine residents in the new era. Inpatient attending with duty‐hour regulations. J Gen Intern Med. 2006;21(5):447–452.
13. Survey of US academic hospitalist leaders about mentorship and academic activities in hospitalist groups. J Hosp Med. 2011;6(1):5–9.
14. Using observed structured teaching exercises (OSTE) to enhance hospitalist teaching during family centered rounds. J Hosp Med. 2011;6(7):423–427.
15. Investing in the future: building an academic hospitalist faculty development program. J Hosp Med. 2011;6(3):161–166.
16. How to become a better clinical teacher: a collaborative peer observation process. Med Teach. 2011;33(2):151–155.
17. Reframing research on faculty development. Acad Med. 2011;86(4):421–428.
18. The Stanford faculty development program: a dissemination approach to faculty development for medical teachers. Teach Learn Med. 1992;4(3):180–187.
19. Evaluation of a method for improving the teaching performance of attending physicians. Am J Med. 1983;75(3):465–470.
20. The impact of the Stanford Faculty Development Program on ambulatory teaching behavior. J Gen Intern Med. 2006;21(5):430–434.
21. Evaluation of a medical faculty development program: a comparison of traditional pre/post and retrospective pre/post self‐assessment ratings. Eval Health Prof. 1992;15(3):350–366.
22. Evaluation of the seminar method to improve clinical teaching. J Gen Intern Med. 1986;1(5):315–322.
23. Regional teaching improvement programs for community‐based teachers. Am J Med. 1999;106(1):76–80.
24. Improving clinical teaching. Evaluation of a national dissemination program. Arch Intern Med. 1992;152(6):1156–1161.
25. Factorial validation of a widely disseminated educational framework for evaluating clinical teachers. Acad Med. 1998;73(6):688–695.
26. Student and resident evaluations of faculty—how reliable are they? Factorial validation of an educational framework using residents' evaluations of clinician‐educators. Acad Med. 1999;74(10):S25–S27.
27. Students' global assessments of clinical teachers: a reliable and valid measure of teaching effectiveness. Acad Med. 1998;73(10 suppl):S72–S74.
28. The practice of giving feedback to improve teaching: what is effective? J Higher Educ. 1993;64(5):574–593.
29. Faculty development. A resource for clinical teachers. J Gen Intern Med. 1997;12(suppl 2):S56–S63.
30. A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education: BEME guide no. 8. Med Teach. 2006;28(6):497–526.
31. Strategies for improving teaching practices: a comprehensive approach to faculty development. Acad Med. 1998;73(4):387–396.
32. Relationship between systematic feedback to faculty and ratings of clinical teaching. Acad Med. 1996;71(10):1100–1102.
33. Lessons learned from a peer review of bedside teaching. Acad Med. 2004;79(4):343–346.
34. Evaluating an instrument for the peer review of inpatient teaching. Med Teach. 2003;25(2):131–135.
35. Twelve tips for peer observation of teaching. Med Teach. 2007;29(4):297–300.
36. Assessing the quality of teaching. Am J Med. 1999;106(4):381–384.
37. To the point: medical education reviews—providing feedback. Am J Obstet Gynecol. 2007;196(6):508–513.
© 2014 Society of Hospital Medicine
Discharge Before Noon
Late afternoon hospital discharges are thought to create admission bottlenecks in the emergency department (ED).[1] ED overcrowding increases the length of stay (LOS) of patients[2] and is a major dissatisfier for both patients and staff.[3] In our medical center, ED patients who are admitted after 1:00 pm have a 0.6‐day longer risk‐adjusted LOS than those admitted before 1:00 pm (M. Radford, MD, written communication, March 2012).
Many potential barriers to discharging patients early in the day exist.[4] However, comprehensive discharge planning favorably impacts discharge times.[5] There are limited published data regarding discharging patients early in the day. Studies have focused on improved discharge care coordination,[6, 7] in‐room display of planned discharge time,[8] and a discharge brunch.[9] In January 2012, the calendar month discharge before noon (DBN) percentage for 2 inpatient medicine units in our institution was approximately 7%, well below the organizational goal of 30%. We describe an intervention to sustainably increase the DBN percentage.
METHODS
Setting
The intervention took place on the 17th floor of New York University (NYU) Langone Medical Center's Tisch Hospital, an urban, academic medical center. All patients on the 17th floor received the intervention.
The 17th floor is composed of 2 acute care inpatient medical units, 17E and 17W. Each unit has 35 medical beds including a 16‐bed medical step down unit (SDU). Medical teams on the floor consist of 4 housestaff teams, a nurse practitioner (NP) team, and an SDU team. Each housestaff and NP team is led by a hospitalist, who is the attending of record for the majority of patients, though some patients on these teams are cared for by private attendings. Medical teams admit patients to any unit based upon bed availability. Nurses are assigned patients by acuity, not by medical team.
Intervention
Kick‐Off Event, Definition of Responsibilities, and Checklist
All stakeholders and front‐line staff were invited to a kickoff event on March 5, 2012. This event included education and discussion about the importance of a safe and early discharge from the patient and staff perspective. Roles in the discharge process were clearly defined, and a corresponding checklist was created (Table 1). The checklist was used at least once per day during afternoon interdisciplinary rounds in preparation for next‐day DBNs. Discharge date and time are communicated by the medical team to individual patients and families on the day a patient is identified for DBN. Patients and families did not receive additional orientation to the DBN initiative.
| Discharge Task | Responsible Team Member |
| --- | --- |
| MD discharge summary and medication reconciliation | Resident or NP |
| Discharge order | Resident or NP |
| Prescription(s) | Resident or NP |
| Communicate discharge date and time to patient/family | Resident/hospitalist/NP |
| Patient education/teaching | Nurse |
| RN discharge summary | Nurse |
| Patient belongings/clothing | Nurse |
| Final labs/tests | Nurse |
| Assess Foley catheter need and remove | Nurse |
| Transportation | Social worker and care manager |
| At‐home services (HHA/HA/private hire) | Social worker and care manager |
| Equipment/supplies (DME, O2, ostomy supplies) | Social worker and care manager |
Interdisciplinary Rounds and DBN Website
In the past, interdisciplinary rounds, attended by each unit's charge nurse (CN), the medical resident or NP, the hospitalist, and the team‐based social work (SW) and care management (CM) staff, occurred in the morning between 9:00 am and 10:00 am. With the DBN initiative, additional afternoon interdisciplinary rounds were held at 3:00 pm. These rounds were designed to identify the next day's DBNs. Multidisciplinary team members were asked to complete the checklist responsibilities the same day that DBNs were identified rather than waiting until the day of discharge. A DBN website was created, and CMs were asked to log anticipated DBNs on this site after 3:00 pm rounds. The website generates a daily automated email at 4:30 pm to the DBN listserv with a list of the next day's anticipated DBNs. The listserv includes all hospitalists, residents, NPs, CNs, nurse managers (NM), medical directors, bed management, building services, SWs, and CMs. Additional departments were subsequently added as the DBN initiative became standard of care.
Assistant NMs update the DBN website overnight, adding patients identified by nursing staff as a possible DBN and highlighting changes in the condition of previously identified patients. At 7:00 am, an automated update email is sent by the website to the listserv. The automated emails include the DBN checklist, key phone numbers, and useful links.
Daily Leadership Meeting, Ongoing Process Improvement, and Real‐Time Feedback
Weekdays at 11:00 am, an interdisciplinary leadership meeting occurs with the medical directors, assistant NMs, CNs, and representatives from SW, CM, and hospital administration. At this meeting, all discharges from the previous day are reviewed to identify areas for improvement and trends in barriers to DBN. The current day's expected DBNs are also reviewed to address discharge bottlenecks in real time. Daily feedback was provided via a poster displayed in staff areas with daily DBN statistics.
Reward and Recognition
At the kickoff, a prize system was announced for the conclusion of the first month of the intervention if DBN thresholds were met. Rewards included a pizza party and raffle gift certificates. To hardwire the process, these rewards were repeated at the conclusion of each of the first 3 months of the intervention.
Changes to the Floor
There were notable changes to the floor during the time of this intervention. From October 25, 2012 until January 1, 2013, the hospital was closed after evacuation due to Hurricane Sandy. Units 17E and 17W reopened on January 14, 2013. The NP team was not restarted with the reopening. All other floor processes, including the DBN interventions, were restarted. The time period of floor closure was excluded from this analysis. The initial medical center goal was a DBN rate of 30%; during the intervention period, the goal increased to 40%.
Data Collection and Analysis
Primary Outcome: Calendar Month DBN Percentage
The date and time of discharge are recorded by the discharging nurse or patient unit assistant in our electronic medical record (Epic, Madison, WI) at the time the patient leaves the unit. Utilizing NYU's cost accounting system (Enterprise Performance Systems Inc., Chicago, IL), we obtained discharge date and time for inpatients discharged from units 17E and 17W between June 1, 2011 and March 4, 2012 (the baseline period) and between March 5, 2012 and June 30, 2013 (the intervention period). Data from October 25, 2012 through the end of January 2013 were excluded due to the hospital closure from Hurricane Sandy. The analysis includes 8 months of baseline data and 13 months of intervention data (not counting the excluded months from hospital closure), allowing us to measure the extent to which improvement was sustained. To match organizational criteria for DBN, we excluded patients in the observation patient class, patients who died, and inpatient hospice patients.
Patients were identified as DBNs if the discharge time was before 12:01 pm, in accordance with our medical center administration's definition of DBN. Calendar month DBN percentage was calculated by dividing the number of DBN patients during the calendar month by the total number of discharged patients during the calendar month. The proportion of DBNs in the baseline population was compared to the proportion of DBNs in the intervention population. Statistical significance for the change in DBN was evaluated by use of a 2‐tailed z test.
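To make the primary-outcome definition concrete, the short sketch below shows one way to compute calendar-month DBN percentages from discharge timestamps. It is an illustrative Python example under assumed data structures, not the authors' code.

```python
# Illustrative sketch only (not the study's code): calendar-month DBN percentage
# from discharge timestamps, with DBN defined as a discharge before 12:01 pm.
from collections import defaultdict
from datetime import datetime, time

DBN_CUTOFF = time(12, 1)  # strictly earlier than 12:01 pm counts as DBN

def monthly_dbn_percentage(discharge_times):
    """discharge_times: iterable of datetime objects, one per discharged patient."""
    totals = defaultdict(int)
    dbn_counts = defaultdict(int)
    for dt in discharge_times:
        month = (dt.year, dt.month)
        totals[month] += 1
        if dt.time() < DBN_CUTOFF:
            dbn_counts[month] += 1
    return {m: 100.0 * dbn_counts[m] / totals[m] for m in sorted(totals)}

# Hypothetical example: two March 2012 discharges, one before noon
print(monthly_dbn_percentage([datetime(2012, 3, 6, 11, 45),
                              datetime(2012, 3, 6, 15, 20)]))  # {(2012, 3): 50.0}
```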
Secondary Outcomes: Observed‐to‐Expected LOS and 30‐Day Readmission Rate
Expected LOS was provided by the University Health Consortium (UHC). UHC calculates a risk‐adjusted LOS for each patient by assigning a severity of illness, selecting a patient population to serve as the basis of the model, and using statistical regression to assign an expected LOS in days. Observed‐to‐expected (O/E) LOS for each patient is calculated by dividing the observed (actual) LOS in days by the expected LOS in days. The average O/E LOS for all patients in the baseline period was compared to the average O/E LOS for all patients in the intervention period. This average was calculated by summing the O/E LOS of all patients in each time period and dividing by the total number of patients. In accordance with our medical center administration's reporting standards, we report the mean of the O/E LOS. For statistical evaluation of this non‐normally distributed continuous variable, we also report the median O/E LOS for the baseline and intervention time periods and use the Wilcoxon rank sum test to evaluate for statistical significance.
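As a hedged illustration of the comparison described above, the sketch below computes the mean and median O/E LOS for two groups and applies the Wilcoxon rank sum test. The ratio values are invented for demonstration and are not study data.

```python
# Minimal sketch, assuming per-patient O/E LOS ratios are available as arrays;
# the values here are invented for illustration, not study data.
import numpy as np
from scipy.stats import ranksums

baseline_oe = np.array([0.62, 0.81, 1.10, 1.48, 0.93, 1.25])
intervention_oe = np.array([0.55, 0.74, 0.92, 1.21, 0.80, 1.02])

for label, oe in [("baseline", baseline_oe), ("intervention", intervention_oe)]:
    print(f"{label}: mean {oe.mean():.2f}, median {np.median(oe):.2f}")

# Wilcoxon rank sum test for the non-normally distributed ratios
stat, p = ranksums(baseline_oe, intervention_oe)
print(f"rank-sum statistic = {stat:.2f}, P = {p:.3f}")
```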
Readmission cases are identified by the clinical quality and effectiveness department at our medical center using the UHC definition: all patients readmitted to a hospital within 30 days of discharge from the index admission. The 30‐day readmission rate is calculated by dividing the total number of cases identified as readmissions within 30 days by the total number of admissions over the same time period. This rate was obtained on a calendar‐month basis for patients discharged from the index admission before noon, after noon, and in total. These rates were averaged over the baseline and intervention periods. The proportion of 30‐day readmissions in the baseline population was compared to the proportion of 30‐day readmissions in the intervention population, and statistical significance was evaluated by use of a 2‐tailed z test.
RESULTS
Primary Outcome: Calendar Month DBN Percentage
The calendar month DBN percentage increased in the first month of the intervention, from 16% to 42% (Figure 1). This improvement was sustained throughout the intervention, with an average calendar month DBN percentage of 38% over the 13‐month intervention period. A 2‐tailed z test comparing the pre‐intervention proportion of patients who were DBN (11%) with the post‐intervention proportion (38%) showed a statistically significant change (z score 23.6, P = 0.0002). Units 17E and 17W had a combined 2536 total discharges in the baseline period, with 265 patients discharged before noon. In the intervention period, there were 3277 total discharges, with 1236 patients discharged before noon. The average time of discharge moved 1 hour and 31 minutes earlier, from 3:43 pm in the baseline period to 2:13 pm in the intervention period.
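The reported z statistic can be checked from the discharge counts in this paragraph with a standard two-proportion z test; the brief Python sketch below uses only the numbers given above.

```python
# Two-proportion z test using the counts reported above:
# 265/2536 DBN discharges at baseline vs 1236/3277 during the intervention.
from math import sqrt

x1, n1 = 265, 2536    # baseline: DBN discharges, total discharges
x2, n2 = 1236, 3277   # intervention: DBN discharges, total discharges

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se
print(f"p1 = {p1:.3f}, p2 = {p2:.3f}, z = {z:.1f}")  # z is approximately 23.6
```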

Secondary Outcomes: O/E LOS and 30‐Day Readmission Rate
The average O/E LOS during the baseline period was 1.06, and this declined during the intervention period to 0.96 [Table 2]. Using the Wilcoxon rank sum test, we found a statistically significant difference between the O/E LOS in the baseline (median 0.82) and intervention (median 0.76) periods (P = 0.0001). The average 30‐day readmission rate declined from 14.3% during the baseline to 13.1% during the intervention period. The change in 30‐day readmission rate was not statistically significant (z score=1.3132, P=0.1902). The change in readmission rate was similar and not statistically significant whether the patient was discharged before (13.6% baseline vs 12.6% intervention, P=0.66) or after noon (14.4% baseline vs 13.4% intervention, P=0.35) (Figure 2).
| Units 17E and 17W | Baseline Period | Intervention Period | Change | P Value |
| --- | --- | --- | --- | --- |
| O/E LOS, mean (median) | 1.06 (0.82) | 0.96 (0.76) | 10% | 0.0001* |
| 30‐day readmission rate, % | 14.3 | 13.1 | 1.2% | 0.1902 |

*Wilcoxon rank sum test.

DISCUSSION
Throughput and discharges late in the day are challenges that face all medical centers. We demonstrate that successful and sustainable improvements in DBN are possible. We were able to increase the DBN percentage from 11% in the baseline period to an average of 38% in the 13 months after our intervention. Our success allowed us to surpass our medical center's initial goal of 30% DBN.
The intervention took place on 2 inpatient medical units at an urban, academic medical center and is likely generalizable to comparable institutions. The study is limited by the simultaneous implementation of multiple interventions as part of the DBN initiative, so we are unable to isolate the effect of individual changes. For other medical centers wishing to use a similar approach, we believe the 3 most important parts of our intervention are: (1) a kickoff event to engage all staff with a clear definition of roles; (2) daily real‐time feedback, using tools such as unit boards tracking the DBN percentage; and (3) a standardized form of communication for expected DBNs. In our experience, for a DBN to be successful, the team, patient, and family members must be alerted, and discharge plans must be initiated, at least 1 day prior to the expected discharge. Attempting to discharge a patient before noon when the patient is first identified on the day of discharge is a losing proposition, both for achieving a coordinated, safe discharge and for staff and patient satisfaction.
The O/E LOS and 30‐day readmission rate declined over the intervention period, suggesting that the initiative had no negative effect on these metrics. There was concern that staff would choose to keep patients an extra night to allow for an extra DBN the following day. This was actively discouraged during the kickoff event and throughout the intervention period at interdisciplinary rounds and through informal communications. Based upon the decline in O/E LOS, this did not occur. There was also concern that the 30‐day readmission rate might increase if patients were discharged earlier in the day than usual. We observed a numerical, but not statistically significant, decline in the 30‐day readmission rate, potentially due to improved communication between team members and earlier identification of expected discharges at the prior day's afternoon DBN rounds. It is unknown whether the declines in O/E LOS and 30‐day readmission rate were caused by the DBN initiative; many other initiatives ongoing within the medical center could have affected these variables. More research is required to better understand the true effect of DBN on LOS and the 30‐day readmission rate.
There is limited literature on discharge early in the day. One previous study showed improvement in the DBN percentage on an obstetric floor through the institution of a discharge brunch.[9] Another report showed a modest increase (from 19.6% to 26%) in DBNs with the use of scheduled discharges.[6] That study was of unclear duration and was not specific to medical units. Another study focused on the use of in‐room display boards to document the expected day and time of patient discharge.[8] That report focused on the ability to schedule and achieve the scheduled discharge date and time; the authors describe a trend toward more discharges early in the day but provide no specific data on this effect. The only study looking specifically at discharge early in the day on a medical unit showed improvement in the percentage of discharges before 1:00 pm, but it was small (81 total patients) and of short duration (1 month).[7] Our study is of larger size and longer duration, focuses on implementation on a medical service, and provides a comprehensive system that should be reproducible.
There are several next steps to our work. We will continue to monitor the DBN percentage and the ongoing sustainability of the project. We plan to investigate the effect of this rapid and notable increase in DBN percentage on a variety of patient outcomes and hospital metrics, including patient satisfaction, timeliness of ED admissions, intensive care unit transfers to the medical floor, and direct admissions.
Our study demonstrates that increased timely discharge is an achievable and sustainable goal for medical centers. Future work will allow for better understanding of the full effects of such an intervention on patient outcomes and hospital metrics.
1. Impact of admission and discharge peak times on hospital overcrowding. Stud Health Technol Inform. 2011;168:82–88.
2. Boarding inpatients in the emergency department increases discharged patient length of stay. J Emerg Med. 2013;44(1):230–235.
3. Overcrowding in the nation's emergency departments: complex causes and disturbing effects. Ann Emerg Med. 2000;35(1):63–68.
4. Caregiver perceptions of the reasons for delayed hospital discharge. Eff Clin Pract. 2001;4(6):250–255.
5. Daily multidisciplinary discharge rounds in a trauma center: a little time, well spent. J Trauma. 2009;66(3):880–887.
6. All roads lead to scheduled discharges. Nursing. 2008;38(12):61–63.
7. Discharging patients earlier in the day: a concept worth evaluating. Health Care Manag (Frederick). 2007;26(2):142–146.
8. In‐room display of day and time patient is anticipated to leave hospital: a "discharge appointment". J Hosp Med. 2007;2(1):13–16.
9. The discharge brunch: reducing chaos and increasing smiles on the OB unit. Nurs Womens Health. 2009;13(5):402–409.
© 2014 Society of Hospital Medicine
Residents' ECG Interpretation Skills
Decreased efficiency at the beginning of residency training likely results in preventable harm for patients, a phenomenon known as the July Effect.[1, 2] Postgraduate year 1 (PGY‐1) residents enter training with a variety of clinical skills and experiences, and concerns exist regarding their preparation to enter graduate medical education (GME).[3] Electrocardiogram (ECG) interpretation is a core clinical skill that residents must have on the first day of training to manage patients, recognize emergencies, and develop evidence‐based and cost‐effective treatment plans. We assessed incoming PGY‐1 residents' ability to interpret common ECG findings as part of a rigorous boot camp experience.[4]
METHODS
This was an institutional review board‐approved pre‐post study of 81 new PGY‐1 residents' ECG interpretation skills. Subjects represented all trainees from internal medicine (n=47), emergency medicine (n=13), anesthesiology (n=11), and general surgery (n=10) who entered GME at Northwestern University in June 2013. Residents completed a pretest, followed by a 60‐minute interactive small group tutorial and a post‐test. Program faculty and expert cardiologists selected 10 common ECG findings for the study, many representing medical emergencies requiring immediate treatment. The diagnoses were: normal sinus rhythm, hyperkalemia, right bundle branch block (RBBB), left bundle branch block (LBBB), complete heart block, lateral wall myocardial infarction (MI), anterior wall MI, atrial fibrillation, ventricular paced rhythm, and ventricular tachycardia (VT). ECGs were selected from an online reference set.
RESULTS
All 81 residents completed the study. The mean age was 27 years, and 56% were male. Eighty (99%) graduated from a US medical school. The mean United States Medical Licensing Examination scores were step 1, 243.8 (14.4), and step 2, 251.8 (13.6). Twenty‐six (32%) completed a cardiology rotation in medical school. Before the pretest, residents self‐assessed their ECG interpretation skills at a mean of 61.8 (standard deviation 17.2) on a scale of 0 (not confident) to 100 (very confident). Pretest results ranged from 60.5% correct (complete heart block) to 96.3% correct (normal sinus rhythm). Eighteen residents (22%) did not recognize hyperkalemia, 20 (25%) were unable to identify RBBB, and 15 (18%) could not identify LBBB. Twenty‐two (27%) could not discern a lateral wall MI, and 8 residents (10%) missed an anterior wall MI. Sixteen (20%) could not diagnose atrial fibrillation, 18 (22%) could not identify a ventricular paced rhythm, and 13 (16%) did not recognize VT. Mean post‐test scores improved significantly for 5 cases (P<0.05), but did not rise significantly for normal sinus rhythm, lateral wall MI, anterior wall MI, hyperkalemia, and ventricular paced rhythm.
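The brief report does not state which statistical test was used for the pre/post comparison. For paired correct/incorrect responses from the same 81 residents, McNemar's test is one reasonable choice; the sketch below is purely illustrative, and the 2x2 counts are invented.

```python
# Hedged sketch: McNemar's test on paired pre/post responses for one diagnosis.
# The counts below are hypothetical; the report does not name its test.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Rows: pretest correct / incorrect; columns: post-test correct / incorrect.
table = np.array([[60, 3],
                  [13, 5]])  # 81 residents in total

result = mcnemar(table, exact=True)  # exact binomial test on discordant pairs
print(f"P = {result.pvalue:.3f}")
```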

DISCUSSION
PGY-1 residents from multiple specialties were not confident regarding their ability to interpret ECGs and could not reliably identify 10 basic findings. This is despite graduating almost exclusively from US medical schools and performing at high levels on standardized tests. Although boot camp improved recognition of important ECG findings, including VT and bundle branch blocks, identification of emergent diagnoses such as lateral/anterior MI and hyperkalemia requires additional training and close supervision during patient care. This study provides further evidence that the preparation of PGY-1 residents to enter GME is lacking. Recent calls for inclusion of cost-consciousness and stewardship of resources as a seventh competency for residents[5] are challenging, because incoming trainees do not uniformly possess the basic clinical skills needed to make these judgments.[3, 4] If residents cannot reliably interpret ECGs, it is not possible to determine cost-effective testing strategies for patients with cardiac conditions. Based on the results of this study and others,[3, 4] we believe medical schools should agree upon specific graduation requirements to ensure all students have mastered core competencies and are prepared to enter GME.
Acknowledgments
Disclosure: Nothing to report.
- The July effect: fertile ground for systems improvement. Ann Intern Med. 2011;155(5):331–332.
- July effect: impact of the academic year-end changeover on patient outcomes: a systematic review. Ann Intern Med. 2011;155(5):309–315.
- Assessing residents' competencies at baseline: identifying the gaps. Acad Med. 2004;79(6):564–570.
- Making July safer: simulation-based mastery learning during intern boot camp. Acad Med. 2013;88(2):233–239.
- Providing high-value, cost-conscious care: a critical seventh general competency for physicians. Ann Intern Med. 2011;155(6):386–388.
Entrusting Residents with Tasks
Determining when residents are independently prepared to perform clinical care tasks safely is neither easy nor well understood. Educators have struggled to identify robust ways to evaluate trainees and their preparedness to treat patients without supervision. Trust allows the trainee to experience increasing levels of participation and responsibility in the workplace in a way that builds competence for future practice. The breadth of knowledge and skills required to become a competent and safe physician, coupled with a busy clinical workload, compounds this challenge. Notably, a technically proficient trainee may not have the clinical judgment to treat patients without supervision.
The Accreditation Council for Graduate Medical Education (ACGME) has previously outlined 6 core competencies for residency training: patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice.[1] A systematic literature review suggests that traditional trainee evaluation tools are difficult to use and unreliable in measuring the competencies independently of one another, and that certain competencies are consistently difficult to quantify in a reliable and valid way.[2] Despite efforts to create objective tools, the evaluation of trainees' clinical performance remains strongly influenced by subjective measures and continues to vary widely among evaluators.[3] Objectively measuring resident autonomy and readiness to supervise junior colleagues remains imprecise.[4]
The ACGME's Next Accreditation System (NAS) incorporates educational milestones as part of the reporting of resident training outcomes.[5] The milestones allow for the translation of the core competencies into integrative and observable abilities. Furthermore, the milestone categories are stratified into tiers to allow progress to be measured longitudinally and by task complexity using a novel assessment strategy.
The development of trust between supervisors and trainees is a critical step in decisions to allow increased responsibility and autonomous decision making, which is an important aspect of physician training. Identifying the factors that influence supervisors' evaluation of resident competency and capability is at the crux of trainee maturation as well as patient safety.[4] Trust, defined as the believability and discernment attendings ascribe to resident physicians, plays a large role in attending evaluations of residents during their clinical rotations.[3] Trust also shapes decisions about entrustable professional activities (EPAs), the tasks that trainees must master before completing training milestones.[6] A study of entrustment decisions made by attending anesthesiologists identified factors that contribute to the amount of autonomy given to residents, such as trainee trustworthiness, medical knowledge, and level of training.[4] Building on that work, the aim of our study was 2-fold: (1) to use deductive qualitative analysis to apply this framework to existing resident and attending data, and (2) to define the categories within this framework and describe how internal medicine attending and resident physician perceptions of trust can impact clinical decision making and patient care.
METHODS
We are reporting on a secondary data analysis of interview transcripts from a study conducted on the inpatient general medicine service at the University of Chicago, an academic tertiary care medical center. The methods for data collection and full consent have been outlined previously.[7, 8, 9] The institutional review board of the University of Chicago approved this study.
Briefly, between January 2006 and November 2006, all eligible internal medicine resident physicians, postgraduate year (PGY)‐2 or PGY‐3, and attending physicians, either generalists or hospitalists, were privately interviewed within 1 week of their final call night on the inpatient general medicine rotation to assess decision making and clinical supervision during the rotation. All interviews were conducted by 1 investigator (J.F.), and discussions were audio taped and transcribed for analysis. Interviews were conducted at the conclusion of the rotation to prevent any influence on resident and attending behavior during the rotation.
The critical incident technique, a procedure for collecting direct observations of human behavior that have critical significance for the decision-making process, was used to solicit examples of ineffective supervision, inquiring about 2 to 3 important clinical decisions made on the most recent call night, with probes to identify issues of trust, autonomy, and decision making.[10] A critical incident can be described as one that makes a significant contribution, either positive or negative, to the process.
Appreciative inquiry, a technique that aims to uncover the best things about the clinical encounter being explored, was used to solicit examples of effective supervision. Probes are used to identify factors, either personal or situational, that influenced the withholding or provision of resident autonomy during periods of clinical care delivery.[11]
All identifiable information was removed from the interview transcripts to protect participant and patient confidentiality. Deductive qualitative analysis was performed using the conceptual EPA framework, which describes several factors that influence the attending physicians' decisions to deem a resident trustworthy to independently fulfill a specific clinical task.[4] These factors include (1) the nature of the task, (2) the qualities of the supervisor, (3) the qualities of the trainee and the quality of the relationship between the supervisor and the trainee, and (4) the circumstances surrounding the clinical task.
The deidentified, anonymous transcripts were reviewed by 2 investigators (K.J.C., J.M.F.) and analyzed using the constant comparative method to deductively map the content to the existing framework and generate novel subthemes.[12, 13, 14] Novel categories within each of the domains were inductively generated. Two reviewers (K.J.C., J.M.F.) independently applied the themes to a randomly selected 10% portion of the interview transcripts to assess inter-rater reliability. Inter-rater agreement was assessed using the generalized kappa statistic. Discrepancies between reviewers regarding assignment of codes were resolved via discussion and third-party adjudication until consensus was achieved on the thematic structure. The codes were then applied to the entire dataset.
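For readers unfamiliar with agreement statistics, the sketch below illustrates the calculation using the simpler two-rater Cohen's kappa as a stand-in for the generalized kappa the authors report; the coders, theme labels, and codes are hypothetical and not drawn from the study data.

```python
# Minimal sketch of an inter-rater agreement check between two coders.
# Uses Cohen's kappa; the labels below are invented for illustration.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning one code per excerpt."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal code frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

coder1 = ["trainee", "task", "trainee", "systems", "supervisor", "task"]
coder2 = ["trainee", "task", "supervisor", "systems", "supervisor", "task"]
print(f"kappa = {cohens_kappa(coder1, coder2):.2f}")  # 0.78 for this toy sample
```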
RESULTS
Between January 2006 and November 2006, 46 of 50 (92%) attending physicians and 44 of 50 (88%) resident physicians were interviewed following the conclusion of their general medicine inpatient rotation. Of attending physicians, 55% were male, 45% were female, and 38% were academic faculty hospitalists. Of the residents who completed interviews, 47% were male, 53% were female, 52% were PGY-2, and 45% were PGY-3.
A total of 535 mentions of trust were abstracted from the transcripts. The 4 major domains that influence trust (trainee factors [Table 1], supervisor factors [Table 2], task factors [Table 3], and systems factors [Table 4]) were deductively coded, with several novel categories and subthemes emerging. The domains were consistent across the postgraduate year of trainee. No differences in themes were noted between the postgraduate years, other than those explicitly stated.
Domain (N) | Category (N) | Subtheme (N) | Definition and Representative Comment |
---|---|---|---|
| |||
Trainee factors (170); characteristics specific to the trainee that either promote or discourage trust. | Personal characteristics (78); traits that impact attendings' decision regarding trust/allowance of autonomy. | Confidence and overconfidence (29) | Displayed level of comfort when approaching specific clinical situations. I think I have a personality and presenting style [that] people think that I know what I am talking about and they just let me run with it. (R) |
Accountability (18) | Sense of responsibility, including ability to follow up on details regarding patient care. [What] bothered me the most was that kind of lack of accountability for patient care, and it makes the whole dynamic of rounds much more stressful. I ended up asking him to page me every day to run the list. (A) | ||
Familiarity/ reputation (18) | Comfort with trainee gained through prior working experience, or reputation of the trainee based on discussion with other supervisors. I do have to get to know someone a little to develop that level of trust, to know that it is okay to not check the labs every day, okay to not talk to them every afternoon. (A) | ||
Honesty (13) | Sense trainee is not withholding information in order to impact decision making toward a specific outcome. [The residents] have more information than I do and they can clearly spin that information, and it is very difficult to unravel unless you treat them like a hostile witness on the stand. (A) | ||
Clinical attributes (92); skills demonstrated in the context of patient care that promote or inhibit trust. | Leadership (19) | Ability to organize, teach, and manage coresidents, interns, and students. I want them to be in charge, deciding the plan and sitting down with the team before rounds. (A) | |
Communication (12) | Establishing and encouraging conversation with the supervisor regarding decision making. Some residents call me regularly and let me know what's going on and others don't, and those who don't I really have trouble with; if you're not calling to check in, then I don't trust your judgment. (A) | ||
Specialty (6) | Trainee future career plans. Whether it's right or wrong, nonmedicine interns may not be as attentive to smaller details, and so I had to be attentive to smaller details on [his] patients. (R2) | ||
Medical knowledge (39) | Ability to display appropriate level of clinical acumen and apply evidence-based medicine. I definitely go on my own gestalt of talking with them and deciding if what they do is reasonable. If they can't explain things to me, that's when I worry. (A) | ||
Recognition of limitations (16) | Trainee's ability to recognize his/her own weaknesses, accept criticism, and solicit help when appropriate. The first thing is that they know their limits and ask for help either in rounds or outside of rounds. That indicates to me that as they are out there on their own they are less likely to do things that they don't understand. (A) |
Domain (N) | Major Category (N) | Subtheme (N) | Definition and Representative Comment |
---|---|---|---|
| |||
Supervisor factors (120); characteristics specific to the supervisor which either promote or discourage trust. | Approachability (34); personality traits, such as approachability, which impact the trainees' perception regarding trust/allowance of autonomy. | Sense that the attending physician is available to and receptive to questions from trainees. I think [attending physicians] being approachable and available to you if you need them is really helpful. (R) | |
Clinical attributes (86); skills demonstrated in the context of patient care that promote or inhibit trust. | Institutional obligation (17) | Attending physician is the one contractually and legally responsible for the provision of high-quality and appropriate patient care. If [the residents] have a good reason I can be argued out of my position. I am ultimately responsible and have to choose if there is some serious dispute. (A) | |
Experience and expertise (29) | Clinical experience, area of specialty, and research interests of the attending physician. You have to be confident in your own clinical skills and knowledge, confident enough that you can say it's okay for me to let go a little bit. (A) | ||
Observation-based evaluation (27) | Evaluation of trainee decision-making ability during the early part of the attending/trainee relationship. It's usually the first post-call day experience, the first on-call and post-call day experience. One of the big things is [if they can] tell if a patient is sick or not sick; if they are missing at that level then I get very nervous. I really get a sense [of] how they think about patients. (A) | ||
Educational obligation (13) | Acknowledging the role of the attending as clinical teacher. My theory with the interns was that they should do it because that's how you learn. (R) |
Domain (N) | Major Category (N) | Subtheme (N) | Definition |
---|---|---|---|
| |||
Task factors (146); details or characteristics of the task that encouraged or impeded contacting the supervisor. | Clinical characteristics (103) | Case complexity (25) | Evaluation of the level of difficulty in patient management. I don't expect to be always looking over [the resident's] shoulder, I don't check labs every day, and I don't call them if I see a potassium of 3; I assume that they are going to take care of it. |
Family/ethical dilemma (10) | Uncertainty regarding respecting the wishes of patients and other ethical dilemmas. There was 1 time I called because we had a very sick patient who had a lot of family asking for more aggressive measures, and I called to be a part of the conversation. | ||
Interdepartment collaboration (18) | Difficulties when treating patients managed by multiple consult services. I have called [the attending] when I have had trouble pushing things through the system; if we had trouble getting tests or trouble with a particular consult team I would call him. | ||
Urgency/severity of illness (13) | Clinical condition of patient requires immediate or urgent intervention. If I have something that is really pressing I would probably page my attending. If it's a question [of] just something that I didn't know the answer to [or] wasn't that urgent I could turn to my fellow residents. | ||
Transitions of care (37) | Communication with supervisor because of concern/uncertainty regarding patient transition decisions. We wanted to know if it was okay to discharge somebody or if something in the plan changes. I usually text page her or call her. | ||
Situation or environment characteristics (49) | Proximity of attending physicians and support staff (10) | Availability of attending physicians and staff resources. I have been called in once or twice to help with a lumbar puncture or paracentesis, but not too often. The procedure service makes life much easier than it used to be. | |
Team culture (33) | Presence or absence of a collaborative and supportive group environment. I had a team that I did trust. I think we communicated well; we were all sort of on the same page. | ||
Time of day (6) | Time of the task. Once it's past 11 PM, I feel like I shouldn't call; the threshold is higher, the patient has to be sicker. | ||
Domain (N) | Major Categories (N) | Definition |
---|---|---|
| ||
Systems factors (99); unmodifiable factors not related to personal characteristics or knowledge of trainee or supervisor. | Workload (15) | Increasing trainee clinical workload results in a more intensive experience. They [residents] get 10 patients within a pretty concentrated time, so they really have to absorb a lot of information in a short period of time. |
Institutional culture (4) | Anticipated quality of the trainee because of the status of the institution. I assume that our residents and interns are top notch, so I go in with this real assumption that I expect the best of them because we are [the best]. | |
Clinical experience of trainee (36) | Types of clinical experience prior to supervisor/trainee interaction. The interns have done as many [general inpatient medicine] months as I have; they had both done like 2 or 3 months really close together, so they were sort of at their peak knowledge. | |
Level of training (25) | Postgraduate year of trainee. It depends on the experience level of the resident. A second year who just finished internship, I am going to supervise more closely and be more detail oriented; a fourth year medicine‐pediatrics resident who is almost done, I will supervise a lot less. | |
Duty hours/efficiency pressures (5) | Absence of residents due to other competing factors, including compliance with work-hour restrictions. Before the work-hour [restrictions], when [residents] were here all the time and knew everything about the patients, I found them to be a lot more reliable, and now they are still supposed to be in charge, but hell I am here more often than they are. I am here every day, I have more information than they do. How can you run the show if you are not here every day? | |
Philosophy of medical education (14) | Belief that trainees learn by the provision of completely autonomous decision making. When you are not around, [the residents] have autonomy, they are the people making the initial decisions and making the initial assessments. They are the ones who are there in the middle of the night, the ones who are there at 3 o'clock in the afternoon. The resident is supposed to have room to make decisions. When I am not there, it's not my show. |
Trainee Factors
Attending and resident physicians both cited trainee factors as major determinants of granting entrustment (Table 1). Within this domain, the categories described included trainee personal characteristics and clinical characteristics. Of the subthemes noted within the major category of personal characteristics, the perceived confidence or overconfidence of the trainee was most often mentioned. Other subthemes included accountability, familiarity, and honesty. Attending physicians reported using perceived resident confidence as a gauge of the trainee's true ability and comfort. Conversely, some attending physicians reported that perceived overconfidence was a red flag that warranted increased scrutiny. Faculty identified overconfidence in trainees who were unable to recognize their limitations in either technical skill or knowledge. Confidence was noted in trainees who recognized their own limitations while also enacting effective management plans, and in those who prioritized patient needs over their personal needs.
The clinical attributes of trainees described by attendings included leadership skills, communication skills, anticipated specialty, medical knowledge, and perceived recognition of limitations. All participants expressed that possession of adequate medical knowledge was the most important clinical skills-related factor in the development of trust. Trainee demonstration of judgment, including applying evidence-based practice, was used to support attending physicians' decisions to give residents more autonomy in managing patients. Many attending physicians described a specific pattern of observation and evaluation, in which they would rely on impressions shaped early in the rotation to inform their decisions of entrustment throughout the rotation. Several attending physicians highlighted this early litmus test, describing behavior on the first day/call night and postcall interactions as particularly important opportunities to gauge the ability of a resident to triage new patient admissions, manage their anxiety and uncertainty, and demonstrate maturity and professionalism. Several faculty members discussed examples of their litmus test, including checking and knowing laboratory data prior to rounds but not mentioning their findings until they had noted the resident was unaware ("[I]f I see a 2 g hemoglobin drop when I check the [electronic medical record {EMR}] and they don't bring it up, I will bring it to their attention, and then I'll get more involved.") or assessing the management of both straightforward and complex patients. They would then use this initial impression to determine their degree of involvement in the care of the patient.
The quality and nature of communication, particularly the frequency of contact between resident and attending, were used as a barometer of trainee judgment. Furthermore, attending physicians expressed that they would often micromanage patient care if they did not trust a trainee's ability to reliably and frequently communicate patient status, as well as the trainee's own concerns and uncertainty about future decisions. Some level of uncertainty was generally seen in a positive light by attending physicians, because it signaled that trainees had a mature understanding of their limitations. Finally, the trainee's expressed future specialty, especially if the trainee was a preliminary PGY-1 resident or a more senior resident anticipating subspecialty training in a procedural specialty, impacted the degree of autonomy provided.
Supervisor Factors
Supervisor characteristics were further categorized into approachability and clinical attributes (Table 2). Approachability, as a proxy for the quality of the relationship, was cited by residents as the personality characteristic that most influenced trust. This was often described by both attending and resident physicians as the presence of a supportive team atmosphere created through explicit declaration of availability to help with patient care tasks. Some attending physicians described the importance of expressing enthusiasm when receiving queries from their team to foster an atmosphere of nonjudgmental collaboration.
The clinical experience and knowledge base of the attending physician played a role in the provision of autonomy, particularly in times of disagreement about particular clinical decisions. Conversely, attending physicians who had spent less time on inpatient general medicine were more willing to yield to resident suggestions.
Task Factors
The domain of task factors was further divided into the categories that pertained to the clinical aspects of the task and those that pertained to the context, that is the environment in which the entrustment decisions were made (Table 3). Clinical characteristics included case complexity, presence of an ethical dilemma, interdepartmental collaboration, urgency/severity of situation, and transitions of care. The environmental characteristics included physical proximity of supervisors/support, team culture, and time of day. Increasing case complexity, especially the coexistence of legal and/or ethical dilemmas, was often mentioned as a factor driving greater attending involvement. Conversely, straightforward clinical decisions, such as electrolyte repletion, were described as sufficiently easy to allow limited attending involvement. Transitions of care, such as patient discharge or transfer, required greater communication and attending involvement or guidance, regardless of case complexity.
Attending and resident physicians reported that team dynamics played a large role in the development, granting, or discouragement of trust. Teams with a positive rapport reported a collaborative environment that fostered increased trust by the attending and led to greater resident autonomy. Conversely, team discord that influenced the supervisor-trainee relationship, often described as toxic attitudes within the team, was singled out as the reason attending physicians would feel the need to engage more directly in patient care and, by extension, have less trust in residents to manage their patients.
Systems Factors
Systems factors were described as the nonmodifiable factors unrelated to the characteristics of the supervisor, the trainee, or the clinical task (Table 4). The subthemes that emerged included workload, institutional culture, trainee experience, level of training, and duty hours/efficiency pressures. Residents and attending physicians noted that trainee PGY and clinical experience commonly influenced the provision of autonomy and supervision by attendings. Participants reported that adequate clinical experience was of greater concern given the new duty-hour restrictions, increased workload, and efficiency pressures. Attending physicians noted that trainee absences, even when required to comply with duty-hour restrictions, had a negative effect on entrustment-granting decisions. Many attendings felt that a trainee had to be physically present to make informed decisions on the inpatient medicine service.
DISCUSSION
Clinical supervisors must hold the quality of care constant while balancing the amount of supervision and autonomy provided to learners in procedural tasks and clinical decision making. We found that the development of trust is multifactorial and highly contextual. It occurs under the broad constructs of task, supervisor, trainee, and environmental factors, and is well described in prior work. We also demonstrate that often what determines these broader factors is highly subjective, frequently independent of objective measures of trainee performance. Many decisions are based on personal characteristics, such as the perception of honesty, disposition, perceived confidence or perceived overconfidence of the trainee, prior experience, and expressed future field of specialty.
Our findings are consistent with prior research, but go further in describing the use of factors other than clinical knowledge and skill in the formation of a multidimensional construct of trust. Kennedy et al. identified 4 dimensions of trust (knowledge and skill, discernment, conscientiousness, and truthfulness)[15] and demonstrated that supervising physicians rely on specific processes to assess trainee trustworthiness, specifically the use of double checks and language cues. This is consistent with our results, which demonstrate that many attending physicians independently verify information, such as laboratory findings, to inform their perceptions of trainee honesty, attention to detail, and ability to follow orders reliably. Furthermore, our subthemes of communication and the demonstration of logical clinical reasoning correspond to Kennedy's use of language cues.[15] We found that language cues are used as markers of trustworthiness, particularly early in the rotation, as a litmus test to gauge the trainee's integrity and ability to assess and treat patients unsupervised.
To date, much has been written about the importance of direct observation in the evaluation of trainees.[16, 17, 18, 19] Our results demonstrate that, despite the availability of validated performance-based assessment methods such as the objective structured clinical examination and the mini-clinical evaluation exercise, supervising clinicians use a multifactorial, highly nuanced, and subjective process to assess competence and grant entrustment.[3] Several factors used to determine trustworthiness in addition to direct observation are subjective in nature, specifically the trainee's prior experience and expressed career choice.
It is encouraging that attending physicians make use of direct observations to inform decisions of entrustment, albeit in an informal and unstructured way. They also seem to take into account the context and setting in which the observation occurs, and consider both the environmental factors and factors that relate to the task itself.[20] For example, attendings and residents reported that team dynamics played a large role in influencing trust decisions. We also found that attending physicians rely on indirect observation and will inquire among their colleagues and other senior residents to gain information about their trainees' abilities and integrity. Evaluation tools that facilitate sharing of trainees' level of preparedness, prior feedback, and experience could support the determination of readiness to complete EPAs as well as the reporting of achieved milestones in accordance with the ACGME NAS.
Sharing knowledge about trainees among attendings is common and of increasing importance given attending physicians' shortened exposure to trainees due to residency work-hour restrictions and growing productivity pressures. In our study, attending physicians described work-hour restrictions as detrimental to trainee trustworthiness, either in the context of decreased accountability for patient care or as intrinsic to the nature of forced absences that kept trainees from fully participating in daily ward activities and knowing their patients. Attending physicians felt that trainees did not know their patients well enough to be able to make independent decisions about care. The increased transition to a shift-based structure of inpatient medicine may leave progressively less time for direct observation and make it more difficult for attendings to justify their decisions about engendering trust. In addition, the increased fragmentation noted in training secondary to the work-hour regulations may in fact have consequences for the development of clinical skill and decision making, such that increased supervision and a longer lead time to entrustment may be needed in certain circumstances. Attendings need guidance on how to improve their ability to observe trainees in the context of the new work environment, and how to role model decision making more effectively in their compressed time with housestaff.
Our study has several limitations. The organizational structure and culture of our institution are unique to 1 academic setting. This may undermine our ability to generalize these research findings and analysis to the population at large.[21] In addition, recall bias may have influenced the interview content, given that interviews were performed after the conclusion of the rotation. The study interviews took place in 2006, and it is reasonable to believe that some perceptions concerning duty-hour restrictions and competency-based graduate medical education have changed. However, from our ongoing research over the past 5 years[4] and our personal experience with entrustment factors, we believe that the participants' perceptions of trust and competency are valid and have largely remained unchanged, given the similarity of our findings to the accepted ten Cate framework. In addition, this work was done following the first iteration of the work-hour regulations but prior to the implementation of explicit supervisory levels, so it may indeed represent a truer state of the supervisory relationship before external regulations were applied. Finally, this work represents an internal medicine residency training program and may not be generalizable to other specialties that possess different cultural factors that impact the decision for entrustment. However, the congruence of our data with that of the original work of ten Cate, which was done in gynecology,[6] and that of Sterkenberg et al. in anesthesiology,[4] supports our key factors being applicable across training programs.
In conclusion, we provide new insights into subjective factors that inform the perceptions of trust and entrustment decisions by supervising physicians, specifically subjective trainee characteristics, team dynamics, and informal observation. There was agreement among attendings about which elements of competence are considered most important in their entrustment decisions related to trainee, supervisor, task, and environmental factors. Rather than undervaluing the use of personal factors in the determination of trust, we believe that acknowledgement and appreciation of these factors may be important to give supervisors more confidence and better tools to assess resident physicians, and to understand how their personality traits relate to and impact their professional competence. Our findings are relevant for the development of assessment instruments to evaluate whether medical graduates are ready for safe practice without supervision.
ACKNOWLEDGEMENTS
Disclosures: Dr. Kevin Choo was supported by Scholarship and Discovery, University of Chicago, while in his role as a fourth-year medical student. This study received institutional review board approval prior to evaluation of our human participants. Portions of this study were presented as an oral abstract at the 35th Annual Meeting of the Society of General Internal Medicine, Orlando, Florida, May 9–12, 2012.
- Accreditation Council for Graduate Medical Education Common Program Requirements. Available at: http://www.acgme.org/acgmeweb/tabid/429/ProgramandInstitutionalAccreditation/CommonProgramRequirements.aspx. Accessed November 30, 2013.
- Measurement of the general competencies of the Accreditation Council for Graduate Medical Education: a systematic review. Acad Med. 2009;84:301–309.
- Toward authentic clinical evaluation: pitfalls in the pursuit of competency. Acad Med. 2010;85(5):780–786.
- When do supervising physicians decide to entrust residents with unsupervised tasks? Acad Med. 2010;85(9):1408–1417.
- The next GME accreditation system—rationale and benefits. N Engl J Med. 2012;366(11):1051–1056.
- Trust, competence and the supervisor's role in postgraduate training. BMJ. 2006;333:748–751.
- Clinical decision making and impact on patient care: a qualitative study. Qual Saf Health Care. 2008;17(2):122–126.
- Strategies for effective on call supervision for internal medicine residents: the SUPERB/SAFETY model. J Grad Med Educ. 2010;2(1):46–52.
- On-call supervision and resident autonomy: from micromanager to absentee attending. Am J Med. 2009;122(8):784–788.
- The critical incident technique. Psychol Bull. 1954;51(4):327–359.
- Critical evaluation of appreciative inquiry: bridging an apparent paradox. Action Res. 2006;4(4):401–418.
- Basics of Qualitative Research. 2nd ed. Thousand Oaks, CA: Sage Publications; 1998.
- How to Design and Evaluate Research in Education. New York, NY: McGraw Hill; 2003.
- Qualitative Data Analysis. Thousand Oaks, CA: Sage; 1994.
- Point-of-care assessment of medical trainee competence for independent clinical work. Acad Med. 2008;84:S89–S92.
- Viewpoint: competency-based postgraduate training: can we bridge the gap between theory and clinical practice? Acad Med. 2007;82(6):542–547.
- Assessment of competence and progressive independence in postgraduate clinical training. Med Educ. 2009;43:1156–1165.
- Tools for direct observation and assessment of clinical skills of medical trainees: a systematic review. JAMA. 2009;302(12):1316–1326.
- Assessment in medical education. N Engl J Med. 2007;356:387–396.
- A prospective study of paediatric cardiac surgical microsystems: assessing the relationships between non-routine events, teamwork and patient outcomes. BMJ Qual Saf. 2011;20(7):599–603.
- Generalizability and transferability of meta-synthesis research findings. J Adv Nurs. 2010;66(2):246–254.
Situation or environment characteristics (49) | Proximity of attending physicians and support staff (10) | Availability of attending physicians and staff resources . I have been called in once or twice to help with a lumbar puncture or paracentesis, but not too often. The procedure service makes life much easier than it used to be. | |
Team culture (33) | Presence or absence of a collaborative and supportive group environment. I had a team that I did trust. I think we communicated well; we were all sort of on the same page. | ||
Time of day (6) | Time of the task. Once its past 11 pm, I feel like I shouldn't call, the threshold is higherthe patient has to be sicker. |
Domain (N) | Major Categories (N) | Definition |
---|---|---|
| ||
Systems factors (99); unmodifiable factors not related to personal characteristics or knowledge of trainee or supervisor. | Workload (15) | Increasing trainee clinical workload results in a more intensive experience. They [residents] get 10 patients within a pretty concentrated timeso they really have to absorb a lot of information in a short period of time. |
Institutional culture (4) | Anticipated quality of the trainee because of the status of the institution. I assume that our residents and interns are top notch, so I go in with this real assumption that I expect the best of them because we are [the best]. | |
Clinical experience of trainee (36) | Types of clinical experience prior to supervisor/trainee interaction. The interns have done as much [general inpatient medicine] months as I havethey had both done like 2 or 3 months really close together, so they were sort of at their peak knowledge. | |
Level of training (25) | Postgraduate year of trainee. It depends on the experience level of the resident. A second year who just finished internship, I am going to supervise more closely and be more detail oriented; a fourth year medicine‐pediatrics resident who is almost done, I will supervise a lot less. | |
Duty hours/efficiency pressures (5) | Absence of residents due to other competing factors, including compliance with work‐hour restrictions. Before the work‐hour [restrictions], when [residents] were here all the time and knew everything about the patients, I found them to be a lot more reliableand now they are still supposed to be in charge, but hell I am here more often than they are. I am here every day, I have more information than they do. How can you run the show if you are not here every day? | |
Philosophy of medical education (14) | Belief that trainees learn by the provision of completely autonomous decision making. When you are not around, [the residents] have autonomy, they are the people making the initial decisions and making the initial assessments. They are the ones who are there in the middle of the night, the ones who are there at 3 o'clock in the afternoon. The resident is supposed to have room to make decisions. When I am not there, it's not my show. |
Trainee Factors
Attending and resident physicians both cited trainee factors as major determinants of granting entrustment (Table 1). Within the domain, the categories described included trainee personal characteristics and clinical characteristics. Of the subthemes noted within the major category of personal characteristics, the perceived confidence or overconfidence of the trainee was most often mentioned. Other subthemes included accountability, familiarity, and honesty. Attending physicians reported using perceived resident confidence as a gauge of the trainee's true ability and comfort. Conversely, some attending physicians reported that perceived overconfidence was a red flag that warranted increased scrutiny. Overconfidence was identified by faculty as trainees with an inability to recognize their limitations in either technical skill or knowledge. Confidence was noted in trainees that recognized their own limitations while also enacting effective management plans, and those physicians that prioritized the patient needs over their personal needs.
The clinical attributes of trainees described by attendings included: leadership skills, communication skills, anticipated specialty, medical knowledge, and perceived recognition of limitations. All participants expressed that the possession of adequate medical knowledge was the most important clinical skills‐related factor in the development of trust. Trainee demonstration of judgment, including applying evidence‐based practice, was used to support attending physician's decision to give residents more autonomy in managing patients. Many attending physicians described a specific pattern of observation and evaluation, in which they would rely on impressions shaped early in the rotation to inform their decisions of entrustment throughout the rotation. The use of this early litmus test was highlighted by several attending physicians. This litmus test described the importance of behavior on the first day/call night and postcall interactions as particularly important opportunities to gauge the ability of a resident to triage new patient admissions, manage their anxiety and uncertainty, and demonstrate maturity and professionalism. Several faculty members discussed examples of their litmus test including checking and knowing laboratory data prior to rounds but not mentioning their findings until they had noted the resident was unaware ([I]f I see a 2 g hemoglobin drop when I check the [electronic medical record {EMR}] and they don't bring it up, I will bring it to their attention, and then I'll get more involved.) or assessing the management of both straightforward and complex patients. They would then use this initial impression to determine their degree of involvement in the care of the patient.
The quality and nature of the communication skills, particularly the increased frequency of contact between resident and attending, was used as a barometer of trainee judgment. Furthermore, attending physicians expressed that they would often micromanage patient care if they did not trust a trainee's ability to reliably and frequently communicate patient status as well as the attendings concerns and uncertainty about future decisions. Some level of uncertainty was generally seen in a positive light by attending physicians, because it signaled that trainees had a mature understanding of their limitations. Finally, the trainee's expressed future specialty, especially if the trainee was a preliminary PGY‐1 resident, or a more senior resident anticipating subspecialty training in a procedural specialty, impacted the degree of autonomy provided.
Supervisor Factors
Supervisor characteristics were further categorized into their approachability and clinical attributes (Table 2). Approachability as a proxy for quality of the relationship, was cited as the personality characteristic that most influenced trust by the residents. This was often described by both attending and resident physicians as the presence of a supportive team atmosphere created through explicit declaration of availability to help with patient care tasks. Some attending physicians described the importance of expressing enthusiasm when receiving queries from their team to foster an atmosphere of nonjudgmental collaboration.
The clinical experience and knowledge base of the attending physician played a role in the provision of autonomy, particularly in times of disagreement about particular clinical decisions. Conversely, attending physicians who had spent less time on inpatient general medicine were more willing to yield to resident suggestions.
Task Factors
The domain of task factors was further divided into the categories that pertained to the clinical aspects of the task and those that pertained to the context, that is the environment in which the entrustment decisions were made (Table 3). Clinical characteristics included case complexity, presence of an ethical dilemma, interdepartmental collaboration, urgency/severity of situation, and transitions of care. The environmental characteristics included physical proximity of supervisors/support, team culture, and time of day. Increasing case complexity, especially the coexistence of legal and/or ethical dilemmas, was often mentioned as a factor driving greater attending involvement. Conversely, straightforward clinical decisions, such as electrolyte repletion, were described as sufficiently easy to allow limited attending involvement. Transitions of care, such as patient discharge or transfer, required greater communication and attending involvement or guidance, regardless of case complexity.
Attending and resident physicians reported that the team dynamics played a large role in the development, granting, or discouragement of trust. Teams with a positive rapport reported a collaborative environment that fostered increased trust by the attending and led to greater resident autonomy. Conversely, team discord that influenced the supervisor‐trainee relationship, often defined as toxic attitudes within the team, was often singled out as the reason attending physicians would feel the need to engage more directly in patient care and by extension have less trust in residents to manage their patients.
Systems Factors
Systems factors were described as the nonmodifiable factors, unrelated to either the characteristics of the supervisor, trainee, or the clinical task (Table 4). The subthemes that emerged included workload, institutional culture, trainee experience, level of training, and duty hours/efficiency pressures. Residents and attending physicians noted that trainee PGY and clinical experience commonly influenced the provision of autonomy and supervision by attendings. Participants reported that the importance of adequate clinical experience was of greater concern given the new duty‐hour restrictions, increased workload, as well as efficiency pressures. Attending physicians noted that trainee absences, even when required to comply with duty‐hour restrictions, had a negative effect on entrustment‐granting decisions. Many attendings felt that a trainee had to be physically present to make informed decisions on the inpatient medicine service.
DISCUSSION
Clinical supervisors must hold the quality of care constant while balancing the amount of supervision and autonomy provided to learners in procedural tasks and clinical decision making. We found that the development of trust is multifactorial and highly contextual. It occurs under the broad constructs of task, supervisor, trainee, and environmental factors, and is well described in prior work. We also demonstrate that often what determines these broader factors is highly subjective, frequently independent of objective measures of trainee performance. Many decisions are based on personal characteristics, such as the perception of honesty, disposition, perceived confidence or perceived overconfidence of the trainee, prior experience, and expressed future field of specialty.
Our findings are consistent with prior research, but go further in describing and demonstrating the existence and innovative use of factors, other than clinical knowledge and skill, in the formation of a multidimensional construct of trust. Kennedy et al. identified 4 dimensions of trust knowledge and skill, discernment, conscientiousness, and truthfulness[15]and demonstrated that supervising physicians rely on specific processes to assess trainee trustworthiness, specifically the use of double checks and language cues. This is consistent with our results, which demonstrate that many attending physicians independently verify information, such as laboratory findings, to inform their perceptions of trainee honesty, attention to detail, and ability to follow orders reliably. Furthermore, our subthemes of communication and the demonstration of logical clinical reasoning correspond to Kennedy's use of language cues.[15] We found that language cues are used as markers of trustworthiness, particularly early on in the rotation, as a litmus test to gauge the trainee's integrity and ability to assess and treat patients unsupervised.
To date, much has been written about the importance of direct observation in the evaluation of trainees.[16, 17, 18, 19] Our results demonstrate that supervising clinicians use a multifactorial, highly nuanced, and subjective process despite validated performance‐based assessment methods, such as the objective structured clinical exam or mini‐clinical evaluation exercise, to assess competence and grant entrustement.[3] Several factors utilized to determine trustworthiness in addition to direct observation are subjective in nature, specifically the trainee's prior experience and expressed career choice.
It is encouraging that attending physicians make use of direct observations to inform decisions of entrustment, albeit in an informal and unstructured way. They also seem to take into account the context and setting in which the observation occurs, and consider both the environmental factors as well as factors that relate to the task itself.[20] For example, attendings and residents reported that team dynamics played a large role in influencing trust decisions. We also found that attending physicians rely on indirect observation and will inquire among their colleagues and other senior residents to gain information about their trainees abilities and integrity. Evaluation tools that facilitate sharing of trainees' level of preparedness, prior feedback, and experience could facilitate the determination of readiness to complete EPAs as well as the reporting of achieved milestones in accordance with the ACGME NAS.
Sharing knowledge about trainees among attendings is common and of increasing importance in the context of attending physicians' shortened exposure to trainees due to the residency work‐hour restrictions and growing productivity pressures. In our study, attending physicians described work‐hour restrictions as detrimental to trainee trustworthiness, either in the context of decreased accountability for patient care or as intrinsic to the nature of forced absences that kept trainees from fully participating in daily ward activities and knowing their patients. Attending physicians felt that trainees did not know their patients well enough to be able to make independent decisions about care. The increased transition to a shift‐based structure of inpatient medicine may result in increasingly less time for direct observation and make it more difficult for attendings to justify their decisions about engendering trust. In addition, the increased fragmentation that is noted in training secondary to the work‐hour regulations may in fact have consequences on the development of clinical skill and decision making, such that increased attention to the need for supervision and longer lead to entrustment may be needed in certain circumstances. Attendings need guidance on how to improve their ability to observe trainees in the context of the new work environment, and how to role model decision making more effectively in the compressed time exposure to housestaff.
Our study has several limitations. The organizational structure and culture of our institution are unique to 1 academic setting. This may undermine our ability to generalize these research findings and analysis to the population at large.[21] In addition, recall bias may have played into the interpretation of the interview content given the timing with which they were performed after the conclusion of the rotation. The study interviews took place in 2006, and it is reasonable to believe that some perceptions concerning duty‐hour restrictions and competency‐based graduate medical education have changed. However, from our ongoing research over the past 5 years[4] and our personal experience with entrustment factors, we believe that the participants' perceptions of trust and competency are valid and have largely remained unchanged, given the similarity in findings to the accepted ten Cate framework. In addition, this work was done following the first iteration of the work‐hour regulations but prior to the implementation of explicit supervisory levels, so it may indeed represent a truer state of the supervisory relationship before external regulations were applied. Finally, this work represents an internal medicine residency training program and may not be generalizable to other specialties that posses different cultural factors that impact the decision for entrustment. However, the congruence of our data with that of the original work of ten Cate, which was done in gynecology,[6] and that of Sterkenberg et al. in anesthesiology,[4] supports our key factors being ubiquitous to all training programs.
In conclusion, we provide new insights into subjective factors that inform the perceptions of trust and entrustment decisions by supervising physicians, specifically subjective trainee characteristics, team dynamics, and informal observation. There was agreement among attendings about which elements of competence are considered most important in their entrustment decisions related to trainee, supervisor, task, and environmental factors. Rather than undervaluing the use of personal factors in the determination of trust, we believe that acknowledgement and appreciation of these factors may be important to give supervisors more confidence and better tools to assess resident physicians, and to understand how their personality traits relate to and impact their professional competence. Our findings are relevant for the development of assessment instruments to evaluate whether medical graduates are ready for safe practice without supervision.
ACKNOWLEDGEMENTS
Disclosures: Dr. Kevin Choo was supported by Scholarship and Discovery, University of Chicago, while in his role as a fourth‐year medical student. This study received institutional review board approval prior to evaluation of our human participants. Portions of this study were presented as an oral abstract at the 35th Annual Meeting of the Society of General Internal Medicine, Orlando, Florida, May 912, 2012.
Determining when residents are prepared to perform clinical care tasks safely and independently is neither easy nor well understood. Educators have struggled to identify robust ways to evaluate trainees and their preparedness to treat patients while unsupervised. Trust allows the trainee to experience increasing levels of participation and responsibility in the workplace in a way that builds competence for future practice. The breadth of knowledge and skills required to become a competent and safe physician, coupled with a busy clinical workload, compounds this challenge. Notably, a technically proficient trainee may not have the clinical judgment to treat patients without supervision.
The Accreditation Council for Graduate Medical Education (ACGME) has previously outlined 6 core competencies for residency training: patient care, medical knowledge, practice‐based learning and improvement, interpersonal and communication skills, professionalism, and systems‐based practice.[1] A systematic literature review suggests that traditional trainee evaluation tools are difficult to use and unreliable in measuring the competencies independently of one another, and that certain competencies are consistently difficult to quantify in a reliable and valid way.[2] Despite efforts to create objective tools, the evaluation of trainees' clinical performance remains strongly influenced by subjective measures and continues to vary widely among evaluators.[3] Objectively measuring resident autonomy and readiness to supervise junior colleagues remains imprecise.[4]
The ACGME's Next Accreditation System (NAS) incorporates educational milestones as part of the reporting of resident training outcomes.[5] The milestones allow for the translation of the core competencies into integrative and observable abilities. Furthermore, the milestone categories are stratified into tiers to allow progress to be measured longitudinally and by task complexity using a novel assessment strategy.
The development of trust between supervisors and trainees is a critical step in decisions to allow increased responsibility and autonomous decision making, which is an important aspect of physician training. Identifying the factors that influence supervisors' evaluation of resident competency and capability is at the crux of trainee maturation as well as patient safety.[4] Trust, defined here as attendings' assessment of the believability and discernment of resident physicians, plays a large role in attending evaluations of residents during their clinical rotations.[3] Trust also informs decisions about entrustable professional activities (EPAs), the tasks that must be mastered before training milestones are considered complete.[6] A study of entrustment decisions made by attending anesthesiologists identified factors that contribute to the amount of autonomy given to residents, such as trainee trustworthiness, medical knowledge, and level of training.[4] Building on that study, the aim of our work was 2‐fold: (1) to use deductive qualitative analysis to apply this framework to existing resident and attending interview data, and (2) to define the categories within this framework and describe how internal medicine attending and resident physician perceptions of trust can impact clinical decision making and patient care.
METHODS
We are reporting on a secondary data analysis of interview transcripts from a study conducted on the inpatient general medicine service at the University of Chicago, an academic tertiary care medical center. The methods for data collection and full consent have been outlined previously.[7, 8, 9] The institutional review board of the University of Chicago approved this study.
Briefly, between January 2006 and November 2006, all eligible internal medicine resident physicians, postgraduate year (PGY)‐2 or PGY‐3, and attending physicians, either generalists or hospitalists, were privately interviewed within 1 week of their final call night on the inpatient general medicine rotation to assess decision making and clinical supervision during the rotation. All interviews were conducted by 1 investigator (J.F.), and discussions were audio taped and transcribed for analysis. Interviews were conducted at the conclusion of the rotation to prevent any influence on resident and attending behavior during the rotation.
The critical incident technique, a procedure for collecting direct observations of human behavior that have critical significance for the decision-making process, was used to solicit examples of ineffective supervision, inquiring about 2 to 3 important clinical decisions made on the most recent call night, with probes to identify issues of trust, autonomy, and decision making.[10] A critical incident is one that contributes significantly, either positively or negatively, to the process.
Appreciative inquiry, a technique that aims to uncover the best aspects of the clinical encounter being explored, was used to solicit examples of effective supervision. Probes were used to identify factors, either personal or situational, that influenced the withholding or provision of resident autonomy during periods of clinical care delivery.[11]
All identifiable information was removed from the interview transcripts to protect participant and patient confidentiality. Deductive qualitative analysis was performed using the conceptual EPA framework, which describes several factors that influence the attending physicians' decisions to deem a resident trustworthy to independently fulfill a specific clinical task.[4] These factors include (1) the nature of the task, (2) the qualities of the supervisor, (3) the qualities of the trainee and the quality of the relationship between the supervisor and the trainee, and (4) the circumstances surrounding the clinical task.
The deidentified, anonymous transcripts were reviewed by 2 investigators (K.J.C., J.M.F.) and analyzed using the constant comparative method to deductively map the content to the existing framework and generate novel subthemes.[12, 13, 14] Novel categories within each of the domains were inductively generated. Two reviewers (K.J.C., J.M.F.) independently applied the themes to a randomly selected 10% portion of the interview transcripts to assess inter‐rater reliability, which was quantified using the generalized kappa statistic. Discrepancies between reviewers regarding the assignment of codes were resolved via discussion and third‐party adjudication until consensus was achieved on the thematic structure. The codes were then applied to the entire dataset.
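The inter-rater reliability step above relies on a kappa statistic. As a rough illustration only (this is not the authors' analysis code, and the domain labels, toy data, and cohen_kappa helper below are hypothetical), a simple two-rater kappa compares observed agreement on the double-coded excerpts with the agreement expected by chance from each rater's label frequencies:

```python
# Minimal sketch, assuming two coders' domain assignments for the same
# randomly selected excerpts are available as parallel lists.
from collections import Counter

rater_1 = ["trainee", "supervisor", "task", "task", "systems", "trainee", "task", "systems"]
rater_2 = ["trainee", "supervisor", "task", "trainee", "systems", "trainee", "task", "systems"]

def cohen_kappa(a, b):
    """Chance-corrected agreement between two raters (a simple two-rater kappa)."""
    assert len(a) == len(b) and a, "both raters must code the same, nonempty set of excerpts"
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n  # proportion of exact matches
    freq_a, freq_b = Counter(a), Counter(b)
    labels = set(a) | set(b)
    # chance agreement implied by each rater's marginal label frequencies
    expected = sum((freq_a[label] / n) * (freq_b[label] / n) for label in labels)
    return (observed - expected) / (1 - expected)

print(f"kappa = {cohen_kappa(rater_1, rater_2):.2f}")  # about 0.83 on this toy sample
```

Values near 1 indicate agreement well beyond chance, which is what double-coding a 10% sample is meant to establish before the codes are applied to the full dataset.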
RESULTS
Between January 2006 and November 2006, 46 of 50 (92%) attending physicians and 44 of 50 (88%) resident physicians were interviewed following the conclusion of their general medicine inpatient rotation. Of the attending physicians, 55% were male, 45% were female, and 38% were academic faculty hospitalists. Of the residents who completed interviews, 47% were male, 53% were female, 52% were PGY‐2, and 45% were PGY‐3.
A total of 535 mentions of trust were abstracted from the transcripts. The 4 major domains that influence trust (trainee factors [Table 1], supervisor factors [Table 2], task factors [Table 3], and systems factors [Table 4]) were deductively coded, with several novel categories and subthemes emerging. The domains were consistent across the postgraduate years of the trainees; no differences in themes were noted between postgraduate years other than those explicitly stated.
| Domain (N) | Category (N) | Subtheme (N) | Definition and Representative Comment |
|---|---|---|---|
| Trainee factors (170): characteristics specific to the trainee that either promote or discourage trust | Personal characteristics (78): traits that impact attendings' decisions regarding trust/allowance of autonomy | Confidence and overconfidence (29) | Displayed level of comfort when approaching specific clinical situations. "I think I have a personality and presenting style [that] people think that I know what I am talking about and they just let me run with it." (R) |
| | | Accountability (18) | Sense of responsibility, including ability to follow up on details regarding patient care. "[What] bothered me the most was that kind of lack of accountability for patient care, and it makes the whole dynamic of rounds much more stressful. I ended up asking him to page me every day to run the list." (A) |
| | | Familiarity/reputation (18) | Comfort with the trainee gained through prior working experience, or reputation of the trainee based on discussion with other supervisors. "I do have to get to know someone a little to develop that level of trust, to know that it is okay to not check the labs every day, okay to not talk to them every afternoon." (A) |
| | | Honesty (13) | Sense that the trainee is not withholding information in order to steer decision making toward a specific outcome. "[The residents] have more information than I do and they can clearly spin that information, and it is very difficult to unravel unless you treat them like a hostile witness on the stand." (A) |
| | Clinical attributes (92): skills demonstrated in the context of patient care that promote or inhibit trust | Leadership (19) | Ability to organize, teach, and manage coresidents, interns, and students. "I want them to be in charge, deciding the plan and sitting down with the team before rounds." (A) |
| | | Communication (12) | Establishing and encouraging conversation with the supervisor regarding decision making. "Some residents call me regularly and let me know what's going on and others don't, and those who don't I really have trouble with; if you're not calling to check in, then I don't trust your judgment." (A) |
| | | Specialty (6) | Trainee's future career plans. "Whether it's right or wrong, nonmedicine interns may not be as attentive to smaller details, and so I had to be attentive to smaller details on [his] patients." (R2) |
| | | Medical knowledge (39) | Ability to display an appropriate level of clinical acumen and apply evidence‐based medicine. "I definitely go on my own gestalt of talking with them and deciding if what they do is reasonable. If they can't explain things to me, that's when I worry." (A) |
| | | Recognition of limitations (16) | Trainee's ability to recognize his/her own weaknesses, accept criticism, and solicit help when appropriate. "The first thing is that they know their limits and ask for help either in rounds or outside of rounds. That indicates to me that as they are out there on their own they are less likely to do things that they don't understand." (A) |
| Domain (N) | Major Category (N) | Subtheme (N) | Definition and Representative Comment |
|---|---|---|---|
| Supervisor factors (120): characteristics specific to the supervisor that either promote or discourage trust | Approachability (34): personality traits, such as approachability, that impact the trainees' perception regarding trust/allowance of autonomy | | Sense that the attending physician is available and receptive to questions from trainees. "I think [attending physicians] being approachable and available to you if you need them is really helpful." (R) |
| | Clinical attributes (86): skills demonstrated in the context of patient care that promote or inhibit trust | Institutional obligation (17) | The attending physician is the one contractually and legally responsible for the provision of high‐quality and appropriate patient care. "If [the residents] have a good reason I can be argued out of my position. I am ultimately responsible and have to choose if there is some serious dispute." (A) |
| | | Experience and expertise (29) | Clinical experience, area of specialty, and research interests of the attending physician. "You have to be confident in your own clinical skills and knowledge, confident enough that you can say it's okay for me to let go a little bit." (A) |
| | | Observation‐based evaluation (27) | Evaluation of trainee decision‐making ability during the early part of the attending/trainee relationship. "It's usually the first post‐call day experience, the first on‐call and post‐call day experience. One of the big things is [if they can] tell if a patient is sick or not sick; if they are missing at that level then I get very nervous. I really get a sense [of] how they think about patients." (A) |
| | | Educational obligation (13) | Acknowledging the role of the attending as clinical teacher. "My theory with the interns was that they should do it because that's how you learn." (R) |
| Domain (N) | Major Category (N) | Subtheme (N) | Definition and Representative Comment |
|---|---|---|---|
| Task factors (146): details or characteristics of the task that encouraged or impeded contacting the supervisor | Clinical characteristics (103) | Case complexity (25) | Evaluation of the level of difficulty in patient management. "I don't expect to be always looking over [the resident's] shoulder, I don't check labs every day, and I don't call them if I see a potassium of 3; I assume that they are going to take care of it." |
| | | Family/ethical dilemma (10) | Uncertainty regarding respecting the wishes of patients and other ethical dilemmas. "There was 1 time I called because we had a very sick patient who had a lot of family asking for more aggressive measures, and I called to be a part of the conversation." |
| | | Interdepartment collaboration (18) | Difficulties when treating patients managed by multiple consult services. "I have called [the attending] when I have had trouble pushing things through the system; if we had trouble getting tests or trouble with a particular consult team I would call him." |
| | | Urgency/severity of illness (13) | Clinical condition of the patient requires immediate or urgent intervention. "If I have something that is really pressing I would probably page my attending. If it's a question [of] just something that I didn't know the answer to [or] wasn't that urgent I could turn to my fellow residents." |
| | | Transitions of care (37) | Communication with the supervisor because of concern/uncertainty regarding patient transition decisions. "We wanted to know if it was okay to discharge somebody or if something in the plan changes. I usually text page her or call her." |
| | Situation or environment characteristics (49) | Proximity of attending physicians and support staff (10) | Availability of attending physicians and staff resources. "I have been called in once or twice to help with a lumbar puncture or paracentesis, but not too often. The procedure service makes life much easier than it used to be." |
| | | Team culture (33) | Presence or absence of a collaborative and supportive group environment. "I had a team that I did trust. I think we communicated well; we were all sort of on the same page." |
| | | Time of day (6) | Time of the task. "Once it's past 11 PM, I feel like I shouldn't call; the threshold is higher, the patient has to be sicker." |
| Domain (N) | Major Category (N) | Definition and Representative Comment |
|---|---|---|
| Systems factors (99): unmodifiable factors not related to personal characteristics or knowledge of the trainee or supervisor | Workload (15) | Increasing trainee clinical workload results in a more intensive experience. "They [residents] get 10 patients within a pretty concentrated time, so they really have to absorb a lot of information in a short period of time." |
| | Institutional culture (4) | Anticipated quality of the trainee because of the status of the institution. "I assume that our residents and interns are top notch, so I go in with this real assumption that I expect the best of them because we are [the best]." |
| | Clinical experience of trainee (36) | Types of clinical experience prior to the supervisor/trainee interaction. "The interns have done as much [general inpatient medicine] months as I have; they had both done like 2 or 3 months really close together, so they were sort of at their peak knowledge." |
| | Level of training (25) | Postgraduate year of the trainee. "It depends on the experience level of the resident. A second year who just finished internship, I am going to supervise more closely and be more detail oriented; a fourth year medicine‐pediatrics resident who is almost done, I will supervise a lot less." |
| | Duty hours/efficiency pressures (5) | Absence of residents due to other competing factors, including compliance with work‐hour restrictions. "Before the work‐hour [restrictions], when [residents] were here all the time and knew everything about the patients, I found them to be a lot more reliable, and now they are still supposed to be in charge, but hell, I am here more often than they are. I am here every day, I have more information than they do. How can you run the show if you are not here every day?" |
| | Philosophy of medical education (14) | Belief that trainees learn by the provision of completely autonomous decision making. "When you are not around, [the residents] have autonomy, they are the people making the initial decisions and making the initial assessments. They are the ones who are there in the middle of the night, the ones who are there at 3 o'clock in the afternoon. The resident is supposed to have room to make decisions. When I am not there, it's not my show." |
Trainee Factors
Attending and resident physicians both cited trainee factors as major determinants of granting entrustment (Table 1). Within this domain, the categories described included trainee personal characteristics and clinical attributes. Of the subthemes within the major category of personal characteristics, the perceived confidence or overconfidence of the trainee was mentioned most often. Other subthemes included accountability, familiarity, and honesty. Attending physicians reported using perceived resident confidence as a gauge of the trainee's true ability and comfort. Conversely, some attending physicians reported that perceived overconfidence was a red flag that warranted increased scrutiny. Faculty identified overconfidence in trainees who were unable to recognize their limitations in either technical skill or knowledge. Confidence was noted in trainees who recognized their own limitations while enacting effective management plans and who prioritized patient needs over their personal needs.
The clinical attributes of trainees described by attendings included leadership skills, communication skills, anticipated specialty, medical knowledge, and perceived recognition of limitations. All participants expressed that possession of adequate medical knowledge was the most important clinical skills‐related factor in the development of trust. Trainee demonstration of judgment, including applying evidence‐based practice, supported attending physicians' decisions to give residents more autonomy in managing patients. Many attending physicians described a specific pattern of observation and evaluation, in which they relied on impressions formed early in the rotation to inform their entrustment decisions throughout the rotation. Several attending physicians highlighted the use of this early litmus test, pointing to behavior on the first day/call night and postcall interactions as particularly valuable opportunities to gauge a resident's ability to triage new patient admissions, manage anxiety and uncertainty, and demonstrate maturity and professionalism. Several faculty members gave examples of their litmus test, including checking and knowing laboratory data prior to rounds but not mentioning their findings until they had noted the resident was unaware ("[I]f I see a 2 g hemoglobin drop when I check the [electronic medical record {EMR}] and they don't bring it up, I will bring it to their attention, and then I'll get more involved.") or assessing the management of both straightforward and complex patients. They would then use this initial impression to determine their degree of involvement in the care of the patient.
The quality and nature of a trainee's communication skills, particularly the frequency of contact between resident and attending, were used as a barometer of trainee judgment. Furthermore, attending physicians expressed that they would often micromanage patient care if they did not trust a trainee's ability to reliably and frequently communicate patient status as well as concerns and uncertainty about future decisions. Some level of expressed uncertainty was generally seen in a positive light by attending physicians, because it signaled that trainees had a mature understanding of their limitations. Finally, the trainee's expressed future specialty, especially if the trainee was a preliminary PGY‐1 resident or a more senior resident anticipating subspecialty training in a procedural specialty, impacted the degree of autonomy provided.
Supervisor Factors
Supervisor characteristics were further categorized into approachability and clinical attributes (Table 2). Approachability, as a proxy for the quality of the relationship, was cited by residents as the personality characteristic that most influenced trust. It was often described by both attending and resident physicians as the presence of a supportive team atmosphere created through explicit declaration of availability to help with patient care tasks. Some attending physicians described the importance of expressing enthusiasm when receiving queries from their team to foster an atmosphere of nonjudgmental collaboration.
The clinical experience and knowledge base of the attending physician played a role in the provision of autonomy, especially in times of disagreement about particular clinical decisions. Conversely, attending physicians who had spent less time on the inpatient general medicine service were more willing to yield to resident suggestions.
Task Factors
The domain of task factors was further divided into categories pertaining to the clinical aspects of the task and those pertaining to the context, that is, the environment in which the entrustment decisions were made (Table 3). Clinical characteristics included case complexity, presence of an ethical dilemma, interdepartmental collaboration, urgency/severity of illness, and transitions of care. The environmental characteristics included physical proximity of supervisors/support, team culture, and time of day. Increasing case complexity, especially the coexistence of legal and/or ethical dilemmas, was often mentioned as a factor driving greater attending involvement. Conversely, straightforward clinical decisions, such as electrolyte repletion, were described as sufficiently easy to allow limited attending involvement. Transitions of care, such as patient discharge or transfer, required greater communication and attending involvement or guidance, regardless of case complexity.
Attending and resident physicians reported that the team dynamics played a large role in the development, granting, or discouragement of trust. Teams with a positive rapport reported a collaborative environment that fostered increased trust by the attending and led to greater resident autonomy. Conversely, team discord that influenced the supervisor‐trainee relationship, often defined as toxic attitudes within the team, was often singled out as the reason attending physicians would feel the need to engage more directly in patient care and by extension have less trust in residents to manage their patients.
Systems Factors
Systems factors were described as nonmodifiable factors unrelated to the characteristics of the supervisor, the trainee, or the clinical task (Table 4). The categories that emerged included workload, institutional culture, clinical experience of the trainee, level of training, duty hours/efficiency pressures, and philosophy of medical education. Residents and attending physicians noted that trainee PGY and clinical experience commonly influenced the provision of autonomy and supervision by attendings. Participants reported that ensuring adequate clinical experience was of greater concern given the new duty‐hour restrictions, increased workload, and efficiency pressures. Attending physicians noted that trainee absences, even when required to comply with duty‐hour restrictions, had a negative effect on entrustment‐granting decisions. Many attendings felt that a trainee had to be physically present to make informed decisions on the inpatient medicine service.
DISCUSSION
Clinical supervisors must hold the quality of care constant while balancing the amount of supervision and autonomy provided to learners in procedural tasks and clinical decision making. We found that the development of trust is multifactorial and highly contextual. It occurs under the broad constructs of task, supervisor, trainee, and environmental factors, and is well described in prior work. We also demonstrate that often what determines these broader factors is highly subjective, frequently independent of objective measures of trainee performance. Many decisions are based on personal characteristics, such as the perception of honesty, disposition, perceived confidence or perceived overconfidence of the trainee, prior experience, and expressed future field of specialty.
Our findings are consistent with prior research but go further in describing and demonstrating the existence and innovative use of factors other than clinical knowledge and skill in the formation of a multidimensional construct of trust. Kennedy et al. identified 4 dimensions of trust (knowledge and skill, discernment, conscientiousness, and truthfulness)[15] and demonstrated that supervising physicians rely on specific processes to assess trainee trustworthiness, specifically the use of double checks and language cues. This is consistent with our results, which demonstrate that many attending physicians independently verify information, such as laboratory findings, to inform their perceptions of trainee honesty, attention to detail, and ability to follow orders reliably. Furthermore, our subthemes of communication and the demonstration of logical clinical reasoning correspond to Kennedy's use of language cues.[15] We found that language cues are used as markers of trustworthiness, particularly early in the rotation, as a litmus test to gauge the trainee's integrity and ability to assess and treat patients unsupervised.
To date, much has been written about the importance of direct observation in the evaluation of trainees.[16, 17, 18, 19] Our results demonstrate that, despite the availability of validated performance‐based assessment methods such as the objective structured clinical examination and the mini‐clinical evaluation exercise, supervising clinicians use a multifactorial, highly nuanced, and subjective process to assess competence and grant entrustment.[3] Several of the factors used to determine trustworthiness in addition to direct observation are subjective in nature, specifically the trainee's prior experience and expressed career choice.
It is encouraging that attending physicians make use of direct observations to inform decisions of entrustment, albeit in an informal and unstructured way. They also seem to take into account the context and setting in which the observation occurs, considering both environmental factors and factors that relate to the task itself.[20] For example, attendings and residents reported that team dynamics played a large role in influencing trust decisions. We also found that attending physicians rely on indirect observation and will inquire among their colleagues and other senior residents to gain information about their trainees' abilities and integrity. Evaluation tools that facilitate sharing of trainees' level of preparedness, prior feedback, and experience could support the determination of readiness to complete EPAs as well as the reporting of achieved milestones in accordance with the ACGME NAS.
Sharing knowledge about trainees among attendings is common and of increasing importance given attending physicians' shortened exposure to trainees under residency work‐hour restrictions and growing productivity pressures. In our study, attending physicians described work‐hour restrictions as detrimental to trainee trustworthiness, either through decreased accountability for patient care or through forced absences that kept trainees from fully participating in daily ward activities and knowing their patients. Attending physicians felt that, in such circumstances, trainees did not know their patients well enough to make independent decisions about care. The increasing transition to a shift‐based structure of inpatient medicine may leave even less time for direct observation and make it more difficult for attendings to justify their decisions about engendering trust. In addition, the fragmentation of training that has followed the work‐hour regulations may have consequences for the development of clinical skill and decision making, such that closer supervision and a longer lead time to entrustment may be needed in certain circumstances. Attendings need guidance on how to improve their ability to observe trainees in the new work environment and how to role model decision making effectively during their compressed exposure to housestaff.
Our study has several limitations. The organizational structure and culture of our institution are unique to 1 academic setting, which may limit the generalizability of these findings to the population at large.[21] In addition, recall bias may have influenced the interview content, given that interviews were performed after the conclusion of the rotation. The study interviews took place in 2006, and it is reasonable to believe that some perceptions concerning duty‐hour restrictions and competency‐based graduate medical education have since changed. However, based on our ongoing research over the past 5 years[4] and our experience with entrustment factors, we believe that the participants' perceptions of trust and competency remain valid and largely unchanged, given the similarity of our findings to the accepted ten Cate framework. In addition, this work was done after the first iteration of the work‐hour regulations but prior to the implementation of explicit supervisory levels, so it may represent a truer state of the supervisory relationship before external regulations were applied. Finally, this work represents a single internal medicine residency training program and may not be generalizable to other specialties that possess different cultural factors impacting the decision for entrustment. However, the congruence of our data with the original work of ten Cate, done in gynecology,[6] and with that of Sterkenberg et al. in anesthesiology,[4] suggests that our key factors apply broadly across training programs.
In conclusion, we provide new insights into the subjective factors that inform perceptions of trust and entrustment decisions by supervising physicians, specifically subjective trainee characteristics, team dynamics, and informal observation. There was agreement among attendings about which elements of competence are most important in their entrustment decisions related to trainee, supervisor, task, and environmental factors. Rather than undervaluing the use of personal factors in the determination of trust, we believe that acknowledging and appreciating these factors may give supervisors more confidence and better tools to assess resident physicians and to understand how trainees' personality traits relate to and impact their professional competence. Our findings are relevant for the development of assessment instruments to evaluate whether medical graduates are ready for safe practice without supervision.
ACKNOWLEDGEMENTS
Disclosures: Dr. Kevin Choo was supported by Scholarship and Discovery, University of Chicago, while in his role as a fourth‐year medical student. This study received institutional review board approval prior to evaluation of our human participants. Portions of this study were presented as an oral abstract at the 35th Annual Meeting of the Society of General Internal Medicine, Orlando, Florida, May 9–12, 2012.
1. Accreditation Council for Graduate Medical Education. Common Program Requirements. Available at: http://www.acgme.org/acgmeweb/tabid/429/ProgramandInstitutionalAccreditation/CommonProgramRequirements.aspx. Accessed November 30, 2013.
2. Measurement of the general competencies of the Accreditation Council for Graduate Medical Education: a systematic review. Acad Med. 2009;84:301–309.
3. Toward authentic clinical evaluation: pitfalls in the pursuit of competency. Acad Med. 2010;85(5):780–786.
4. When do supervising physicians decide to entrust residents with unsupervised tasks? Acad Med. 2010;85(9):1408–1417.
5. The next GME accreditation system—rationale and benefits. N Engl J Med. 2012;366(11):1051–1056.
6. Trust, competence and the supervisor's role in postgraduate training. BMJ. 2006;333:748–751.
7. Clinical decision making and impact on patient care: a qualitative study. Qual Saf Health Care. 2008;17(2):122–126.
8. Strategies for effective on call supervision for internal medicine residents: the SUPERB/SAFETY model. J Grad Med Educ. 2010;2(1):46–52.
9. On‐call supervision and resident autonomy: from micromanager to absentee attending. Am J Med. 2009;122(8):784–788.
10. The critical incident technique. Psychol Bull. 1954;51(4):327–359.
11. Critical evaluation of appreciative inquiry: bridging an apparent paradox. Action Res. 2006;4(4):401–418.
12. Basics of Qualitative Research. 2nd ed. Thousand Oaks, CA: Sage Publications; 1998.
13. How to Design and Evaluate Research in Education. New York, NY: McGraw‐Hill; 2003.
14. Qualitative Data Analysis. Thousand Oaks, CA: Sage; 1994.
15. Point‐of‐care assessment of medical trainee competence for independent clinical work. Acad Med. 2008;84:S89–S92.
16. Viewpoint: competency‐based postgraduate training: can we bridge the gap between theory and clinical practice? Acad Med. 2007;82(6):542–547.
17. Assessment of competence and progressive independence in postgraduate clinical training. Med Educ. 2009;43:1156–1165.
18. Tools for direct observation and assessment of clinical skills of medical trainees: a systematic review. JAMA. 2009;302(12):1316–1326.
19. Assessment in medical education. N Engl J Med. 2007;356:387–396.
20. A prospective study of paediatric cardiac surgical microsystems: assessing the relationships between non‐routine events, teamwork and patient outcomes. BMJ Qual Saf. 2011;20(7):599–603.
21. Generalizability and transferability of meta‐synthesis research findings. J Adv Nurs. 2010;66(2):246–254.
© 2014 Society of Hospital Medicine
Ponatinib back on the market
Less than 3 months after it was pulled from the market due to safety concerns, ponatinib (Iclusig) is once again commercially available in the US.
Ariad Pharmaceuticals, Inc., has begun shipping the drug to Biologics, Inc., its exclusive specialty pharmacy. And the pharmacy has started filling prescriptions and distributing ponatinib to patients in need.
The drug is approved by the US Food and Drug Administration (FDA) to treat chronic myeloid leukemia (CML) or Philadelphia chromosome-positive acute lymphoblastic leukemia (Ph+ ALL) that is resistant to or intolerant of other tyrosine kinase inhibitors (TKIs).
Safety concerns prompt action
Last October, the latest results of the phase 2 PACE trial revealed that ponatinib can increase a patient’s risk of arterial and venous thrombotic events. So all trials of the drug were placed on partial clinical hold, with the exception of the phase 3 EPIC trial, which was discontinued.
Then, the FDA suspended sales and marketing of ponatinib, pending results of a safety evaluation. But in December, the agency decided the drug could return to the market if new safety measures were implemented.
The FDA approved revised prescribing information and a communications Risk Evaluation and Mitigation Strategy for ponatinib. The prescribing information includes a revised indication statement and boxed warning, updated safety information, and recommendations regarding dosing considerations for prescribers.
Now, ponatinib is indicated for the treatment of:
- Adults with T315I-positive CML (chronic, accelerated, or blast phase)
- Adults with T315I-positive Ph+ ALL
- Adults with CML (chronic, accelerated, or blast phase) who cannot receive another TKI
- Adults with Ph+ ALL who cannot receive another TKI.
The starting dose of ponatinib remains 45 mg daily.
IND program
On November 1, 2013, there were approximately 640 patients receiving ponatinib through commercial channels in the US. Since then, the drug has been available only through emergency and single-patient investigational new drug (IND) applications, which are reviewed and approved by the FDA on a case-by-case basis.
The FDA has approved more than 370 INDs since early November, and more than 300 patients have received ponatinib at no cost through this process.
Ariad expects most of these patients, many of whom received a 3-month supply of ponatinib, to transition from the IND program to commercial therapy by the end of the first quarter of 2014. The IND program is now closed to new patients with Ph+ leukemias.
Ponatinib is currently priced in the US at approximately $125,000 per year. For more information on the drug, visit www.iclusig.com.
Committee votes against rivaroxaban for ACS
Credit: Mass. General Hospital
A US Food and Drug Administration (FDA) advisory committee has voted against expanding the indication for the anticoagulant rivaroxaban (Xarelto).
The drug’s developers are seeking approval for rivaroxaban to be used in combination with standard antiplatelet therapy to reduce the risk of thrombotic cardiovascular events in patients with acute coronary syndrome (ACS). The proposed dose is 2.5 mg twice daily for 90 days.
But the FDA’s Cardiovascular and Renal Drugs Advisory Committee voted—nearly unanimously (with 1 abstention)—against this approval.
And the FDA will take this recommendation into account when deciding whether to expand rivaroxaban’s indication. This will be the agency’s third review of the drug for this indication.
Rivaroxaban is currently FDA-approved to reduce the risk of stroke and thrombosis in patients with non-valvular atrial fibrillation, treat patients with venous thromboembolism (VTE), reduce the risk of recurrent VTE, and reduce the risk of VTE in patients who have undergone knee replacement surgery or hip replacement surgery.
Headed for a third rejection?
The advisory committee’s recommendation was based on a review of data from the phase 3 ATLAS ACS 2 TIMI 51 trial, which were published in NEJM in January 2012.
The study showed that rivaroxaban, given in combination with standard antiplatelet therapy, reduced the composite endpoint of cardiovascular death, myocardial infarction, and stroke in ACS patients. But it also increased the risk of major bleeding and intracranial hemorrhage.
Based on these results, rivaroxaban’s developers—Janssen Research & Development, LLC, and Bayer HealthCare—filed for FDA approval of rivaroxaban to treat patients with ACS.
In June 2012, the FDA rejected the application, a month after an advisory committee voted against the approval. The committee had expressed concerns about the risks of bleeding associated with rivaroxaban, as well as reservations about missing data from the ATLAS ACS 2 TIMI 51 trial.
Though Janssen and Bayer went on to submit the missing data, the FDA rejected the drug again in March 2013. However, the agency suggested the companies seek approval for rivaroxaban given only for a limited time after an ACS event, as the drug might be safer and more effective when used this way.
So the companies submitted an application for rivaroxaban given within the first 90 days of ACS diagnosis.
The advisory committee voted against this use of the drug, however, saying the benefits of this treatment still do not appear to outweigh the risks for this patient population.
A representative from Janssen said the company still believes rivaroxaban can be useful for patients with ACS, and Janssen and Bayer will work with the FDA to address the issues the committee raised.
For more details and data on rivaroxaban, see the briefing information compiled for the advisory committee’s meeting.