Burden of psychiatric comorbidity higher in MS patients


The burden of psychiatric comorbidity is greater in patients with multiple sclerosis (MS) than in the general population, reported Dr. Ruth Ann Marrie and coauthors from the departments of psychiatry and medicine at the University of Manitoba, Winnipeg.

A study of 44,452 MS patients and 220,849 controls in four Canadian provinces from 1995 to 2005 found that the incidence of depression in the MS group was 0.98% (95% CI, 0.81%-1.15%), compared with 0.72% (95% CI, 0.67%-0.76%) in the control group. The prevalence of depression was 20.1% in MS patients (19.5%-20.6%), compared with 11.9% (11.8%-12.1%) in the matched population, the authors noted.


Also, the incidence and prevalence of anxiety disorder in the MS population were 0.64% (0.54%-0.73%) and 8.7% (8.4%-9.1%), respectively, compared with 0.42% (0.39%-0.45%) and 5.1% (4.9%-5.2%) in controls.

For bipolar disorder, the MS group had an incidence of 0.33% (0.26%-0.39%), compared with 0.16% (0.14%-0.18%) in controls. Prevalence was 4.7% (4.4%-4.9%) in the MS group and 2.3% (2.2%-2.3%) in controls.

Lastly, in schizophrenia, MS patients had an incidence of 0.060% (0.031%-0.080%), compared with 0.018% (0.011%-0.024%) in controls. Prevalence was 1.28% (1.15%-1.41%) in the MS group and 1.03% (0.99%-1.08%) in controls, the investigators said.

The findings suggest a “nonspecific effect of MS on psychiatric comorbidity,” Dr. Marrie and colleagues said in the report.

“From a policy perspective, this implies the need for general psychiatric support rather than illness-specific strategies,” they concluded.

Read the study in Neurology.

mrajaraman@frontlinemedcom.com



New and Noteworthy Information—November 2015


HLA-DRB1*1501, adolescent summer sun habits, and BMI at the age of 20 independently affect age of multiple sclerosis (MS) onset, according to a study published online ahead of print October 7 in Neurology. This cross-sectional study included 1,161 Danish patients with MS. Lifestyle questionnaires and blood samples for genotyping were collected from all participants from 2009 to 2012. Information on age at onset was obtained from the Danish MS Treatment Registry. Younger age at onset was significantly associated with low exposure to summer sun in adolescence, higher BMI at age 20, and the HLA-DRB1*1501 risk allele in both univariate analyses and in a multivariable regression analysis. No association was found between age at onset and other single-nucleotide polymorphisms studied or vitamin D-associated environmental factors.

Treatment responses for autoimmune ataxia are more likely in patients with nonparaneoplastic disorders and those with exclusively plasma membrane protein (PMP) antibodies, according to a study published online ahead of print September 28 in JAMA Neurology. Investigators examined 118 patients with ataxia who were 18 or older, were seropositive for at least one neural autoantibody, had received at least one immunotherapy or cancer therapy, and had neurologist-reported outcomes documented from January 1, 1989, through December 31, 2013. Fifty-four patients had neurologic improvements. Kaplan-Meier analyses revealed that progression to wheelchair dependence occurred significantly faster among patients with neuronal nuclear or cytoplasmic antibody positivity only, although those with glutamic acid decarboxylase 65-kDa isoform autoimmunity progressed to wheelchair dependence at a rate similar to those with PMP autoimmunity.

Patients with celiac disease are not at increased risk for dementia overall, though they may be at increased risk for vascular dementia, according to a study published online ahead of print September 29 in Journal of Alzheimer’s Disease. Researchers compared the incidence of a subsequent dementia diagnosis among 8,846 older adults with celiac disease to that among 43,474 age- and gender-matched controls. The median age of the study population was 63, and 56% of participants were female. During a median follow-up time of 8.4 years, dementia was diagnosed in 4.3% of patients with celiac disease and 4.4% of controls. The researchers observed an increased risk of dementia in the first year following a diagnosis of celiac disease, but the increased risk was restricted to vascular dementia and was not present for Alzheimer’s dementia.

Infection may trigger childhood arterial ischemic stroke, while routine vaccinations appear to protect against it, according to a study published online ahead of print September 30 in Neurology. This international case–control study included 355 children with confirmed cases of arterial ischemic stroke and 354 controls without stroke. Median age was 7.6 years for cases and 9.3 years for controls. Infection in the week prior to stroke, or prior to the interview date for controls, was reported in 18% of cases versus 3% of controls. Infection thus conferred a 6.3-fold increased risk of arterial ischemic stroke. Children with some, few, or no routine vaccinations were at higher stroke risk than those receiving all or most vaccinations. Risk factors for arterial ischemic stroke included infection in the prior week, undervaccination, black race, and rural residence.

Amyloid PET and CSF biomarkers identify early Alzheimer’s disease with equal accuracy, according to a study published October 6 in Neurology. Researchers examined 122 healthy elderly people and 34 patients with mild cognitive impairment who developed Alzheimer’s disease dementia within three years (MCI-AD). They examined β-amyloid deposition in nine brain regions with [18F]-flutemetamol PET. CSF was analyzed with INNOTEST and EUROIMMUN ELISAs. CSF samples and PET scans each identified approximately 90% of patients who later received a diagnosis of Alzheimer’s disease. The best CSF measures for identifying MCI-AD were Aβ42/total tau and Aβ42/hyperphosphorylated tau, which performed better than CSF Aβ42 and Aβ42/40. CSF Aβ42/total tau had the highest accuracy of all CSF and PET biomarkers. The combination of CSF and PET was not better than either individual biomarker.

A combination of dextromethorphan and quinidine demonstrated clinically relevant efficacy for agitation in patients with probable Alzheimer’s disease and was generally well tolerated, according to a study published September 22 in JAMA. A total of 194 patients completed a preliminary 10-week phase II randomized clinical trial. In the sequential parallel comparison design, 152 patients received dextromethorphan–quinidine, and 127 received placebo. Analysis combining all patients and rerandomized placebo nonresponders showed significantly reduced agitation and aggression scores for dextromethorphan–quinidine versus placebo. Among all patients, mean agitation and aggression scores were reduced from 7.1 to 3.8 with dextromethorphan–quinidine and from 7.0 to 5.3 with placebo. Between-group treatment differences were significant. Among rerandomized placebo nonresponders, agitation and aggression scores were reduced from 5.8 to 3.8 with dextromethorphan–quinidine and from 6.7 to 5.8 with placebo.


The FDA has approved Betaconnect, an electronic autoinjector for the treatment of relapsing-remitting multiple sclerosis. Bayer HealthCare (Whippany, NJ) manufactures Betaconnect, which will be available to patients receiving Betaseron beginning in early 2016. The autoinjector, which was created based on feedback from patients and caregivers, offers customizable injection speed and depth settings that allow patients to administer injections quietly and precisely. Betaconnect also has an optional backup reminder function that tells patients the time of their next injection. In addition, the automatic needle insertion and retraction and a visual and audio end-of-dose indication tell patients when the injection is complete. Patients should speak with a healthcare provider before making any changes to injection depth or speed settings.

In patients with an intracranial pressure of more than 20 mm Hg after traumatic brain injury (TBI), therapeutic hypothermia plus standard care to reduce intracranial pressure does not result in outcomes better than those associated with standard care alone, according to a study published online ahead of print October 7 in New England Journal of Medicine. Investigators enrolled 387 patients with TBI from November 2009 through October 2014. Barbiturates and decompressive craniectomy were required to control intracranial pressure in 54% of patients who received standard care and in 44% of patients who received hypothermia and standard care. The hypothermia group had worse outcomes in general than the standard-care group. A favorable outcome occurred in 26% of patients in the hypothermia group and in 37% of patients in the control group.

Differing manifestations of postconcussion symptoms on functional MRI (fMRI) between younger and older patients indicate that age influences the activation, modulation, and allocation of working memory processing resources after mild traumatic brain injury (MTBI), according to a study published online ahead of print October 6 in Radiology. Researchers performed fMRI exams on 13 young adults and 13 older adults with MTBI and 26 age- and gender-matched controls. Younger patients performing working-memory tasks had initial hyperactivation in the right precuneus and right inferior parietal gyrus, compared with younger controls. Older patients performing these tasks had hypoactivation in the right precuneus and right inferior frontal gyrus, compared with older controls. Younger patients, but not older patients, had partial recovery of activation pattern and decreased postconcussion symptoms at follow-up.

An immune system gene is associated with higher rates of amyloid plaque buildup in the brains of patients with Alzheimer’s disease and older adults at risk for the disease, according to a study published in the October issue of Brain. Investigators performed a genome-wide association study of longitudinal change in brain amyloid burden measured by 18F-florbetapir PET. They found that interleukin-1 receptor accessory protein (IL1RAP) was associated with higher rates of amyloid accumulation, independent of APOE ε4 status. This novel association was validated by deep sequencing. IL1RAP rs12053868-G carriers were more likely to progress from mild cognitive impairment to Alzheimer’s disease and exhibited greater longitudinal temporal cortex atrophy on MRI. In independent cohorts, rs12053868-G was associated with accelerated cognitive decline and lower cortical 11C-PBR28 PET signal.

For children with tuberous sclerosis complex and medically intractable epilepsy, a greater extent of resection is associated with a greater probability of seizure freedom, according to a study published in the October issue of Neurosurgery. Seventy-four patients were included in this retrospective chart review, and their median age at the time of surgery was 120 months. Engel Class I outcome was achieved in 65% and in 50% of patients at the one- and two-year follow-up, respectively. On univariate analyses, younger age at seizure onset, larger size of predominant tuber, and resection larger than a tuberectomy were associated with a longer duration of seizure freedom. In multivariate analyses, resection larger than a tuberectomy was independently associated with a longer duration of seizure freedom.

A new imaging method that uses a 7-T magnet shows promise in locating hard-to-find epileptic foci by visualizing the neurotransmitter glutamate, according to a study published October 14 in Science Translational Medicine. In a pilot study, researchers applied glutamate chemical exchange saturation transfer (GluCEST) to patients with nonlesional temporal lobe epilepsy based on conventional MRI. GluCEST correctly lateralized the temporal lobe seizure focus on visual and quantitative analyses in all patients. Hippocampal volumes were not significantly different between hemispheres. GluCEST allowed high-resolution functional imaging of brain glutamate and has the potential to identify the epileptic focus in patients previously deemed nonlesional. This method may lead to improved clinical outcomes for temporal lobe epilepsy as well as other localization-related epilepsies, according to the researchers.

Kimberly Williams

Issue: Neurology Reviews - 23(11), pages 7-8

HLA-DRB1*1501, adolescent summer sun habits, and BMI at the age of 20 independently affect age of multiple sclerosis (MS) onset, according to a study published online ahead of print October 7 in Neurology. This cross-sectional study included 1,161 Danish patients with MS. Lifestyle questionnaires and blood samples for genotyping were collected from all participants from 2009 to 2012. Information on age at onset was obtained from the Danish MS Treatment Registry. Younger age at onset was significantly associated with low exposure to summer sun in adolescence, higher BMI at age 20, and the HLA-DRB1*1501 risk allele in both univariate analyses and in a multivariable regression analysis. No association was found between age at onset and other single-nucleotide polymorphisms studied or vitamin D-associated environmental factors.

Treatment responses for autoimmune ataxia are more likely in patients with nonparaneoplastic disorders and those with exclusively plasma membrane protein (PMP) antibodies, according to a study published online ahead of print September 28 in JAMA Neurology. Investigators examined 118 patients with ataxia who were 18 or older, were seropositive for at least one neural autoantibody, had received at least one immunotherapy or cancer therapy, and had neurologist-reported outcomes documented from January 1, 1989, through December 31, 2013. Fifty-four patients had neurologic improvements. Kaplan-Meier analyses revealed that progression to wheelchair dependence occurred significantly faster among patients with neuronal nuclear or cytoplasmic antibody positivity only, although those with glutamic acid decarboxylase 65-kDa isoform autoimmunity progressed to wheelchair dependence at a rate similar to those with PMP autoimmunity.

Patients with celiac disease are not at increased risk for dementia overall, though they may be at increased risk for vascular dementia, according to a study published online ahead of print September 29 in Journal of Alzheimer’s Disease. Researchers compared the incidence of a subsequent dementia diagnosis among 8,846 older adults with celiac disease to that among 43,474 age- and gender-matched controls. The median age of the study population was 63, and 56% of participants were female. During a median follow-up time of 8.4 years, dementia was diagnosed in 4.3% of patients with celiac disease and 4.4% of controls. The researchers observed an increased risk of dementia in the first year following a diagnosis of celiac disease, but the increased risk was restricted to vascular dementia and was not present for Alzheimer’s dementia.

Infection may trigger childhood arterial ischemic stroke, while routine vaccinations appear to protect against it, according to a study published online ahead of print September 30 in Neurology. This international case–control study included 355 children with confirmed cases of arterial ischemic stroke and 354 controls without stroke. Median age was 7.6 for cases and 9.3 for controls. Infection in the week prior to stroke, or interview date for controls, was reported in 18% of cases versus 3% of controls. Infection thus conferred a 6.3-fold increased risk of arterial ischemic stroke. Children with some, few, or no routine vaccinations were at higher stroke risk than those receiving all or most vaccinations. Risk factors for arterial ischemic stroke included infection in the prior week, undervaccination, black race, and rural residence.

Amyloid PET and CSF biomarkers identify early Alzheimer’s disease with equal accuracy, according to a study published October 6 in Neurology. Researchers examined 122 healthy elderly people and 34 patients with mild cognitive impairment who developed Alzheimer’s disease dementia within three years (MCI-AD). They examined β-amyloid deposition in nine brain regions with [18F]-flutemetamol PET. CSF was analyzed with INNOTEST and EUROIMMUN ELISAs. CSF samples and PET scans each identified approximately 90% of patients who later received a diagnosis of Alzheimer’s disease. The best CSF measures for identifying MCI-AD were Aβ42/total tau and Aβ42/hyperphosphorylated tau, which performed better than CSF Aβ42 and Aβ42/40. CSF Aβ42/total tau had the highest accuracy of all CSF and PET biomarkers. The combination of CSF and PET was not better than either individual biomarker.

A combination of dextromethorphan and quinidine demonstrated clinically relevant efficacy for agitation in patients with probable Alzheimer’s disease and was generally well tolerated, according to a study published September 22 in JAMA. A total of 194 patients completed a preliminary 10-week phase II randomized clinical trial. In the sequential parallel comparison design, 152 patients received dextromethorphan–quinidine, and 127 received placebo. Analysis combining all patients and rerandomized placebo nonresponders showed significantly reduced agitation and aggression scores for dextromethorphan–quinidine versus placebo. Among all patients, mean agitation and aggression scores were reduced from 7.1 to 3.8 with dextromethorphan–quinidine and from 7.0 to 5.3 with placebo. Between-group treatment differences were significant. Among rerandomized placebo nonresponders, agitation and aggression scores were reduced from 5.8 to 3.8 with dextromethorphan–quinidine and from 6.7 to 5.8 with placebo.

 

 

The FDA has approved Betaconnect, an electronic autoinjector for the treatment of relapsing-remitting multiple sclerosis. Bayer HealthCare (Whippany, NJ) manufactures Betaconnect, which will be available to patients receiving Betaseron beginning in early 2016. The autoinjector, which was created based on feedback from patients and caregivers, offers customizable injection speed and depth settings that allow patients to administer injections quietly and precisely. Betaconnect also has an optional backup reminder function that tells patients the time of their next injection. In addition, the automatic needle insertion and retraction and a visual and audio end-of-dose indication tell patients when the injection is complete. Patients should speak with a healthcare provider before making any changes to injection depth or speed settings.

In patients with an intracranial pressure of more than 20 mmHg after traumatic brain injury (TBI), therapeutic hypothermia plus standard care to reduce intracranial pressure do not result in outcomes better than those associated with standard care alone, according to a study published online ahead of print October 7 in New England Journal of Medicine. Investigators enrolled 387 patients with TBI from November 2009 through October 2014 in a study. Barbiturates and decompressive craniectomy were required to control intracranial pressure in 54% of patients who received standard care and in 44% of patients who received hypothermia and standard care. The hypothermia group had worse outcomes in general than the standard-care group. A favorable outcome occurred in 26% of patients in the hypothermia group and in 37% of patients in the control group.

Differing manifestations of postconcussion symptoms on functional MRI (fMRI) between younger and older patients indicate that age influences the activation, modulation, and allocation of working memory processing resources after mild traumatic brain injury (MTBI), according to a study published online ahead of print October 6 in Radiology. Researchers performed fMRI exams on 13 young adults and 13 older adults with MTBI and 26 age- and gender-matched controls. Younger patients performing working-memory tasks had initial hyperactivation in the right precuneus and right inferior parietal gyrus, compared with younger controls. Older patients performing these tasks had hypoactivation in the right precuneus and right inferior frontal gyrus, compared with older controls. Younger patients, but not older patients, had partial recovery of activation pattern and decreased postconcussion symptoms at follow-up.

An immune system gene is associated with higher rates of amyloid plaque buildup in the brains of patients with Alzheimer’s disease and older adults at risk for the disease, according to a study published in the October issue of Brain. Investigators performed a genome-wide association study of longitudinal change in brain amyloid burden measured by 18F-florbetapir PET. They found that interleukin-1 receptor accessory protein (IL1RAP) was associated with higher rates of amyloid accumulation, independent of APOE ε4 status. This novel association was validated by deep sequencing. IL1RAP rs12053868-G carriers were more likely to progress from mild cognitive impairment to Alzheimer’s disease and exhibited greater longitudinal temporal cortex atrophy on MRI. In independent cohorts, rs12053868-G was associated with accelerated cognitive decline and lower cortical 11C-PBR28 PET signal.

For children with tuberous sclerosis complex and medically intractable epilepsy, a greater extent of resection is associated with a greater probability of seizure freedom, according to a study published in the October issue of Neurosurgery. Seventy-four patients were included in this retrospective chart review, and their median age at the time of surgery was 120 months. Engel Class I outcome was achieved in 65% and in 50% of patients at the one- and two-year follow-up, respectively. On univariate analyses, younger age at seizure onset, larger size of predominant tuber, and resection larger than a tuberectomy were associated with a longer duration of seizure freedom. In multivariate analyses, resection larger than a tuberectomy was independently associated with a longer duration of seizure freedom.

A new imaging method that uses a 7-T magnet shows promise in locating hard-to-find epileptic foci by visualizing the neurotransmitter glutamate, according to a study published October 14 in Science Translational Medicine. In a pilot study, researchers applied glutamate chemical exchange saturation transfer (GluCEST) to patients with nonlesional temporal lobe epilepsy based on conventional MRI. GluCEST correctly lateralized the temporal lobe seizure focus on visual and quantitative analyses in all patients. Hippocampal volumes were not significantly different between hemispheres. GluCEST allowed high-resolution functional imaging of brain glutamate and has the potential to identify the epileptic focus in patients previously deemed nonlesional. This method may lead to improved clinical outcomes for temporal lobe epilepsy as well as other localization-related epilepsies, according to the researchers.

Kimberly Williams

HLA-DRB1*1501, adolescent summer sun habits, and BMI at the age of 20 independently affect age of multiple sclerosis (MS) onset, according to a study published online ahead of print October 7 in Neurology. This cross-sectional study included 1,161 Danish patients with MS. Lifestyle questionnaires and blood samples for genotyping were collected from all participants from 2009 to 2012. Information on age at onset was obtained from the Danish MS Treatment Registry. Younger age at onset was significantly associated with low exposure to summer sun in adolescence, higher BMI at age 20, and the HLA-DRB1*1501 risk allele in both univariate analyses and in a multivariable regression analysis. No association was found between age at onset and other single-nucleotide polymorphisms studied or vitamin D-associated environmental factors.

Treatment responses for autoimmune ataxia are more likely in patients with nonparaneoplastic disorders and those with exclusively plasma membrane protein (PMP) antibodies, according to a study published online ahead of print September 28 in JAMA Neurology. Investigators examined 118 patients with ataxia who were 18 or older, were seropositive for at least one neural autoantibody, had received at least one immunotherapy or cancer therapy, and had neurologist-reported outcomes documented from January 1, 1989, through December 31, 2013. Fifty-four patients had neurologic improvements. Kaplan-Meier analyses revealed that progression to wheelchair dependence occurred significantly faster among patients with neuronal nuclear or cytoplasmic antibody positivity only, although those with glutamic acid decarboxylase 65-kDa isoform autoimmunity progressed to wheelchair dependence at a rate similar to those with PMP autoimmunity.

Patients with celiac disease are not at increased risk for dementia overall, though they may be at increased risk for vascular dementia, according to a study published online ahead of print September 29 in Journal of Alzheimer’s Disease. Researchers compared the incidence of a subsequent dementia diagnosis among 8,846 older adults with celiac disease to that among 43,474 age- and gender-matched controls. The median age of the study population was 63, and 56% of participants were female. During a median follow-up time of 8.4 years, dementia was diagnosed in 4.3% of patients with celiac disease and 4.4% of controls. The researchers observed an increased risk of dementia in the first year following a diagnosis of celiac disease, but the increased risk was restricted to vascular dementia and was not present for Alzheimer’s dementia.

Infection may trigger childhood arterial ischemic stroke, while routine vaccinations appear to protect against it, according to a study published online ahead of print September 30 in Neurology. This international case–control study included 355 children with confirmed cases of arterial ischemic stroke and 354 controls without stroke. Median age was 7.6 for cases and 9.3 for controls. Infection in the week prior to stroke, or interview date for controls, was reported in 18% of cases versus 3% of controls. Infection thus conferred a 6.3-fold increased risk of arterial ischemic stroke. Children with some, few, or no routine vaccinations were at higher stroke risk than those receiving all or most vaccinations. Risk factors for arterial ischemic stroke included infection in the prior week, undervaccination, black race, and rural residence.

Amyloid PET and CSF biomarkers identify early Alzheimer’s disease with equal accuracy, according to a study published October 6 in Neurology. Researchers examined 122 healthy elderly people and 34 patients with mild cognitive impairment who developed Alzheimer’s disease dementia within three years (MCI-AD). They examined β-amyloid deposition in nine brain regions with [18F]-flutemetamol PET. CSF was analyzed with INNOTEST and EUROIMMUN ELISAs. CSF samples and PET scans each identified approximately 90% of patients who later received a diagnosis of Alzheimer’s disease. The best CSF measures for identifying MCI-AD were Aβ42/total tau and Aβ42/hyperphosphorylated tau, which performed better than CSF Aβ42 and Aβ42/40. CSF Aβ42/total tau had the highest accuracy of all CSF and PET biomarkers. The combination of CSF and PET was not better than either individual biomarker.

A combination of dextromethorphan and quinidine demonstrated clinically relevant efficacy for agitation in patients with probable Alzheimer’s disease and was generally well tolerated, according to a study published September 22 in JAMA. A total of 194 patients completed a preliminary 10-week phase II randomized clinical trial. In the sequential parallel comparison design, 152 patients received dextromethorphan–quinidine, and 127 received placebo. Analysis combining all patients and rerandomized placebo nonresponders showed significantly reduced agitation and aggression scores for dextromethorphan–quinidine versus placebo. Among all patients, mean agitation and aggression scores were reduced from 7.1 to 3.8 with dextromethorphan–quinidine and from 7.0 to 5.3 with placebo. Between-group treatment differences were significant. Among rerandomized placebo nonresponders, agitation and aggression scores were reduced from 5.8 to 3.8 with dextromethorphan–quinidine and from 6.7 to 5.8 with placebo.


The FDA has approved Betaconnect, an electronic autoinjector for the treatment of relapsing-remitting multiple sclerosis. Bayer HealthCare (Whippany, NJ) manufactures Betaconnect, which will be available to patients receiving Betaseron beginning in early 2016. The autoinjector, which was created based on feedback from patients and caregivers, offers customizable injection speed and depth settings that allow patients to administer injections quietly and precisely. Betaconnect also has an optional backup reminder function that tells patients the time of their next injection. In addition, the automatic needle insertion and retraction and a visual and audio end-of-dose indication tell patients when the injection is complete. Patients should speak with a healthcare provider before making any changes to injection depth or speed settings.

In patients with an intracranial pressure of more than 20 mmHg after traumatic brain injury (TBI), therapeutic hypothermia plus standard care to reduce intracranial pressure does not result in better outcomes than standard care alone, according to a study published online ahead of print October 7 in the New England Journal of Medicine. Investigators enrolled 387 patients with TBI from November 2009 through October 2014. Barbiturates and decompressive craniectomy were required to control intracranial pressure in 54% of patients who received standard care and in 44% of patients who received hypothermia plus standard care. The hypothermia group had generally worse outcomes than the standard-care group. A favorable outcome occurred in 26% of patients in the hypothermia group and in 37% of patients in the control group.

Differing manifestations of postconcussion symptoms on functional MRI (fMRI) between younger and older patients indicate that age influences the activation, modulation, and allocation of working memory processing resources after mild traumatic brain injury (MTBI), according to a study published online ahead of print October 6 in Radiology. Researchers performed fMRI exams on 13 young adults and 13 older adults with MTBI and 26 age- and gender-matched controls. Younger patients performing working-memory tasks had initial hyperactivation in the right precuneus and right inferior parietal gyrus, compared with younger controls. Older patients performing these tasks had hypoactivation in the right precuneus and right inferior frontal gyrus, compared with older controls. Younger patients, but not older patients, had partial recovery of activation pattern and decreased postconcussion symptoms at follow-up.

An immune system gene is associated with higher rates of amyloid plaque buildup in the brains of patients with Alzheimer’s disease and older adults at risk for the disease, according to a study published in the October issue of Brain. Investigators performed a genome-wide association study of longitudinal change in brain amyloid burden measured by 18F-florbetapir PET. They found that interleukin-1 receptor accessory protein (IL1RAP) was associated with higher rates of amyloid accumulation, independent of APOE ε4 status. This novel association was validated by deep sequencing. IL1RAP rs12053868-G carriers were more likely to progress from mild cognitive impairment to Alzheimer’s disease and exhibited greater longitudinal temporal cortex atrophy on MRI. In independent cohorts, rs12053868-G was associated with accelerated cognitive decline and lower cortical 11C-PBR28 PET signal.

For children with tuberous sclerosis complex and medically intractable epilepsy, a greater extent of resection is associated with a greater probability of seizure freedom, according to a study published in the October issue of Neurosurgery. Seventy-four patients were included in this retrospective chart review; their median age at the time of surgery was 120 months. Engel Class I outcome was achieved in 65% and 50% of patients at the one- and two-year follow-ups, respectively. On univariate analyses, younger age at seizure onset, larger size of the predominant tuber, and resection larger than a tuberectomy were associated with a longer duration of seizure freedom. In multivariate analyses, resection larger than a tuberectomy was independently associated with a longer duration of seizure freedom.

A new imaging method that uses a 7-T magnet shows promise in locating hard-to-find epileptic foci by visualizing the neurotransmitter glutamate, according to a study published October 14 in Science Translational Medicine. In a pilot study, researchers applied glutamate chemical exchange saturation transfer (GluCEST) to patients whose temporal lobe epilepsy was nonlesional on conventional MRI. GluCEST correctly lateralized the temporal lobe seizure focus on visual and quantitative analyses in all patients. Hippocampal volumes were not significantly different between hemispheres. GluCEST allowed high-resolution functional imaging of brain glutamate and has the potential to identify the epileptic focus in patients previously deemed nonlesional. This method may lead to improved clinical outcomes for temporal lobe epilepsy as well as other localization-related epilepsies, according to the researchers.

Kimberly Williams


Issue
Neurology Reviews - 23(11)
Page Number
7-8
Display Headline
New and Noteworthy Information—November 2015
Legacy Keywords
Essential Tremor, MS, Celiac disease, stroke, Alzheimer's disease, TBI

Lymphedema Patients Benefit from Pneumatic Compression Devices


NEW YORK - Patients with lymphedema may reduce their risk of cellulitis, as well as the number of outpatient visits, by using an advanced pneumatic compression device (APCD), according to a new study.

"Our study demonstrates, for the first time, that receipt of an advanced pneumatic compression device is associated with significant improvements in key clinical endpoints for lymphedema patients, both for those with cancer and those without," Dr. Pinar Karaca-Mandic of the University of Minnesota School of Public Health in Minneapolis said by email.

"This finding has important implications for the patients who suffer from the disease, especially for those who have high rates of cellulitis. These devices serve as a viable self-management option and can reduce the need for more intensive outpatient care in rehabilitative settings," she added.

Advanced devices have more garment chambers and greater adjustability than earlier devices, the researchers wrote.

Dr. Karaca-Mandic and colleagues used a commercial insurance claims database to compare outcomes for 12 months before and 12 months after APCD purchase (Flexitouch System, Tactile Medical) by 718 patients (374 with cancer) between 2008 and 2012.

Lymphedema-related outcomes were identified by primary or secondary diagnosis codes.

The patients' mean age was 54.2 years; 84.8% were female, and 71.6% were non-Hispanic white. Just over half (52.2%) had hypertension, and breast cancer (39.6%) was the predominant disease in the cancer group.

As reported online October 7 in JAMA Dermatology, the adjusted rate of cellulitis diagnoses fell from 21.1% before APCD use to 4.5% afterward (p<0.001), a 79% decline. The noncancer group had a 75% decline, from 28.8% to 7.3% (p<0.001).

The noncancer group also had a 54% decline in adjusted rate of hospitalizations, from 7.0% to 3.2% (p=0.02), the authors reported.

Both groups had declines in receipt of manual therapy, from an adjusted rate of 35.6% before APCD use to 24.9% afterward for cancer patients (p<0.001) and from 32.3% to 21.2% for noncancer patients (p<0.001).

The adjusted rate of outpatient visits fell from 58.6% to 41.4% in the cancer cohort and from 52.6% to 31.4% in the noncancer group (p<0.001 for both).

Total costs per patient, excluding medical equipment, declined from $2597 to $1642 for cancer patients (p=0.002) and from $2937 to $1883 (p=0.007) for noncancer patients.

"While our findings are based upon the outcomes from one specific device, it is possible other such devices may also reduce patient burden. This warrants explorations in future studies. In addition, our study was not designed to assess the long term effectiveness of the device. That should be studied in future work," Dr. Karaca-Mandic explained.

Also, she pointed out, her team didn't look at nonmonetary expenses such as productivity loss and caretaker costs. "To the extent that device use improves physical functioning and lowers such costs as well, the impact is likely much larger than we can measure," she added.

Dr. Peter J. Franks, of the Center for Research and Implementation of Clinical Practice in London, UK, said by email, "We have these devices that appear to work. The problem is that the evidence on efficacy and cost effectiveness is so poor. The article gave some retrospective observational data that implied that the incidence of infection (cellulitis) was reduced. This is important, as infections lead to further deterioration of the lymphatic system, making the situation worse for the patient and increasing the risk of further infections."

"It is hard to say how generalizable the results are to other devices, though fundamentally they all work in similar ways," said Dr. Franks, who coauthored an accompanying editorial. "I think that this is an important step in how we consider the use of medical devices."

Cynthia Shechter, an occupational therapist in New York City who is a lymphedema specialist for cancer patients, said by email, "When looking for the right device, look for a pump that contains multiple chambers, operates on a short thirty-second cycle time, and applies graduated compression."


"The body operates on a pressure gradient system, so it is imperative to obtain a gradient or graduated compression pump. Pressure at the feet or hand is greater than the thigh or shoulder," she added.

"Clinicians practicing in the treatment of lymphedema need to be open-minded regarding less traditional treatment options for this insidious condition, including the use of traditional and advanced pneumatic compression devices," Shechter said.

"This study indicates that use of an APCD reduces the necessity for therapy. However, rehabilitation therapy for primary and secondary lymphedema, at least a short course of treatment, is important, especially in order to ensure patients are adequately educated in lymphedema care, management, and precautions," she said.

"There should be a follow-up study performed to discuss a patient's ability to sustain use of the APCD versus a traditional pneumatic pump, and the long-term success in both preventing infection and in reduction of therapy visits," Shechter said.

Tactile Medical partially supported this research and employs one coauthor as chief medical officer. Dr. Karaca-Mandic, Dr. Franks, and his coauthor reported consulting for the company.

Issue
The Hospitalist - 2015(11)


Endovascular thrombectomy vs tPA: better function, same mortality

Meta-analyses have inherent limitations

Endovascular mechanical thrombectomy yielded better functional outcomes and revascularization rates but similar mortality and intracranial hemorrhage rates, compared with standard medical therapy using tissue plasminogen activator (tPA), in a meta-analysis of eight high-quality randomized clinical trials comparing the two approaches for acute ischemic stroke.

The results were published online Nov. 3 in JAMA.


This meta-analysis included only large multicenter trials published from 2013 to the present. Previous trials and meta-analyses “had several well-recognized limitations” including inconsistent use of vascular imaging to confirm vessel occlusion before randomization, variable use of tPA in patients who eventually were assigned to endovascular therapy, and reliance on less effective and now outdated mechanical devices, said Dr. Jetan H. Badhiwala of the division of neurosurgery, University of Toronto, and his associates.

The eight trials included 2,423 patients (mean age, 67.4 years); 46.7% were women. A total of 1,313 patients underwent endovascular therapy, defined as the intra-arterial use of a microcatheter or other device for mechanical thrombectomy, with or without local use of a chemical thrombolytic agent. The remaining 1,110 received standard medical therapy (tPA). The allowed interval between stroke onset and endovascular treatment varied from 5 to 12 hours across these studies, and the mean time to treatment was 3.8 hours.

Patients who had endovascular thrombectomy showed significantly higher rates of functional independence at 90 days (44.6%) than did those who had tPA (31.8%), for an OR of 1.71 and a number needed to treat of 8. The rate of angiographic revascularization at 24 hours also was markedly higher for endovascular thrombectomy (75.8% vs 34.1%), for an OR of 6.49, the investigators said (JAMA 2015;314:1832-43).

However, there were no significant differences between the two study groups in rates of symptomatic intracranial hemorrhage at 90 days (5.7% vs 5.1%) or all-cause mortality at 90 days (15.8% vs 17.8%), and overall morbidity including in-hospital rates of deep venous thrombosis, MI, and pneumonia also were similar.

No sponsor or source of financial support was reported for this study. Dr. Badhiwala and his associates reported having no relevant financial disclosures.


It is important to note some limitations with this well-conducted meta-analysis. First, functional outcomes showed significant heterogeneity, which the authors attributed to variations in patient-, treatment-, and study-related factors.

Second, the confidence intervals for mortality and intracranial hemorrhage were wide, indicating that more data are necessary to fully inform these outcomes.

Third, five of the eight trials were halted early because of the evident superiority of endovascular thrombectomy, which means they fell substantially short (by up to 74%) of their planned sample sizes. This tends to cause overestimation of treatment effects. Fourth, nearly all these strokes involved carotid territory, nearly all the patients were on the young end of the age spectrum, and very few participants had comorbidities such as AF or diabetes. Such favorable characteristics do not reflect real-world experience with ischemic stroke.

Dr. Joanna M. Wardlaw and Dr. Martin S. Dennis are at the Centre for Clinical Brain Sciences at the University of Edinburgh (Scotland). They reported having no relevant financial disclosures. Dr. Wardlaw and Dr. Dennis made these remarks in an editorial accompanying Dr. Badhiwala’s meta-analysis (JAMA 2015;314:1803-4).


Endovascular mechanical thrombectomy yielded better function and revascularization rates but similar mortality and intracranial hemorrhage rates as standard medical therapy using tissue plasminogen activator (tPA) in a meta-analysis of eight high-quality randomized clinical trials comparing the two approaches for acute ischemic stroke.

The results were published online Nov. 3 in JAMA.

Copyright American Stroke Association

This meta-analysis included only large multicenter trials published from 2013 to the present. Previous trials and meta-analyses “had several well-recognized limitations” including inconsistent use of vascular imaging to confirm vessel occlusion before randomization, variable use of tPA in patients who eventually were assigned to endovascular therapy, and reliance on less effective and now outdated mechanical devices, said Dr. Jetan H. Badhiwala of the division of neurosurgery, University of Toronto, and his associates.

The eight trials included 2,423 patients (mean age, 67.4 years); 46.7% were women. A total of 1,313 patients underwent endovascular therapy, defined as the intra-arterial use of a microcatheter or other device for mechanical thrombectomy, with or without the local use of a chemical thrombolytic agent. The remaining 1,110 received standard medical therapy (tPA). The interval between stroke onset and endovascular treatment varied from 5 to 12 hours across these studies, with a mean of 3.8 hours.

Patients who had endovascular thrombectomy showed significantly higher rates of functional independence at 90 days (44.6%) than did those who had tPA (31.8%), for an OR of 1.71 and a number needed to treat of 8. The rate of angiographic revascularization at 24 hours also was markedly higher for endovascular thrombectomy (75.8% vs 34.1%), for an OR of 6.49, the investigators said (JAMA 2015;314:1832-43).

However, there were no significant differences between the two study groups in rates of symptomatic intracranial hemorrhage at 90 days (5.7% vs 5.1%) or all-cause mortality at 90 days (15.8% vs 17.8%), and overall morbidity including in-hospital rates of deep venous thrombosis, MI, and pneumonia also were similar.

No sponsor or source of financial support was reported for this study. Dr. Badhiwala and his associates reported having no relevant financial disclosures.

Endovascular mechanical thrombectomy yielded better function and revascularization rates but similar mortality and intracranial hemorrhage rates as standard medical therapy using tissue plasminogen activator (tPA) in a meta-analysis of eight high-quality randomized clinical trials comparing the two approaches for acute ischemic stroke.

The results were published online Nov. 3 in JAMA.

Copyright American Stroke Association

This meta-analysis included only large multicenter trials published from 2013 to the present. Previous trials and meta-analyses “had several well-recognized limitations” including inconsistent use of vascular imaging to confirm vessel occlusion before randomization, variable use of tPA in patients who eventually were assigned to endovascular therapy, and reliance on less effective and now outdated mechanical devices, said Dr. Jetan H. Badhiwala of the division of neurosurgery, University of Toronto, and his associates.

The eight trials included 2,423 patients (mean age, 67.4 years); 46.7% were women. A total of 1,313 patients underwent endovascular therapy, defined as the intra-arterial use of a microcatheter or other device for mechanical thrombectomy, with or without the local use of a chemical thrombolytic agent. The remaining 1,110 received standard medical therapy (tPA). The eligibility window from stroke onset to endovascular treatment varied from 5 to 12 hours across these studies; the mean time from onset to treatment was 3.8 hours.

Patients who had endovascular thrombectomy showed significantly higher rates of functional independence at 90 days (44.6%) than did those who had tPA (31.8%), for an OR of 1.71 and a number needed to treat of 8. The rate of angiographic revascularization at 24 hours also was markedly higher for endovascular thrombectomy (75.8% vs 34.1%), for an OR of 6.49, the investigators said (JAMA 2015;314:1832-43).
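The number needed to treat follows directly from the two functional-independence rates, and the odds ratio can be approximated the same way (the published OR of 1.71 is a pooled, study-level estimate, so arithmetic on the aggregate percentages only comes close). A minimal Python check:

```python
# Back-of-the-envelope check of the reported effect sizes from the
# aggregate 90-day functional-independence rates. The published OR (1.71)
# is a pooled estimate across trials, so this only approximates it.

def odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1 - p)

p_endo = 0.446  # endovascular thrombectomy
p_tpa = 0.318   # standard medical therapy (tPA)

odds_ratio = odds(p_endo) / odds(p_tpa)
arr = p_endo - p_tpa        # absolute risk reduction
nnt = 1 / arr               # number needed to treat

print(f"OR ~ {odds_ratio:.2f}")             # ~1.73 vs the published pooled 1.71
print(f"NNT = 1/{arr:.3f} ~ {round(nnt)}")  # 8, matching the article
```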

However, there were no significant differences between the two study groups in rates of symptomatic intracranial hemorrhage at 90 days (5.7% vs 5.1%) or all-cause mortality at 90 days (15.8% vs 17.8%), and overall morbidity including in-hospital rates of deep venous thrombosis, MI, and pneumonia also were similar.

No sponsor or source of financial support was reported for this study. Dr. Badhiwala and his associates reported having no relevant financial disclosures.

Display Headline
Endovascular thrombectomy vs tPA: better function, same mortality

Article Source

FROM JAMA

Vitals

Key clinical point: Endovascular mechanical thrombectomy yielded better function and angiographic revascularization but similar mortality and rate of intracranial hemorrhage compared with standard medical therapy (tPA) for acute ischemic stroke.

Major finding: Patients who had endovascular thrombectomy showed significantly higher rates of functional independence at 90 days (44.6%) than did those who had tPA (31.8%), for an OR of 1.71 and a number needed to treat of 8.

Data source: A meta-analysis of eight high-quality multicenter randomized clinical trials published during 2013-2015 involving 2,423 adults with acute ischemic stroke.

Disclosures: No sponsor or source of financial support was reported for this study. Dr. Badhiwala and his associates reported having no relevant financial disclosures.

ACOG plans consensus conference on uniform guidelines for breast cancer screening


The Susan G. Komen Foundation estimates that 84% of breast cancers are found through mammography.1 Clearly, the value of mammography is proven. But controversy and confusion abound on how much mammography, and beginning at what age, is best for women.

Currently, the United States Preventive Services Task Force (USPSTF), the American Cancer Society (ACS), and the American College of Obstetricians and Gynecologists (ACOG) all have differing recommendations about mammography and about the importance of clinical breast examinations. These inconsistencies largely are due to different interpretations of the same data, not the data itself, and tend to center on how harm is defined and measured. Importantly, these differences can wreak havoc on our patients’ confidence in our counsel and decision making, and can complicate women’s access to screening. Under the Affordable Care Act, women are guaranteed coverage of annual mammograms, but new USPSTF recommendations, due out soon, may undermine that guarantee.

On October 20, ACOG responded to the ACS’ new recommendations on breast cancer screening by emphasizing our continued advice that women should begin annual mammography screening at age 40, along with a clinical breast exam.2

Consensus conference plans

In an effort to address widespread confusion among patients, health care professionals, and payers, ACOG is convening a consensus conference in January 2016, with the goal of arriving at a consistent set of guidelines that can be agreed to, implemented clinically across the country, and hopefully adopted by insurers, as well. Major organizations and providers of women’s health care, including ACS, will gather to evaluate and interpret the data in greater detail and to consider the available data in the broader context of patient care.

Without doubt, guidelines and recommendations will need to evolve as new evidence emerges, but our hope is that scientific and medical organizations can look at the same evidence and speak with one voice on what is best for women’s health. Our patients would benefit from that alone.

ACOG’s recommendations, summarized

  • Clinical breast examination every year for women aged 19 and older.
  • Screening mammography every year for women aged 40 and older.
  • Breast self-awareness has the potential to detect palpable breast cancer and can be recommended.2

 

Share your thoughts! Send your Letter to the Editor to rbarbieri@frontlinemedcom.com. Please include your name and the city and state in which you practice.

References
  1. Susan G. Komen Web site. Accuracy of mammograms. http://ww5.komen.org/BreastCancer/AccuracyofMammograms.html. Updated June 26, 2015. Accessed October 30, 2015.
  2. ACOG Statement on Revised American Cancer Society Recommendations on Breast Cancer Screening. American College of Obstetricians and Gynecologists Web site. http://www.acog.org/About-ACOG/News-Room/Statements/2015/ACOG-Statement-on-Recommendations-on-Breast-Cancer-Screening. Published October 20, 2015. Accessed October 30, 2015.
Author and Disclosure Information


Ms. DiVenere is Officer, Government and Political Affairs, at the American Congress of Obstetricians and Gynecologists, Washington, DC.

 

The author reports no financial relationships relevant to this article.

Issue
OBG Management - 27(11)

Post-ibrutinib management in MCL unclear, speaker says


NEW YORK—Despite an “unprecedented” single-agent response rate and progression-free survival (PFS) in previously treated mantle cell lymphoma (MCL) patients, those with multiple risk factors have a dismal outcome following ibrutinib failure.

So after ibrutinib, what’s next in MCL? That was the question asked at Lymphoma & Myeloma 2015.

Peter Martin, MD, of Weill Cornell Medical College in New York, New York, discussed some possibilities.

Ibrutinib (Imbruvica) was approved by the US Food and Drug Administration for MCL based on the PCYC-1104 trial, which showed an overall response rate of 68%. In the MCL2001 trial, the overall response rate was 63%. 

The median PFS for MCL was 13 months in PCYC-1104 and 10.5 months in MCL2001. The median overall survival (OS) was close to 2 years in PCYC-1104 and 18 months in MCL2001.

“So this is where I think it starts to get interesting,” Dr Martin said. “People were able to live for several months after progressing on ibrutinib. [However,] our experience at Cornell was not necessarily consistent with that.”

Cornell investigators, along with colleagues from Ohio State University, compiled data on patients who had been treated in clinical trials at their institutions and reviewed their survival after progression on ibrutinib. These patients had a median OS of 4 months. 

In reviewing the patients’ Mantle Cell Lymphoma International Prognostic Index (MIPI) scores, Dr Martin said they were arguably a higher-risk population.

Dr Martin collected data on 114 relapsed/refractory MCL patients from centers around the world and found they had a lower response rate (50%) with ibrutinib overall and a lower duration of ibrutinib therapy (4.7 months).

The median OS after stopping ibrutinib was 2.9 months for the entire group. For patients who did not receive any subsequent therapy after failure, it was 0.8 months. Patients who received treatment after ibrutinib failure had a median OS of 5.8 months.

“And it didn’t seem to matter what we gave,” Dr Martin said. “Those treatments were pretty short-lived.” 

The median time to next treatment with the first subsequent therapy was 2.4 months. These therapies included bendamustine, cytarabine, and lenalidomide. 

“There was no statistical association between survival and choice of therapy,” Dr Martin said.

What was significant, by univariate Cox regression analysis, was the patients’ MIPI prior to ibrutinib therapy (P=0.0002) and the duration of ibrutinib treatment (P=0.0465).

“So at this point in time, I think it’s fair to say that there is insufficient data to recommend any specific treatment following ibrutinib failure,” Dr Martin said.

However, he did make a few suggestions for treating high-risk patients.

Treatment suggestions after ibrutinib failure

Dr Martin’s first suggestion is to focus on symptom management rather than active therapy for the older, frailer patients. Second, consider allogeneic stem cell transplant in any high-risk patient responding to ibrutinib.

Third, consider continuing ibrutinib therapy while starting the next therapy. And fourth, consider some form of continuous therapy that does not depend on TP53.

Dr Martin admitted that what to do following ibrutinib failure remains cloudy.

“Conducting a clinical trial will be tricky,” he said, “because the median time from ibrutinib failure to the next therapy was 9 days, and we’re targeting a very high-risk patient population.”

In addition, Ki67 is expressed in 80% of these patients, on average.

Currently, a phase 2 trial of copanlisib (NCT02455297) is the only post-ibrutinib clinical trial in MCL open. Copanlisib is a potent and reversible phosphatidylinositol-3-kinase (PI3K) inhibitor with activity against both alpha and delta isoforms of PI3K. Preliminary results of the trial demonstrated response in 5 of 7 MCL patients.

 

 

So perhaps the best approach, Dr Martin suggested, would be to improve response and prevent relapse while on ibrutinib using combination therapies.

A phase 1/1b trial of ibrutinib with bendamustine and rituximab (BR) is underway. Of 17 MCL patients treated thus far, 94% have responded, and 76% have achieved a CR. But 25% developed grade 3 rash.

Ibrutinib is also being studied in combination with rituximab in MCL. The combination has produced an overall response rate of 88%, with 40% of patients achieving a CR.

“My interpretation from all these studies is you can probably add ibrutinib to any other effective anti-MCL therapy and improve that therapy,” Dr Martin said. “But there are questions, obviously, that still arise.”

Overcoming ibrutinib resistance

Dr Martin explained that, to use combinations rationally, we need to understand mechanisms of ibrutinib resistance, “and that’s not so straightforward.”

Mutations in MCL likely have multiple mechanisms of resistance. Mutations occur predominantly in 3 groups of genes involving NF-kB, PIM/mTOR, and epigenetic modifiers. 

A number of trials are underway to hit some of these pathways, Dr Martin said.

Researchers at Cornell are studying ibrutinib plus palbociclib, an inhibitor of CDK4/CDK6 approved for advanced breast cancer, in a phase 1 trial of MCL patients.

The combination “very early on, has seen a high number of complete responses, which have been exciting,” Dr Martin said.

There are many ongoing ibrutinib trials in previously treated patients, including ones with carfilzomib, palbociclib, bortezomib, venetoclax, lenalidomide, and TGR-1202. In addition, the frontline trial of BR +/- ibrutinib is expected to have results soon.  

“[A]nd once that happens, my guess is that this frontline trial, once it’s read out, essentially, makes all these other trials irrelevant because the minute ibrutinib moves into the frontline setting, it makes it very difficult to evaluate in a subsequent setting,” Dr Martin said. “So within a couple of years, it will be standard in the frontline setting.”

Dr Martin is concerned that resources are insufficient—there are too many studies, too few patients, and too little time—to find another, potentially more effective agent or combination. 

He said there won’t be a one-size-fits-all approach to MCL either before or after ibrutinib, and collaboration among institutions, companies, and cooperative groups will be needed. 

“Management in the post-ibrutinib setting remains unclear,” he said, “and these patients should be treated in a clinical trial if possible.”

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

Mantle cell lymphoma

NEW YORK—Despite an “unprecedented” single-agent response rate and progression-free survival (PFS) in previously treated mantle cell lymphoma (MCL) patients, those with multiple risk factors have a dismal outcome following ibrutinib failure.

So after ibrutinib, what’s next in MCL? That was the question asked at Lymphoma & Myeloma 2015.

Peter Martin, MD, of Weill Cornell Medical College in New York, New York, discussed some possibilities.

Ibrutinib (Imbruvica) was approved by the US Food and Drug Administration for MCL based on the PCYC-1104 trial, which showed an overall response rate of 68%. In the MCL2001 trial, the overall response rate was 63%. 

The median PFS for MCL was 13 months in PCYC-1104 and 10.5 months in MCL2001. The median overall survival (OS) was close to 2 years in PCYC-1104 and 18 months in MCL2001.

“So this is where I think it starts to get interesting,” Dr Martin said. “People were able to live for several months after progressing on ibrutinib. [However,] our experience at Cornell was not necessarily consistent with that.”

Cornell investigators, along with colleagues from Ohio State University, compiled data on patients who had been treated in clinical trials at their institutions and reviewed their survival after progression on ibrutinib. These patients had a median OS of 4 months. 

In reviewing the patients’ Mantle Cell Lymphoma International Prognostic Index (MIPI) scores, Dr Martin said they were arguably a higher-risk population.

Dr Martin collected data on 114 relapsed/refractory MCL patients from centers around the world and found they had a lower response rate (50%) with ibrutinib overall and a lower duration of ibrutinib therapy (4.7 months).

The median OS after stopping ibrutinib was 2.9 months for the entire group. For patients who did not receive any subsequent therapy after failure, it was 0.8 months. Patients who received treatment after ibrutinib failure had a median OS of 5.8 months.

“And it didn’t seem to matter what we gave,” Dr Martin said. “Those treatments were pretty short-lived.” 

The median time to next treatment with the first subsequent therapy was 2.4 months. These therapies included bendamustine, cytarabine, and lenalidomide. 

“There was no statistical association between survival and choice of therapy,” Dr Martin said.

What was significant, by univariate Cox regression analysis, was the patients’ MIPI prior to ibrutinib therapy (P=0.0002) and the duration of ibrutinib treatment (P=0.0465).

“So at this point in time, I think it’s fair to say that there is insufficient data to recommend any specific treatment following ibrutinib failure,” Dr Martin said.

However, he did make a few suggestions for treating high-risk patients.

Treatment suggestions after ibrutinib failure

Dr Martin’s first suggestion is to focus on symptom management rather than active therapy for the older, frailer patients. Second, consider allogeneic stem cell transplant in any high-risk patient responding to ibrutinib.

Third, consider continuing ibrutinib therapy while starting the next therapy. And fourth, consider some form of continuous therapy that does not depend on TP53.

Dr Martin admitted that what to do following ibrutinib failure remains cloudy.

“Conducting a clinical trial will be tricky,” he said, “because the median time from ibrutinib failure to the next therapy was 9 days, and we’re targeting a very high-risk patient population.”

In addition, on average 80% have expression of Ki67.

Currently, a phase 2 trial of copanlisib (NCT02455297) is the only post-ibrutinib clinical trial in MCL open. Copanlisib is a potent and reversible phosphatidylinositol-3-kinase (PI3K) inhibitor with activity against both alpha and delta isoforms of PI3K. Preliminary results of the trial demonstrated response in 5 of 7 MCL patients.

 

 

So perhaps the best approach, Dr Martin suggested, would be to improve response and prevent relapse while on ibrutinib using combination therapies.

A phase 1/1b trial of ibrutinib with bendamustine and rituximab (BR) is underway. Of 17 MCL patients treated thus far, 94% have responded, and 76% have achieved a CR. But 25% developed grade 3 rash.

Ibrutinib is also being studied in combination with rituximab in MCL. The combination has produced an overall response rate of 88%, with 40% of patients achieving a CR.

“My interpretation from all these studies is you can probably add ibrutinib to any other effective anti-MCL therapy and improve that therapy,” Dr Martin said. “But there are questions, obviously, that still arise.”

Overcoming ibrutinib resistance

Dr Martin explained that, to use combinations rationally, we need to understand mechanisms of ibrutinib resistance, “and that’s not so straightforward.”

Mutations in MCL likely have multiple mechanisms of resistance. Mutations occur predominantly in 3 groups of genes involving NF-kB, PIM/mTOR, and epigenetic modifiers. 

A number of trials are underway to hit some of these pathways, Dr Martin said.

Researchers at Cornell are studying ibrutinib plus palbociclib, an inhibitor of CDK4/CDK6 approved for advanced breast cancer, in a phase 1 trial of MCL patients.

The combination “very early on, has seen a high number of complete responses, which have been exciting,” Dr Martin said.

There are many ongoing ibrutinib trials in previously treated patients, including ones with carfilzomib, palbociclib, bortezomib, venetoclax, lenalidomide, and TGR-1202. In addition, the frontline trial of BR +/- ibrutinib is expected to have results soon.  

“[A]nd once that happens, my guess is that this frontline trial, once it’s read out, essentially, makes all these other trials irrelevant because the minute ibrutinib moves into the frontline setting, it makes it very difficult to evaluate in a subsequent setting,” Dr Martin said. “So within a couple of years, it will be standard in the frontline setting.”

Dr Martin is concerned that resources are insufficient—there are too many studies, too few patients, and too little time—to find another, potentially more effective agent or combination. 

He said there won’t be a one-size-fits-all approach to MCL either before or after ibrutinib, and collaboration among institutions, companies, and cooperative groups will be needed. 

“Management in the post-ibrutinib setting remains unclear,” he said, “and these patients should be treated in a clinical trial if possible.”

Mantle cell lymphoma

NEW YORK—Despite an “unprecedented” single-agent response rate and progression-free survival (PFS) in previously treated mantle cell lymphoma (MCL) patients, those with multiple risk factors have a dismal outcome following ibrutinib failure.

So after ibrutinib, what’s next in MCL? That was the question asked at Lymphoma & Myeloma 2015.

Peter Martin, MD, of Weill Cornell Medical College in New York, New York, discussed some possibilities.

Ibrutinib (Imbruvica) was approved by the US Food and Drug Administration for MCL based on the PCYC-1104 trial, which showed an overall response rate of 68%. In the MCL2001 trial, the overall response rate was 63%. 

The median PFS for MCL was 13 months in PCYC-1104 and 10.5 months in MCL2001. The median overall survival (OS) was close to 2 years in PCYC-1104 and 18 months in MCL2001.

“So this is where I think it starts to get interesting,” Dr Martin said. “People were able to live for several months after progressing on ibrutinib. [However,] our experience at Cornell was not necessarily consistent with that.”

Cornell investigators, along with colleagues from Ohio State University, compiled data on patients who had been treated in clinical trials at their institutions and reviewed their survival after progression on ibrutinib. These patients had a median OS of 4 months. 

In reviewing the patients’ Mantle Cell Lymphoma International Prognostic Index (MIPI) scores, Dr Martin said they were arguably a higher-risk population.

Dr Martin collected data on 114 relapsed/refractory MCL patients from centers around the world and found they had a lower response rate (50%) with ibrutinib overall and a lower duration of ibrutinib therapy (4.7 months).

The median OS after stopping ibrutinib was 2.9 months for the entire group. For patients who did not receive any subsequent therapy after failure, it was 0.8 months. Patients who received treatment after ibrutinib failure had a median OS of 5.8 months.

“And it didn’t seem to matter what we gave,” Dr Martin said. “Those treatments were pretty short-lived.” 

The median time to next treatment with the first subsequent therapy was 2.4 months. These therapies included bendamustine, cytarabine, and lenalidomide. 

“There was no statistical association between survival and choice of therapy,” Dr Martin said.

What was significant, by univariate Cox regression analysis, was the patients’ MIPI prior to ibrutinib therapy (P=0.0002) and the duration of ibrutinib treatment (P=0.0465).

“So at this point in time, I think it’s fair to say that there is insufficient data to recommend any specific treatment following ibrutinib failure,” Dr Martin said.

However, he did make a few suggestions for treating high-risk patients.

Treatment suggestions after ibrutinib failure

Dr Martin’s first suggestion is to focus on symptom management rather than active therapy for the older, frailer patients. Second, consider allogeneic stem cell transplant in any high-risk patient responding to ibrutinib.

Third, consider continuing ibrutinib therapy while starting the next therapy. And fourth, consider some form of continuous therapy that does not depend on TP53.

Dr Martin admitted that what to do following ibrutinib failure remains cloudy.

“Conducting a clinical trial will be tricky,” he said, “because the median time from ibrutinib failure to the next therapy was 9 days, and we’re targeting a very high-risk patient population.”

In addition, on average 80% have expression of Ki67.

Currently, a phase 2 trial of copanlisib (NCT02455297) is the only post-ibrutinib clinical trial in MCL open. Copanlisib is a potent and reversible phosphatidylinositol-3-kinase (PI3K) inhibitor with activity against both alpha and delta isoforms of PI3K. Preliminary results of the trial demonstrated response in 5 of 7 MCL patients.

 

 

So perhaps the best approach, Dr Martin suggested, would be to improve response and prevent relapse while on ibrutinib using combination therapies.

A phase 1/1b trial of ibrutinib with bendamustine and rituximab (BR) is underway. Of 17 MCL patients treated thus far, 94% have responded, and 76% have achieved a CR. But 25% developed grade 3 rash.

Ibrutinib is also being studied in combination with rituximab in MCL. The combination has produced an overall response rate of 88%, with 40% of patients achieving a CR.

“My interpretation from all these studies is you can probably add ibrutinib to any other effective anti-MCL therapy and improve that therapy,” Dr Martin said. “But there are questions, obviously, that still arise.”

Overcoming ibrutinib resistance

Dr Martin explained that, to use combinations rationally, we need to understand mechanisms of ibrutinib resistance, “and that’s not so straightforward.”

Mutations in MCL likely have multiple mechanisms of resistance. Mutations occur predominantly in 3 groups of genes involving NF-kB, PIM/mTOR, and epigenetic modifiers. 

A number of trials are underway to hit some of these pathways, Dr Martin said.

Researchers at Cornell are studying ibrutinib plus palbociclib, an inhibitor of CDK4/CDK6 approved for advanced breast cancer, in a phase 1 trial of MCL patients.

The combination “very early on, has seen a high number of complete responses, which have been exciting,” Dr Martin said.

There are many ongoing ibrutinib trials in previously treated patients, including ones with carfilzomib, palbociclib, bortezomib, venetoclax, lenalidomide, and TGR-1202. In addition, the frontline trial of BR +/- ibrutinib is expected to have results soon.  

“[A]nd once that happens, my guess is that this frontline trial, once it’s read out, essentially, makes all these other trials irrelevant because the minute ibrutinib moves into the frontline setting, it makes it very difficult to evaluate in a subsequent setting,” Dr Martin said. “So within a couple of years, it will be standard in the frontline setting.”

Dr Martin is concerned that resources are insufficient—there are too many studies, too few patients, and too little time—to find another, potentially more effective agent or combination. 

He said there won’t be a one-size-fits-all approach to MCL either before or after ibrutinib, and collaboration among institutions, companies, and cooperative groups will be needed. 

“Management in the post-ibrutinib setting remains unclear,” he said, “and these patients should be treated in a clinical trial if possible.”

Display Headline
Post-ibrutinib management in MCL unclear, speaker says

Interventions can treat, prevent iron deficiency in blood donors

Article Type
Changed
Tue, 11/03/2015 - 06:00
Display Headline
Interventions can treat, prevent iron deficiency in blood donors

Blood donation in progress

ANAHEIM, CA—Data from the STRIDE study have revealed interventions that can mitigate iron deficiency in repeat blood donors.

The study showed that providing repeat blood donors with iron supplements significantly improved their iron status.

But informing donors about their ferritin levels and recommending they take iron pills also significantly improved their iron status.

Meanwhile, donors in the control groups became more iron-deficient over the study period.

The study also revealed no difference in ferritin or hemoglobin levels between donors who took 19 mg of iron and those who took 38 mg.

Alan E. Mast, MD, PhD, of the Blood Center of Wisconsin in Milwaukee, presented these results at the 2015 AABB Annual Meeting (abstract S34-030E).

Dr Mast said blood donation removes a lot of iron, and iron is used to make hemoglobin in new red blood cells. But the measurement of hemoglobin does not accurately reflect iron stores.

“That’s really important,” he said. “The only test we do to qualify a blood donor doesn’t tell us if they have iron deficiency or not. And because of that, many regular blood donors become iron-deficient and continue to donate blood.”

Dr Mast said the strategies that appear to mitigate iron deficiency in regular blood donors are oral iron supplements and delaying the donation interval for more than 6 months.

“[However,] the effectiveness of providing iron pills versus providing the donor with information about their iron status has not been previously examined,” he noted.

This was the goal of the STRIDE (Strategies to Reduce Iron Deficiency) study.

Study design

This blinded, randomized, placebo-controlled study enrolled 692 frequent blood donors from 3 blood centers. They were assigned to 1 of 5 arms for 2 years of follow-up.

In 3 arms, donors received pills for 60 days after each donation. They received 38 mg or 19 mg of elemental iron, or they received a placebo.

Donors in the remaining 2 arms received letters after each donation—either a letter informing them of their iron status or a “control” letter thanking them for donating blood and urging them to donate again.

Every iron status letter reported the donor’s ferritin level. If the level was >26 µg/L, the letter simply urged donors to donate again. If the ferritin level was ≤26 µg/L, the letter recommended taking a self-purchased iron supplement (17 to 38 mg) and/or delaying donation for 6 months. Donors could choose either option, both, or neither.

The researchers measured ferritin, soluble transferrin receptor, and complete blood count at each donation.

Study completion

Of the 692 subjects randomized, 393 completed a final visit. The researchers noted that a donor’s ferritin level at enrollment, race, and gender did not affect study completion, but older subjects were more likely to complete the study.

In all, 116 subjects were lost to follow-up, and the numbers were similar between the study arms. Thirty-nine subjects discontinued due to adverse events—16 in the 38 mg iron group, 12 in the 19 mg iron group, and 11 in the placebo group.

And 144 subjects discontinued for “other reasons”—9 in the iron status letter arm, 10 in the control letter arm, 30 in the 38 mg iron arm, 42 in the 19 mg iron arm, and 53 in the placebo arm.

Subjects’ reasons for discontinuation included not wanting to take a pill every day, believing they were in the placebo group and wanting to take iron, and their physicians recommending they start taking iron.

“Donors in pill arms de-enrolled more frequently than those in the letter arms, and the important thing to remember is that this is a controlled, randomized study where the donors did not know what they were taking,” Dr Mast said. “And I think that, a lot of the time, if donors had known what they were taking, they might have continued to participate in the study or continued to take the pills.”


Results

Dr Mast noted that, at the study’s end, all measures of iron deficiency were statistically indistinguishable in the 3 intervention arms, which were statistically different from the 2 control arms.

Between study enrollment and the donors’ final visit, the prevalence of ferritin <26 µg/L was unchanged in the control groups. But it had declined by more than 50% in the 3 intervention groups—19 mg iron, 38 mg iron, and iron status letter (P<0.0001 for all 3).

The prevalence of ferritin <12 µg/L was unchanged in the 2 control arms, but it had declined by more than 70% in the 3 intervention arms—19 mg iron (P<0.0001), 38 mg iron (P<0.01), and iron status letter (P<0.0001).

The researchers also calculated the odds ratios for iron deficiency over all donor visits. The odds for ferritin <26 or <12 µg/L decreased more than 80% in the 19 mg iron group (P<0.01 for both ferritin measurements) and the 38 mg iron group (P<0.01 for both).

The odds for ferritin <26 or <12 µg/L decreased about 50% in the iron status letter arm (P<0.01 for both).

And the odds for ferritin <12 µg/L increased about 50% in the control groups (P<0.01 for both the placebo and control letter groups). However, there was no significant difference for ferritin <26 µg/L in either control group.

Lastly, the researchers performed longitudinal modeling of hemoglobin. They found that hemoglobin increased >0.03 g/dL in the 19 mg and 38 mg iron arms (P<0.01 for both), decreasing the odds for low hemoglobin deferral about 50%.

Hemoglobin decreased >0.3 g/dL in the control groups (P<0.0001 for both the placebo and control letter groups), increasing the odds for low hemoglobin deferral about 70%.

“Interestingly, [being] in the iron status letter group did not affect hemoglobin that much in the longitudinal modeling of the donors,” Dr Mast noted.

In closing, he pointed out that the 19 mg and 38 mg iron pills were equally effective for mitigating iron deficiency and improving hemoglobin in these blood donors.

“From a physiology point of view, I think this is one of the most important results of this study,” Dr Mast said. “There’s absolutely no difference. There was no trend for 38 mg to be better than 19 in any analysis that we did.”

“There’s lots of reasons that could be happening, but I think it’s scientifically interesting and operationally interesting. And it’s important because we can tell donors—ask them to take a multivitamin with 19 mg of iron, and that will be sufficient to treat iron deficiency.”


Protocol could improve massive blood transfusion

Article Type
Changed
Tue, 11/03/2015 - 06:00
Display Headline
Protocol could improve massive blood transfusion

Fresh frozen plasma

An “early and aggressive” approach to massive blood transfusion can save lives in military combat zones and may provide the same benefit in civilian trauma care as well, according to an article published in the AANA Journal.

The article describes 2 patients who required massive transfusions due to multiple gunshot wounds sustained while in combat zones.

One patient received an inadequate amount of blood products and ultimately died.

But the other patient benefitted from a protocol change to ensure an adequate amount of blood products was delivered quickly.

David Gaskin, CRNA, of Huntsville Memorial Hospital in Texas, and his colleagues described these cases in the journal.

The authors noted that, in combat-zone care, packed red blood cells (PRBCs) and fresh frozen plasma (FFP) are transfused in a 1:1 ratio. However, the packaging and thawing techniques required for plasma can delay the delivery of blood products and prevent a patient from receiving enough blood.

Another issue in a military environment is the challenge of effectively communicating with live donors on site, which can cause delays in obtaining fresh blood supplies. Both of these issues can have life-threatening consequences for patients.

This is what happened with the first patient described in the article. The 38-year-old man sustained multiple gunshot wounds to the left side of the chest, left side of the back, and flank.

The surgical team was unable to maintain a high ratio of PRBCs to plasma and to infuse an adequate quantity of fresh whole blood (FWB) into this patient. He received 26 units of PRBCs, 5 units of FFP, 3 units of FWB, and 1 unit of cryoprecipitate.

The patient experienced trauma-induced coagulopathy, acidosis, and hypothermia. He died within 2 hours of presentation.

Because of this death, the team identified and implemented a protocol to keep 4 FFP units thawed and ready for immediate use at all times. They also identified and prescreened additional blood donors and implemented a phone roster and base-wide overhead system to enable rapid notification of these donors.

The second patient described in the article benefitted from these changes. This 23-year-old male sustained a gunshot wound to the left lower aspect of the abdomen and multiple gunshot wounds to bilateral lower extremities.

The “early and aggressive” use of FWB and plasma provided the necessary endogenous clotting factors and platelets to promote hemostasis in this patient. He received 18 units of PRBCs, 18 units of FFP, 2 units of cryoprecipitate, and 24 units of FWB.

Gaskin and his colleagues said these results suggest that incorporating a similar resuscitation strategy into civilian practice may improve outcomes, though the approach warrants continued study.


Drug gets orphan designation for BPDCN

Article Type
Changed
Tue, 11/03/2015 - 06:00
Display Headline
Drug gets orphan designation for BPDCN

Micrograph of dendritic cells

The European Medicines Agency (EMA) has granted orphan drug designation to SL-401 for the treatment of blastic plasmacytoid dendritic cell neoplasm (BPDCN).

SL-401 is a targeted therapy directed to the interleukin-3 receptor (IL-3R), which is present on cancer stem cells and tumor bulk in a range of hematologic malignancies.

The drug is composed of human IL-3 coupled to a truncated diphtheria toxin payload that inhibits protein synthesis.

SL-401 already has orphan designation from the EMA to treat acute myeloid leukemia (AML) and from the US Food and Drug Administration (FDA) for the treatment of AML and BPDCN. The drug is under development by Stemline Therapeutics, Inc.

SL-401 research

At ASH 2012 (abstract 3625), researchers reported results with SL-401 in a study of patients with AML, BPDCN, and myelodysplastic syndromes (MDS).

At that time, the study had enrolled 80 patients, including 59 with relapsed or refractory AML, 11 with de novo AML unfit for chemotherapy, 7 with high-risk MDS, and 3 with relapsed/refractory BPDCN.

Patients received a single cycle of SL-401 as a 15-minute intravenous infusion in 1 of 2 dosing regimens to determine the maximum tolerated dose (MTD) and assess antitumor activity.

With regimen A, 45 patients received doses ranging from 4 μg/kg to 12.5 μg/kg every other day for up to 6 doses. With regimen B, 35 patients received doses ranging from 7.1 μg/kg to 22.1 μg/kg daily for up to 5 doses.

Of the 59 patients with relapsed/refractory AML, 2 achieved complete responses (CRs), 5 had partial responses (PRs), and 8 had minor responses (MRs). One CR lasted more than 8 months, and the other lasted more than 25 months.

Of the 11 patients with AML who were not candidates for chemotherapy, 2 had PRs and 1 had an MR. Among the 7 patients with high-risk MDS, there was 1 PR and 1 MR.

Among the 3 patients with BPDCN, there were 2 CRs. One CR lasted more than 2 months, and the other lasted more than 4 months.

The MTD was not achieved with regimen A, but the MTD for regimen B was 16.6 μg/kg/day. The dose-limiting toxicities were a gastrointestinal bleed (n=1), transaminase and creatinine kinase elevations (n=1), and capillary leak syndrome (n=3). There was no evidence of treatment-related bone marrow suppression.

Last year, researchers reported additional results in BPDCN patients (Frankel et al, Blood 2014).

Eleven BPDCN patients received a single course of SL-401 (at 12.5 μg/kg intravenously over 15 minutes) daily for up to 5 doses. Three patients who had initial responses to SL-401 received a second course while in relapse.

Seven of 9 evaluable patients (78%) responded to a single course of SL-401. There were 5 CRs and 2 PRs. The median duration of response was 5 months (range, 1 to 20+ months).

The most common adverse events were transient and included fever, chills, hypotension, edema, hypoalbuminemia, thrombocytopenia, and transaminasemia.

Three multicenter clinical trials of SL-401 are currently open in the following indications:

Additional SL-401 studies are planned for patients with myeloma, lymphomas, and other leukemias.

About orphan designation

In the European Union, orphan designation is granted to therapies intended to treat a life-threatening or chronically debilitating condition that affects no more than 5 in 10,000 persons and where no satisfactory treatment is available.
Companies that obtain orphan designation for a drug in the European Union benefit from a number of incentives, including protocol assistance, a type of scientific advice specific for designated orphan medicines, and 10 years of market exclusivity once the medicine is on the market. Fee reductions are also available, depending on the status of the sponsor and the type of service required.

The FDA grants orphan designation to drugs that are intended to treat diseases or conditions affecting fewer than 200,000 patients in the US.

In the US, orphan designation provides the sponsor of a drug with various development incentives, including opportunities to apply for research-related tax credits and grant funding, assistance in designing clinical trials, and 7 years of US market exclusivity if the drug is approved.


Metacognition to Reduce Medical Error

Mon, 01/02/2017 - 19:34
Incorporating metacognition into morbidity and mortality rounds: The next frontier in quality improvement

A 71‐year‐old man with widely metastatic non‐small cell lung cancer presented to the emergency department of a teaching hospital at 7 pm with a chief complaint of severe chest pain relieved by sitting upright and leaning forward. A senior cardiologist with expertise in echocardiography assessed the patient and performed a bedside echocardiogram. He found a large pericardial effusion but concluded there was no cardiac tamponade. Given the patient's other medical problems, he referred him to internal medicine for admission to their service. The attending internist agreed to admit the patient, suggesting close cardiac monitoring and reevaluation with a formal echocardiogram in the morning. At 9 am, the team and the cardiologist were urgently summoned to the echo lab by the technician, who now diagnosed tamponade. After looking at the images, the cardiologist disagreed with the technician's interpretation and declared that there was no sign of tamponade.

After leaving the echo lab, the attending internist led a team discussion on the phenomenon of and reasons for interobserver variation. The residents initially focused on the difference in expertise between the cardiologist and technician. The attending, who felt this was unlikely because the technician was very experienced, introduced the possibility of a cognitive misstep. Having staked out an opinion on the lack of tamponade the night before and acting on that interpretation by declining admission to his service, the cardiologist was susceptible to anchoring bias, where adjustments to a preliminary diagnosis are insufficient because of the influence of the initial interpretation.[1] The following day, the cardiologist performed a pericardiocentesis and reported that the fluid came out under pressure. In the face of this definitive information, he concluded that his prior assessment was incorrect and that tamponade had been present from the start.

The origins of medical error reduction lie in the practice of using autopsies to determine the cause of death spearheaded by Karl Rokitansky at the Vienna Medical School in the 1800s.[2] Ernest Amory Codman expanded the effort through the linkage of treatment decisions to subsequent outcomes by following patients after hospital discharge.[3] The advent of modern imaging techniques coupled with interventional methods of obtaining pathological specimens has dramatically improved diagnostic accuracy over the past 40 years. As a result, the practice of using autopsies to improve clinical acumen and reduce diagnostic error has virtually disappeared, while the focus on medical error has actually increased. The forum for reducing error shifted to morbidity and mortality rounds (MMRs), which have been relabeled quality‐improvement rounds in many hospitals.

In these regularly scheduled meetings, interprofessional clinicians discuss errors and adverse outcomes. Because deaths are rarely unexpected and often occur outside of the acute care setting, the focus is usually on errors in the execution of complex clinical plans that combine the wide array of modern laboratory, imaging, pharmaceutical, interventional, surgical, and pathological tools available to clinicians today. In the era of patient safety and quality improvement, errors are mostly blamed on systems‐based issues that lead to hospital complications, despite evidence that cognitive factors play a large role.[4] Systems‐based analysis was popularized by the landmark report of the Institute of Medicine.[5] In our local institutions (the University of Toronto teaching hospitals), improving diagnostic accuracy is almost never on the agenda. We suspect the same is true elsewhere. Common themes include mistakes in medication administration and dosing, communication, and physician handover. The Swiss cheese model[6] is often invoked to diffuse blame across a number of individuals, processes, and even machines. However, as Wachter and Pronovost point out, reengineering of systems has limited capacity for solving all safety and quality improvement issues when people are involved; human error can still sabotage the effort.[7]

Discussions centered on a physician's raw thinking ability have become a third rail, even though clinical reasoning lies at the core of patient safety. Human error is rarely discussed, in part because it is mistakenly believed to be uncommon and felt to be the result of deficits in knowledge or incompetence. Furthermore, the fear of assigning blame to individuals in front of their peers may be counterproductive, discouraging identification of future errors. However, the fields of cognitive psychology and medical decision making have clearly established that cognitive errors occur predictably and often, especially at times of high cognitive load (eg, when many high‐stakes, complex decisions need to be made in a short period of time). Errors do not usually result from a lack of knowledge (although they can), but rather because people rely on instincts that include common biases called heuristics.[8] Most of the time, heuristics are a helpful and necessary evolutionary adaptation of the human thought process, but by their inherent nature, they can lead to predictable and repeatable errors. Because the effects of cognitive biases are inherent to all decision makers, using this framework for discussing individual error may be a method of decreasing the second victim effect[9] and avoiding demoralizing the individual.

MMRs thus represent fertile ground for introducing cognitive psychology into medical education and quality improvement. The existing format is useful for teaching cognitive psychology because it is an open forum where discussions center on errors of omission and commission, many of which are a result of both systems issues and decision‐making heuristics. Several studies have attempted to describe methods for improving MMRs[10, 11, 12]; however, none have incorporated concepts from cognitive psychology. This type of analysis has penetrated several cases in the WebM&M series created by the Agency for Healthcare Research and Quality, which can be used as a model for hospital‐based MMRs.[13] For the vignette described above, an MMR that considers systems‐based approaches might discuss how a busy emergency room, limitations of capacity on the cardiology service, and closure of the echo lab at night played a role in this story. However, although it is difficult to replay another person's mental processing, ignoring the possibility that the cardiologist in this case may have fallen prey to a common cognitive error would be a missed opportunity to learn how frequently heuristics can be faulty. A cognitive approach applied to this example would explore explanations such as anchoring, ego, and hassle biases. Front‐line clinicians in busy hospital settings will recognize the interaction between workload pressures and cognitive mistakes common to examples like this one.

Cognitive heuristics should first be introduced to MMRs by experienced clinicians, well respected for their clinical acumen, by telling specific personal stories where heuristics led to errors in their practices and why the shortcut in thinking occurred. Thereafter, the traditional MMR format can be used: presenting a case, describing how an experienced clinician might manage the case, and then asking the audience members for comment. Incorporating discussions of cognitive missteps, in medical and nonmedical contexts, would help normalize the understanding that even the most experienced and smartest people fall prey to them. The tone must be positive.

Attendees could be encouraged to review their own thought processes through diagnostic verification for cases where their initial diagnosis was incorrect. This would involve assessment for adequacy, ensuring that potential diagnoses account for all abnormal and normal clinical findings, and coherency, ensuring that the diagnoses are pathophysiologically consistent with all clinical findings. Another strategy may be to illustrate cognitive forcing strategies for particular biases.[14] For example, in the case of anchoring bias, trainees may be encouraged to replay the clinical scenario with a different priming stem and evaluate if they would come to the same clinical conclusion. A challenge for all MMRs is how best to select cases; given the difficulties in replaying one's cognitive processes, this problem may be magnified. Potential selection methods could utilize anonymous reporting systems or patient complaints; however, the optimal strategy is yet to be determined.

Graber et al. have summarized the limited research on attempts to improve cognitive processes through educational interventions and illustrate its mixed results.[15] The most positive study was a randomized controlled trial using combined pattern recognition and deliberative reasoning to improve diagnostic accuracy in the face of biasing information.[16] Despite positive results, others have suggested that cognitive biases are impossible to teach due to their subconscious nature.[17] They argue that training physicians to avoid heuristics will simply lead to overinvestigation. These polarizing views highlight the need for research to evaluate interventions like the cognitive autopsy suggested here.

Trainees recognize early that their knowledge base is limited. However, it takes more internal analysis to realize that their brains' decision‐making capacity is similarly limited. Utilizing these regularly scheduled clinical meetings in the manner described above may build improved metacognition, cognition about cognition or more colloquially thinking about thinking. Clinicians understand that bias can easily occur in research and accept mechanisms to protect studies from those potential threats to validity such as double blinding of outcome assessments. Supplementing MMRs with cognitive discussions represents an analogous intent to reduce biases, introducing metacognition as the next frontier in advancing clinical care. Errors are inevitable,[18] and recognition of our cognitive blind spots will provide physicians with an improved framework for analysis of these errors. Building metacognition is a difficult task; however, this is not a reason to stop trying. In the spirit of innovation begun by pioneers like Rokitansky and Codman, and renewed focus on diagnostic errors generated by the recent report of the National Academy of Sciences[19], it is time for the cognitive autopsy to be built into the quality improvement and patient safety map.

Acknowledgements

The authors thank Donald A. Redelmeier, MD, MSc, University of Toronto, and Gurpreet Dhaliwal, MD, University of California, San Francisco, for providing comments on an earlier draft of this article. Neither was compensated for their contributions.

Disclosure: Nothing to report.

References
  1. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185(4157):1124-1131.
  2. Nuland SB. Doctors: The Biography of Medicine. New York, NY: Vintage Books; 1995.
  3. Codman EA. The classic: a study in hospital efficiency: as demonstrated by the case report of first five years of private hospital. Clin Orthop Relat Res. 2013;471(6):1778-1783.
  4. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165(13):1493-1499.
  5. Kohn LT, Corrigan JM, Donaldson MS. To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press; 1999.
  6. Reason J. The contribution of latent human failures to the breakdown of complex systems. Philos Trans R Soc Lond B Biol Sci. 1990;327(1241):475-484.
  7. Wachter RM, Pronovost PJ. Balancing “no blame” with accountability in patient safety. N Engl J Med. 2009;361(14):1401-1406.
  8. Croskerry P. From mindless to mindful practice—cognitive bias and clinical decision making. N Engl J Med. 2013;368(26):2445-2448.
  9. Wu AW. Medical error: the second victim. The doctor who makes the mistake needs help too. BMJ. 2000;320(7237):726-727.
  10. Ksouri H, Balanant PY, Tadie JM, et al. Impact of morbidity and mortality conferences on analysis of mortality and critical events in intensive care practice. Am J Crit Care. 2010;19(2):135-145.
  11. Szekendi MK, Barnard C, Creamer J, Noskin GA. Using patient safety morbidity and mortality conferences to promote transparency and a culture of safety. Jt Comm J Qual Patient Saf. 2010;36(1):3-9.
  12. Calder LA, Kwok ESH, Cwinn AA, et al. Enhancing the quality of morbidity and mortality rounds: the Ottawa M&M model. Acad Emerg Med. 2014;21(3):314-321.
  13. Agency for Healthcare Research and Quality. AHRQ WebM&M: morbidity and mortality rounds on the web.
  14. Croskerry P. Cognitive forcing strategies in clinical decision making. Ann Emerg Med. 2003;41(1):110-120.
  15. Graber ML, Kissam S, Payne VL, et al. Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual Saf. 2012;21(7):535-557.
  16. Eva KW, Hatala RM, Leblanc VR, Brooks LR. Teaching from the clinical reasoning literature: combined reasoning strategies help novice diagnosticians overcome misleading information. Med Educ. 2007;41(12):1152-1158.
  17. Norman GR, Eva KW. Diagnostic error and clinical reasoning. Med Educ. 2010;44(1):94-100.
  18. Cain DM, Detsky AS. Everyone's a little bit biased (even physicians). JAMA. 2008;299(24):2893-2895.
  19. Balogh EP, Miller BT, Ball JR. Improving Diagnosis in Health Care. Washington, DC: National Academies Press; 2015.
Journal of Hospital Medicine - 11(2):120-122


Trainees recognize early that their knowledge base is limited. However, it takes more internal analysis to realize that their brains' decision‐making capacity is similarly limited. Utilizing these regularly scheduled clinical meetings in the manner described above may build improved metacognition, cognition about cognition or more colloquially thinking about thinking. Clinicians understand that bias can easily occur in research and accept mechanisms to protect studies from those potential threats to validity such as double blinding of outcome assessments. Supplementing MMRs with cognitive discussions represents an analogous intent to reduce biases, introducing metacognition as the next frontier in advancing clinical care. Errors are inevitable,[18] and recognition of our cognitive blind spots will provide physicians with an improved framework for analysis of these errors. Building metacognition is a difficult task; however, this is not a reason to stop trying. In the spirit of innovation begun by pioneers like Rokitansky and Codman, and renewed focus on diagnostic errors generated by the recent report of the National Academy of Sciences[19], it is time for the cognitive autopsy to be built into the quality improvement and patient safety map.

Acknowledgements

The authors thank Donald A. Redelemeier, MD, MSc, University of Toronto, and Gurpreet Dhaliwal, MD, University of California, San Francisco, for providing comments on an earlier draft of this article. Neither was compensated for their contributions.

Disclosure: Nothing to report.

A 71‐year‐old man with widely metastatic non–small cell lung cancer presented to the emergency department of a teaching hospital at 7 pm with a chief complaint of severe chest pain relieved by sitting upright and leaning forward. A senior cardiologist, with expertise in echocardiography, assessed the patient and performed a bedside echocardiogram. He found a large pericardial effusion but concluded there was no cardiac tamponade. Given the patient's other medical problems, he referred him to internal medicine for admission to their service. The attending internist agreed to admit the patient, suggesting close cardiac monitoring and reevaluation with a formal echocardiogram in the morning. At 9 am, the team and the cardiologist were urgently summoned to the echo lab by the technician, who now diagnosed tamponade. After looking at the images, the cardiologist disagreed with the technician's interpretation and declared that there was no sign of tamponade.

After leaving the echo lab, the attending internist led a team discussion on the phenomenon of and reasons for interobserver variation. The residents initially focused on the difference in expertise between the cardiologist and technician. The attending, who felt this was unlikely because the technician was very experienced, introduced the possibility of a cognitive misstep. Having staked out an opinion on the lack of tamponade the night before and acting on that interpretation by declining admission to his service, the cardiologist was susceptible to anchoring bias, where adjustments to a preliminary diagnosis are insufficient because of the influence of the initial interpretation.[1] The following day, the cardiologist performed a pericardiocentesis and reported that the fluid came out under pressure. In the face of this definitive information, he concluded that his prior assessment was incorrect and that tamponade had been present from the start.
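The under-adjustment at the heart of anchoring bias can be made concrete with a toy calculation. In this sketch, every number is assumed for illustration and is not taken from the case: a clinician who updates fully on a strongly positive new finding arrives near a 50% probability of tamponade, while an anchored clinician who moves only a fraction of the way from the initial estimate toward that posterior remains below 20%.

```python
# Illustrative sketch (all probabilities and likelihood ratios are assumed):
# anchoring modeled as insufficient adjustment toward the Bayesian posterior.

def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability after a positive finding with the given likelihood ratio."""
    odds = prior / (1 - prior)
    post_odds = odds * likelihood_ratio
    return post_odds / (1 + post_odds)

def anchored_update(prior: float, likelihood_ratio: float, adjustment: float) -> float:
    """Move only a fraction of the way from the anchor (the prior) to the posterior."""
    target = bayes_update(prior, likelihood_ratio)
    return prior + adjustment * (target - prior)

prior = 0.05   # anchor: the initial "no tamponade" read (hypothetical)
lr = 20.0      # strength of the technician's positive finding (hypothetical)

full = bayes_update(prior, lr)             # ~0.51: full Bayesian revision
partial = anchored_update(prior, lr, 0.3)  # ~0.19: adjusts only 30% of the way

print(f"Bayesian posterior: {full:.2f}")
print(f"Anchored estimate:  {partial:.2f}")
```

The gap between the two numbers is the "insufficient adjustment" that Tversky and Kahneman described: the anchored estimate is pulled toward the evidence but never far enough to change the clinical conclusion.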

The origins of medical error reduction lie in the practice, spearheaded by Karl Rokitansky at the Vienna Medical School in the 1800s, of using autopsies to determine the cause of death.[2] Ernest Amory Codman expanded the effort by linking treatment decisions to subsequent outcomes through follow-up of patients after hospital discharge.[3] The advent of modern imaging techniques coupled with interventional methods of obtaining pathological specimens has dramatically improved diagnostic accuracy over the past 40 years. As a result, the practice of using autopsies to improve clinical acumen and reduce diagnostic error has virtually disappeared, while the focus on medical error has actually increased. The forum for reducing error shifted to morbidity and mortality rounds (MMRs), which have been relabeled quality‐improvement rounds in many hospitals.

In these regularly scheduled meetings, interprofessional clinicians discuss errors and adverse outcomes. Because deaths are rarely unexpected and often occur outside of the acute care setting, the focus is usually on errors in the execution of complex clinical plans that combine the wide array of modern laboratory, imaging, pharmaceutical, interventional, surgical, and pathological tools available to clinicians today. In the era of patient safety and quality improvement, errors are mostly blamed on systems‐based issues that lead to hospital complications, despite evidence that cognitive factors play a large role.[4] Systems‐based analysis was popularized by the landmark report of the Institute of Medicine.[5] In our local institutions (the University of Toronto teaching hospitals), improving diagnostic accuracy is almost never on the agenda. We suspect the same is true elsewhere. Common themes include mistakes in medication administration and dosing, communication, and physician handover. The Swiss cheese model[6] is often invoked to diffuse blame across a number of individuals, processes, and even machines. However, as Wachter and Pronovost point out, reengineering of systems has limited capacity for solving all safety and quality improvement issues when people are involved; human error can still sabotage the effort.[7]

Discussions centered on a physician's raw thinking ability have become a third rail, even though clinical reasoning lies at the core of patient safety. Human error is rarely discussed, in part because it is mistakenly believed to be uncommon and is felt to result from deficits in knowledge or from incompetence. Furthermore, the fear of assigning blame to individuals in front of their peers may be counterproductive, discouraging identification of future errors. However, the fields of cognitive psychology and medical decision making have clearly established that cognitive errors occur predictably and often, especially at times of high cognitive load (eg, when many high-stakes, complex decisions must be made in a short period of time). Errors do not usually result from a lack of knowledge (although they can), but rather because people rely on intuitive shortcuts, called heuristics, that carry predictable biases.[8] Most of the time, heuristics are a helpful and necessary evolutionary adaptation of the human thought process, but by their inherent nature they can lead to predictable and repeatable errors. Because the effects of cognitive biases are inherent to all decision makers, using this framework to discuss individual error may decrease the second victim effect[9] and avoid demoralizing the individual.

MMRs thus represent fertile ground for introducing cognitive psychology into medical education and quality improvement. The existing format is well suited to teaching cognitive psychology because it is an open forum in which discussions center on errors of omission and commission, many of which result from both systems issues and decision-making heuristics. Several studies have described methods for improving MMRs[10, 11, 12]; however, none have incorporated concepts from cognitive psychology. This type of analysis has penetrated several cases in the WebM&M series created by the Agency for Healthcare Research and Quality, which can be used as a model for hospital‐based MMRs.[13] For the vignette described above, an MMR that considers systems‐based approaches might discuss how a busy emergency room, limited capacity on the cardiology service, and closure of the echo lab at night played a role in this story. However, although it is difficult to replay another person's mental processing, ignoring the possibility that the cardiologist in this case fell prey to a common cognitive error would be a missed opportunity to learn how frequently heuristics can be faulty. A cognitive approach applied to this example would explore explanations such as anchoring, ego, and hassle biases. Front‐line clinicians in busy hospital settings will recognize the interaction between workload pressures and cognitive mistakes common to examples like this one.

Cognitive heuristics should first be introduced to MMRs by experienced clinicians, well respected for their clinical acumen, by telling specific personal stories where heuristics led to errors in their practices and why the shortcut in thinking occurred. Thereafter, the traditional MMR format can be used: presenting a case, describing how an experienced clinician might manage the case, and then asking the audience members for comment. Incorporating discussions of cognitive missteps, in medical and nonmedical contexts, would help normalize the understanding that even the most experienced and smartest people fall prey to them. The tone must be positive.

Attendees could be encouraged to review their own thought processes through diagnostic verification for cases where their initial diagnosis was incorrect. This would involve assessment for adequacy (ensuring that potential diagnoses account for all abnormal and normal clinical findings) and coherency (ensuring that the diagnoses are pathophysiologically consistent with all clinical findings). Another strategy may be to illustrate cognitive forcing strategies for particular biases.[14] For example, in the case of anchoring bias, trainees may be encouraged to replay the clinical scenario with a different priming stem and evaluate whether they would come to the same clinical conclusion. A challenge for all MMRs is how best to select cases; given the difficulty of replaying one's cognitive processes, this problem may be magnified. Potential selection methods could draw on anonymous reporting systems or patient complaints; however, the optimal strategy has yet to be determined.
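The adequacy check described above can be sketched as a simple structured exercise. In this hypothetical sketch (the finding names and the diagnosis-to-finding mappings are invented for illustration, not drawn from any validated instrument), a candidate diagnosis is flagged whenever it leaves an abnormal clinical finding unaccounted for:

```python
# Hypothetical sketch of the "adequacy" step of diagnostic verification:
# flag a candidate diagnosis that fails to explain every abnormal finding.

from dataclasses import dataclass, field

@dataclass
class CandidateDiagnosis:
    name: str
    explains: set = field(default_factory=set)  # findings this diagnosis accounts for

def adequacy_check(diagnosis: CandidateDiagnosis, abnormal_findings: set) -> set:
    """Return the abnormal findings the candidate diagnosis leaves unexplained."""
    return set(abnormal_findings) - diagnosis.explains

# Invented findings loosely echoing the vignette, for illustration only.
findings = {"pulsus paradoxus", "distended neck veins", "large pericardial effusion"}
dx = CandidateDiagnosis(
    name="pericardial effusion without tamponade",
    explains={"large pericardial effusion"},
)

unexplained = adequacy_check(dx, findings)
if unexplained:
    print(f"Revisit diagnosis; unexplained findings: {sorted(unexplained)}")
```

An empty result means the diagnosis passes the adequacy check; a nonempty result is a prompt to replay the case rather than a verdict, which mirrors how verification is meant to work in an MMR discussion.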

Graber et al. have summarized the limited research on attempts to improve cognitive processes through educational interventions and illustrate the mixed results.[15] The most positive study was a randomized controlled trial using combined pattern recognition and deliberative reasoning to improve diagnostic accuracy in the face of biasing information.[16] Despite such positive results, others have suggested that cognitive biases are impossible to teach because of their subconscious nature.[17] They argue that training physicians to avoid heuristics will simply lead to overinvestigation. These polarizing views highlight the need for research to evaluate interventions like the cognitive autopsy suggested here.

Trainees recognize early that their knowledge base is limited. However, it takes more internal analysis to realize that their brains' decision‐making capacity is similarly limited. Utilizing these regularly scheduled clinical meetings in the manner described above may build improved metacognition: cognition about cognition, or, more colloquially, thinking about thinking. Clinicians understand that bias can easily occur in research, and they accept mechanisms, such as double blinding of outcome assessments, that protect studies from those potential threats to validity. Supplementing MMRs with cognitive discussions represents an analogous intent to reduce biases, introducing metacognition as the next frontier in advancing clinical care. Errors are inevitable,[18] and recognition of our cognitive blind spots will provide physicians with an improved framework for analyzing these errors. Building metacognition is a difficult task; however, this is not a reason to stop trying. In the spirit of innovation begun by pioneers like Rokitansky and Codman, and the renewed focus on diagnostic errors generated by the recent report of the National Academy of Sciences,[19] it is time for the cognitive autopsy to be built into the quality improvement and patient safety map.

Acknowledgements

The authors thank Donald A. Redelmeier, MD, MSc, University of Toronto, and Gurpreet Dhaliwal, MD, University of California, San Francisco, for providing comments on an earlier draft of this article. Neither was compensated for these contributions.

Disclosure: Nothing to report.

References
  1. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185(4157):1124–1131.
  2. Nuland SB. Doctors: The Biography of Medicine. New York, NY: Vintage Books; 1995.
  3. Codman EA. The classic: a study in hospital efficiency: as demonstrated by the case report of the first five years of a private hospital. Clin Orthop Relat Res. 2013;471(6):1778–1783.
  4. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165(13):1493–1499.
  5. Kohn LT, Corrigan JM, Donaldson MS. To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press; 1999.
  6. Reason J. The contribution of latent human failures to the breakdown of complex systems. Philos Trans R Soc Lond B Biol Sci. 1990;327(1241):475–484.
  7. Wachter RM, Pronovost PJ. Balancing "no blame" with accountability in patient safety. N Engl J Med. 2009;361(14):1401–1406.
  8. Croskerry P. From mindless to mindful practice—cognitive bias and clinical decision making. N Engl J Med. 2013;368(26):2445–2448.
  9. Wu AW. Medical error: the second victim. The doctor who makes the mistake needs help too. BMJ. 2000;320(7237):726–727.
  10. Ksouri H, Balanant PY, Tadie JM, et al. Impact of morbidity and mortality conferences on analysis of mortality and critical events in intensive care practice. Am J Crit Care. 2010;19(2):135–145.
  11. Szekendi MK, Barnard C, Creamer J, Noskin GA. Using patient safety morbidity and mortality conferences to promote transparency and a culture of safety. Jt Comm J Qual Patient Saf. 2010;36(1):3–9.
  12. Calder LA, Kwok ESH, Adam Cwinn A, et al. Enhancing the quality of morbidity and mortality rounds: the Ottawa M&M model. Acad Emerg Med. 2014;21(3):314–321.
  13. Agency for Healthcare Research and Quality. AHRQ WebM&M: Morbidity and Mortality Rounds on the Web.
  14. Croskerry P. Cognitive forcing strategies in clinical decisionmaking. Ann Emerg Med. 2003;41(1):110–120.
  15. Graber ML, Kissam S, Payne VL, et al. Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual Saf. 2012;21(7):535–557.
  16. Eva KW, Hatala RM, Leblanc VR, Brooks LR. Teaching from the clinical reasoning literature: combined reasoning strategies help novice diagnosticians overcome misleading information. Med Educ. 2007;41(12):1152–1158.
  17. Norman GR, Eva KW. Diagnostic error and clinical reasoning. Med Educ. 2010;44(1):94–100.
  18. Cain DM, Detsky AS. Everyone's a little bit biased (even physicians). JAMA. 2008;299(24):2893–2895.
  19. Balogh EP, Miller BT, Ball JR. Improving Diagnosis in Health Care. Washington, DC: National Academies Press; 2015.
Issue
Journal of Hospital Medicine - 11(2)
Page Number
120-122
Display Headline
Incorporating metacognition into morbidity and mortality rounds: The next frontier in quality improvement
Article Source
© 2015 Society of Hospital Medicine
Correspondence Location
Address for correspondence: Dr. Allan Detsky, MD, Mount Sinai Hospital, Room 429, 600 University Ave., Toronto, Ontario M5G 1X5, Canada; Telephone: 416‐586‐8507; Fax: 416‐586‐8350; E‐mail: adetsky@mtsinai.on.ca