Hepatitis C May Increase Risk of Parkinson’s Disease
Hepatitis C infection may increase the risk of Parkinson’s disease, according to a nationwide population-based study published online ahead of print December 23, 2015, in Neurology. Researchers analyzed 10 years of data from the Taiwan National Health Insurance Research Database, which included 49,967 patients with viral hepatitis—35,619 with hepatitis B infection, 10,286 with hepatitis C, and 4,062 with both—and 199,868 noninfected controls.
Individuals with hepatitis C infection had a 29% greater incidence of Parkinson’s disease after adjustment for confounders such as sex, age, heart disease, stroke, and head injury. The researchers found no significant associations between hepatitis B or coinfection and Parkinson’s disease risk.
Age was the most common risk factor for Parkinson’s disease across all cohorts, and in the control group, comorbidities such as hyperlipidemia, hypertension, ischemic heart disease, diabetes, and head injury all were associated with a significant increase in the risk of Parkinson’s disease. Among individuals with hepatitis C infection, however, only ischemic heart disease and head injury remained significantly associated with Parkinson’s disease risk.
The possibility of an association between hepatitis C infection and Parkinson’s disease has emerged recently, with evidence showing that the virus is neurotropic and can replicate in the CNS, reported Hsin-Hsi Tsai, MD, a neurologist at the National Taiwan University Hospital in Taipei, and coauthors.
“Parkinsonism is rarely a described feature in patients with hepatitis C virus. However, a recent study has discovered that hepatitis C virus can induce dopaminergic neuron death, suggesting a possible association between hepatitis C virus infection and” Parkinson’s disease, said the authors.
The study also showed that the association between hepatitis C infection and Parkinson’s disease was stronger in individuals younger than 65, who had a 61% greater risk of developing the neurodegenerative disease.
“Some of the risk factors for hepatitis C virus infection, such as illicit drug use and associated behaviors, may be confounding factors in this age group,” said the authors. They pointed out, however, that in Taiwan, use of IV drugs was not known to be a risk factor for infection. Commenting on a possible mechanism for the association between hepatitis C infection and Parkinson’s disease, Dr. Tsai and associates suggested that the hepatitis C virus could be a possible viral candidate for triggering the neuroinflammation that is a characteristic feature of Parkinson’s disease.
“An earlier imaging study that involved using magnetic resonance spectroscopy to investigate the cerebral effect of hepatitis C virus showed that chronic hepatitis C virus infection was associated with elevated choline/creatinine ratios, a biomarker indicating inflammatory and infective conditions, in the basal ganglia and white matter,” they said.
—Bianca Nogrady
Suggested Reading
Tsai HH, Liou HH, Muo CH, et al. Hepatitis C virus infection as a risk factor for Parkinson disease: A nationwide cohort study. Neurology. 2015 Dec 23 [Epub ahead of print].
High Urate Concentration May Protect Men Against Parkinson’s Disease
Men with a high plasma urate concentration have a decreased risk of developing Parkinson’s disease, independent of potential risk factors such as age, smoking, and caffeine intake, according to research published online ahead of print January 13 in Neurology. Plasma urate concentration appears to have no association with risk of Parkinson’s disease among women, however. For men, urate could protect against Parkinson’s disease risk or slow the progression of preclinical Parkinson’s disease, according to the authors.
“Our findings, together with previous observations that urate can be elevated by administration of its precursor inosine, which is generally safe and tolerable, in early Parkinson’s disease, provide strong evidence supporting the design of a randomized trial of urate elevation in patients with early Parkinson’s disease or pre-Parkinson syndrome,” said Xiang Gao, MD, PhD, Director of the Nutritional Epidemiology Laboratory at Pennsylvania State University in University Park.
Dr. Gao and colleagues examined blood samples from more than 90,000 men and women who participated in the Health Professionals Follow-up Study, the Nurses’ Health Study, or the Cancer Prevention Study II Nutrition Cohort. The researchers confirmed cases of Parkinson’s disease based on a detailed questionnaire completed by the treating neurologist or internist, or through a movement disorder specialist’s review of medical records. Only patients with definite or probable Parkinson’s disease were included in the analysis. Between one and six controls were selected randomly for each case. The investigators used questionnaires to collect data on potential confounders, including age, smoking status, height, weight, chronic diseases, and consumption of caffeinated coffee and alcohol.
Dr. Gao’s group identified 388 incident cases of Parkinson’s disease diagnosed after blood collection and matched them with 1,267 controls. Higher baseline urate concentrations were associated with lower risk of Parkinson’s disease in men, but not in women. The multivariate-adjusted risk ratios of Parkinson’s disease, comparing the highest and lowest quartiles of urate, were 0.63 in men and 1.04 in women. Adjusting the data for cardiovascular factors, including history of cardiovascular disease and diabetes, did not affect the results.
The researchers pooled the results of their study with those of three previous investigations that included 325 patients with Parkinson’s disease. The pooled risk ratios comparing the highest and lowest categories of urate were 0.63 in men and 0.89 in women.
Animal and human studies suggest that urate is a neuroprotective agent, noted the authors. Their own previous research indicates that urate may slow disease progression during the preclinical stage of Parkinson’s disease.
The strengths of the current study include its prospective design, large sample size, and availability of information on covariates that may confound the potential association between serum urate and Parkinson’s disease risk, according to the authors. The study was based on a single measure of plasma urate, however, and did not account for within-person variability in urate levels. In addition, the results may be difficult to generalize because the majority of participants were Caucasian and had high educational attainment and socioeconomic status.
—Erik Greb
Suggested Reading
Gao X, O’Reilly ÉJ, Schwarzschild MA, Ascherio A. Prospective study of plasma urate and risk of Parkinson disease in men and women. Neurology. 2016 Jan 13 [Epub ahead of print].
Point/Counterpoint: So you think you can make a vascular surgeon in 5 years?
YES
BY MALACHI G. SHEAHAN III, M.D.
Believe it or not, one thing just about all vascular surgeons will agree upon is the proper way to train. For most of us, the best way to become a surgeon is the way we became a surgeon. Therefore, unless there is some aberration in the readership circulation of Vascular Specialist, I begin this debate facing an uphill battle with most of you.
The question of how to become a vascular surgeon should not be some esoteric matter left to be debated in the late Friday session of some educational symposium. Indeed, I commend the editors for bringing this issue to a more public forum. As much as I enjoy listening to the twenty-seventh abstract redefining the risks of type 2 endoleaks at our national meeting, the matter of how to create a vascular surgeon will define our profession for years to come.
Data from the Association of American Medical Colleges shows that there is now one vascular surgeon for every 100,000 people in the U.S. That is one vascular surgeon for every 350 dialysis patients or one for every 2,600 individuals with peripheral artery disease. We are already in short supply and 40% of us are over 55 years old. Applicant numbers to traditional 5 + 2 programs have plateaued over the past 10 years, suggesting that expanding fellowship positions is not the answer. Who then will fill this gap? As Dr. Ian Malcolm warned us in “Jurassic Park,” life will find a way.
If vascular surgeons don’t act to address this need, I know two candidates who are interested. Both interventional cardiology (10% over 55) and interventional radiology (12% over 55) have younger workforces that are growing at a superior rate. Between 2008 and 2013, the largest increases in training positions offered among all medical specialties were seen in interventional cardiology and interventional radiology.
Luckily our profession has not been caught completely off guard. Integrated vascular residency positions were first offered in 2007. Based on the quality and quantity of applicants, the number of institutions offering the integrated 0 + 5 vascular residency has grown from 17 in 2009 to 51 in 2015.
As practiced today, vascular surgery bears little similarity to even a decade ago. Limb salvage, aortic interventions, vein care, and access management all require highly specific training not typically offered in a general surgery residency. Our new board certification emphasizes the ability to supervise and interpret radiologic tests. Vascular surgery training is no longer a honing of general surgical skills. We must teach and develop completely new areas of expertise in our trainees. I propose the longer we have to focus on these specific abilities, the better our product will be.
A classic argument against traditional 5 + 2 training is why have a postgraduate-year 4 or 5 performing a pancreaticoduodenectomy (Whipple procedure) when they will never perform one in practice? This, however, is a flawed point, as open abdominal cases contain many aspects that translate well to vascular surgery. I believe the enemy is not Allen O. Whipple, but rather Harvey J. Laparoscope.1 Much like the declining numbers of open aortic cases, laparoscopic surgery has replaced much of the open surgical volume in general surgery training programs. How well these skills translate to vascular is unknown, but at face value, the cross-applicability doesn’t seem to pass muster. So while no case is wasted, perhaps our trainees’ time could be spent more efficiently.
Integrated 0 + 5 programs give total control of the rotations and curriculum to the vascular program director. This allows a truly cohesive approach to developing vascular skills and knowledge over a five-year period interspersed with core general surgery skills and principles. Surgery rotations such as trauma, ICU, and cardiothoracic surgery that provide the best educational content to our trainees can now be handpicked, while lower-yield content like advanced laparoscopy and breast is avoided. Quality control is now in the hands of a vascular surgeon.
After all, Erica, if the sanctity of the five-year general surgery residency must be preserved, why do you run one of the world’s only 4 + 2 programs? Clearly you believe we can condense our trainees’ education without losing quality.
Using the available metrics and data points, it would be difficult to prove superiority of the 0 + 5 pathway over the 5 + 2. Therefore, I will borrow a technique from my clinical trialist friends and claim noninferiority. Follow my logic here, and I promise not to include a convoluted composite endpoint like strokes, deaths, and non-Q-wave MIs induced in training directors.
Our best test for measuring cognitive development during vascular training remains the Vascular Surgery In-Training Examination (VSITE). Looking at the 2015 results, the L5 integrated residents received a better average standard score (565 vs. 542) than their L2 fellowship counterparts. In fact, the L5 integrated residents had a superior score on seven of the nine vascular sub-tests.
For technical skill acquisition, we can look at both Accreditation Council for Graduate Medical Education (ACGME) case logs and the Fundamentals of Vascular Surgery (FVS) exam. The largest study of vascular surgical experience was published by P. Batista and colleagues from Thomas Jefferson University, Philadelphia, in 2015. They found that integrated residents had performed 12% more vascular procedures than traditional 5 + 2 residents (851 vs. 758) despite 2 years less training time. Our own FVS exam was conducted on more than 280 vascular trainees representing all levels from both paradigms. On this validated exam of technical skill, 94% of PGY 5 integrated residents received a passing score, compared with 92% of PGY 7 fellows. Interestingly, mean scores were significantly higher for PGY 5 integrated residents vs. first-year fellows (P less than .005) despite the former group receiving one year less training.
Perhaps the final barrier to the success of the integrated pathway is our own preconceived notions. Doubters often cite some unmeasurable like “maturity” as a deterrent. Do we question the maturity of the general surgeon with five years of residency? How about the pediatrician or general practitioner with fewer? Isn’t maturity a key aspect of any physician?
I believe it is time to put our doubts to rest and embrace this new paradigm. We now have ample evidence that under the supervision of a vascular program director, a competent surgeon can be produced in five years.
These young people may not have followed our exact path, but they are our future.
Dr. Sheahan is an associate professor and the program director of the Vascular Surgery Fellowship at the Health Sciences Center, School of Medicine, Louisiana State University, New Orleans.
References
1. Possibly not the actual name of the inventor of the laparoscope, but I’m working on a deadline here. [Editor’s Note: A summary of the complex history of the development of laparoscopy can be found here: J Laparoendosc Adv Surg Tech A. 1997 Dec;7:369-73.]
NO
BY ERICA L. MITCHELL, M.D.
Dr. Sheahan has already convinced himself that he has won this debate because he honestly believes that he has persuaded the Vascular Specialist readers of the merits and benefits of the integrated vascular surgery training paradigm. While I respect Mal for supporting a 5-year training paradigm, I am prepared to argue for a potentially even shorter surgical training model than that set by the integrated 0 + 5 (and in some cases 0 + 6 or 0 + 7) time-based archetype.
I propose, and implore, that vascular surgery educators adopt a competency-based educational (CBE) framework in which trainees complete their training when competence has been met and demonstrated through objective performance benchmarks, whether that is after 7 years, 5 years, or even 4 or fewer years of vascular surgical training.
The goal of all graduate medical education is to ensure that the graduating physician is competent to practice independently in his or her chosen field of medicine. For nearly a century, surgical training has been based on the apprenticeship model as articulated by Halsted. Residents work with faculty members on clinical rotations, gaining experience while providing service to patients. The rotations have formal educational goals and objectives, but resident experience relies heavily on the patients who present to the clinical service. The time in training is fixed: for vascular surgery, it is either 5 years via the integrated 0 + 5 track or 6 to 7 years via the early-specialization or traditional training tracks. Board eligibility requires completion of this training time, documentation of operative case logs, and a “ready to practice independently” attestation from the vascular surgery program director. It is unusual for surgical residents not to complete their program or to remain in their program for additional training, despite recent evidence suggesting that current surgical training may be resulting in suboptimal experiences.1
As a consequence of time-based residency training, residents completing vascular surgical training vary in competence, and currently there is no mechanism to address this problem. While, I am sure you will agree, none of us think we are graduating incompetent vascular surgeons, we do come across residents or fellows who we believe are not yet ready for autonomous practice at the completion of their training, regardless of their training paradigm. Because time determines completion of training, these residents are, unfortunately, finished when their designated training period ends, regardless of demonstrated skills or knowledge. While this is concerning, we also see the counterpart to the unprepared resident.
We have all witnessed exceptional trainees in our programs. These trainees, regardless of their training program, sail through their surgical residencies. They meet all of the defined educational milestones, finish all of the program requirements, and demonstrate the ability to care for patients unsupervised well before their set graduation date. For both types of residents, educational landmarks, as defined by the ACGME, are of secondary importance, and since only time determines completion of training, the curriculum becomes irrelevant. The question then becomes: why work to define a body of vascular surgical knowledge or a required set of technical and nontechnical skills if competence is defined as time in training? Mal, surely you don’t support graduating a trainee simply because he or she has spent five years in training? Hopefully you would want to know that this graduating trainee is ready and competent to practice the full scope of vascular surgery safely and autonomously.
Competency-based education is gaining momentum around the world as medical educators, physicians, and policy makers try to ensure that our graduating specialists are acquiring and demonstrating the competencies needed to practice in today’s rapidly evolving health care systems. It is becoming the standard in physician training because of the perception that it provides more transparent standards and increased public accountability. Competency-based training is learner centric, outcomes based, and differentiated. A key distinguishing feature of CBE is that residents can progress through the educational process at different rates: the most capable and talented individuals should be able to make career transitions earlier, thus allowing them to enter the workforce at an accelerated rate. Others, requiring more time, would still attain the appropriate level of knowledge, skills, and attitudes needed for independent practice, and would leave the program only when competent.
With numerous nonsurgical specialties encroaching upon traditional domains of vascular surgery, it is essential that our specialty lead the field in vascular education so as to maintain our stronghold on these areas of expertise. Competency-based training is a logical evolutionary step from our traditional time-in-training system. Such training should improve, or at least verify, the quality of educational outcomes for our vascular trainees and our varying training programs. This model of education will allow comparisons among training programs, differing training tracks, and even differing specialty practices. I urge the vascular surgery community to discuss this concept and ultimately to implement it.
Dr. Mitchell is a professor of surgery, program director for vascular surgery, and vice-chair of Quality, Department of Surgery, Division of Vascular Surgery, Oregon Health and Science University, Portland.
References
YES
BY MALACHI G. SHEAHAN III, M.D.
Believe it or not, one thing just about all vascular surgeons will agree upon is the proper way to train. For most of us, the best way to become a surgeon is the way we became a surgeon. Therefore, unless there is some aberration in the readership circulation of Vascular Specialist, I begin this debate facing an uphill battle with most of you.
The question of how to become a vascular surgeon should not be some esoteric matter left to be debated in the late Friday session of some educational symposium. Indeed, I commend the editors for bringing this issue to a more public forum. As much as I enjoy listening to the twenty-seventh abstract redefining the risks of type 2 endoleaks at our national meeting, the matter of how to create a vascular surgeon will define our profession for years to come.
Data from the Association of American Medical Colleges shows that there is now one vascular surgeon for every 100,000 people in the U.S. That is one vascular surgeon for every 350 dialysis patients or one for every 2,600 individuals with peripheral artery disease. We are already in short supply and 40% of us are over 55 years old. Applicant numbers to traditional 5 + 2 programs have plateaued over the past 10 years, suggesting that expanding fellowship positions is not the answer. Who then will fill this gap? As Dr. Ian Malcolm warned us in “Jurassic Park,” life will find a way.
If vascular surgeons don’t act to address this need, I know two candidates who are interested. Both interventional cardiology (10% over 55) and interventional radiology (12% over 55) have younger workforces that are growing at a superior rate. Between 2008 and 2013, the largest increases in training positions offered among all medical specialties were seen in interventional cardiology and interventional radiology.
Luckily our profession has not been caught completely off guard. Integrated vascular residency positions were first offered in 2007. Based on the quality and quantity of applicants, the number of institutions offering the integrated 0 + 5 vascular residency has grown from 17 in 2009 to 51 in 2015.
As practiced today, vascular surgery bears little similarity to even a decade ago. Limb salvage, aortic interventions, vein care, and access management all require highly specific training not typically offered in a general surgery residency. Our new board certification emphasizes the ability to supervise and interpret radiologic tests. Vascular surgery training is no longer a honing of general surgical skills. We must teach and develop completely new areas of expertise in our trainees. I propose the longer we have to focus on these specific abilities, the better our product will be.
A classic argument against traditional 5 + 2 training is why have a postgraduate-year 4 or 5 performing a pancreaticoduodenectomy (Whipple procedure) when they will never perform one in practice? This, however, is a flawed point, as open abdominal cases contain many aspects that translate well to vascular surgery. I believe the enemy is not Allen O. Whipple, but rather Harvey J. Laparoscope.1 Much like the declining numbers of open aortic cases, laparoscopic surgery has replaced much of the open surgical volume in general surgery training programs. How well these skills translate to vascular is unknown, but at face value, the cross-applicability doesn’t seem to pass muster. So while no case is wasted, perhaps our trainees’ time could be spent more efficiently.
Integrated 0 + 5 programs give total control of the rotations and curriculum to the vascular program director. This allows a truly cohesive approach to developing vascular skills and knowledge over a five year period interspersed with core general surgery skills and principals. Surgery rotations such as trauma, ICU, and cardiothoracic surgery that provide the best educational content to our trainees can now be handpicked, while avoiding lower-yield content like advanced laparoscopy and breast. Quality control is now in the hands of a vascular surgeon.
After all, Erica, if the sanctity of the five year general surgery residency must be preserved, why do you run one of the world’s only 4 + 2 programs? Clearly you believe we can condense our trainees’ education without losing quality.
Using the available metrics and data points it would be difficult to prove superiority of the 0 + 5 pathway to the 5 + 2. Therefore, I will borrow a technique from my clinical trials’ friends and claim noninferiority. Follow my logic here and I promise not to include a convoluted endpoint like strokes, deaths, and non-Q wave MIs induced in training directors.
Our best test for measuring cognitive development during vascular training remains the Vascular Surgery In-Training Examination (VSITE). Looking at the 2015 results, the L5 integrated residents received a better average standard score (565 vs. 542) than their L2 fellowship counterparts. In fact, the L5 integrated residents had a superior score on seven of the nine vascular sub-tests.
For technical skill acquisition, we can look at both Accreditation Council for Graduate Medical Education (ACGME) case logs and the Fundamentals of Vascular Surgery (FVS) exam. The largest study of vascular surgical experience was published by P. Batista and colleagues from Thomas Jefferson University, Philadelphia, in 2015. They found integrated residents had performed 12% more vascular procedures than traditional 5 + 2 residents (851 vs. 758) despite 2 years less training time. Our own FVS exam was conducted on more than 280 vascular trainees representing all levels from both paradigms. On this validated exam of technical skill, 94% of PGY 5 integrated residents received a passing score, compared with 92% of PGY 7 fellows. Interestingly, means scores were significantly higher for PGY 5 integrated residents vs. first year fellows (P less than .005) despite the former group receiving one year less training.
Perhaps the final barrier to the success of the integrated pathway is our own preconceived notions. Doubters often cite some unmeasurable like “maturity” as a deterrent. Do we question the maturity of the general surgeon with five years of residency? How about the pediatrician or general practitioner with fewer? Isn’t maturity a key aspect of any physician?
I believe it is time to put our doubts to rest and embrace this new paradigm. We now have ample evidence that under the supervision of a vascular program director, a competent surgeon can be produced in five years.
These young people may not have followed our exact path, but they are our future.
Dr. Sheahan is an associate professor and the program director of the Vascular Surgery Fellowship at the Health Sciences Center, School of Medicine, Louisiana State University, New Orleans.
References
1. Possibly not the actual name of the inventor of the laparoscope, but I’m working on a deadline here. [Editor’s Note; A summary of the complex history of the development of laparascopy can be found here: J Laparoendosc Adv Surg Tech A. 1997 Dec;7:369-73.]
NO
BY ERICA L. MITCHELL, M.D.
Dr. Sheahan has already convinced himself that he has won this debate because he honestly believes that he has persuaded the Vascular Specialist readers of the merits and benefits of the integrated vascular surgery training paradigm. While I respect Mal for supporting a 5 year training paradigm, I am prepared to argue for a potentially even shorter surgical training model than that set by the integrated 0 + 5 (and in some cases 0 + 6 or 0 + 7) time-based archetype.
I propose, and implore, that vascular surgery educators adopt a competency-based educational (CBE) framework in which trainees complete their training when competence has been met and demonstrated through objective performance benchmarks, whether that is after 7 years, 5 years, or even 4 or fewer years of vascular surgical training.
The goal of all graduate medical education is to ensure that the graduating physician is competent to practice independently in his or her chosen field of medicine. For nearly a century, surgical training has been based on the apprenticeship model as articulated by Halsted. Residents work with faculty members on clinical rotations, gaining experience while providing service to patients. The rotations have formal educational goals and objectives, but resident experience relies heavily on the patients who present to the clinical service. The time in training is set and for vascular surgery, the required time in training is either 5 years via the integrated 0 + 5 track or 6-7 years via the early-specialization or traditional training tracks. Board eligibility requires completion of this training time, documentation of operative case logs, and a “ready to practice independently” attestation from the vascular surgery program director. It is unusual for surgical residents not to complete their program or to remain in their program for additional training, despite recent evidence suggesting that current surgical training may be resulting in suboptimal experiences.1
As a consequence of time-based residency training, residents completing vascular surgical training vary in competence, and currently there is no mechanism to solve this situation. While, I am sure you will agree, none of us think we are graduating incompetent vascular surgeons, we do, however, come across residents or fellows whom we believe are not yet ready for autonomous practice at completion of their training, regardless of their training paradigm. With time determining completion of training these residents, unfortunately, at the end of their designated training period the training is done, regardless of demonstrated skills or knowledge. While this is concerning, we also see the counter to this unprepared resident.
We have all witnessed exceptional trainees in our programs. These trainees, regardless of their training program, sail through their surgical residencies. They meet all of the defined educational milestones, finish all of the program requirements, and demonstrate ability to care for patients unsupervised way before their set graduation date. For both these types of residents, educational landmarks, as defined by the ACGME, are of secondary importance and since only time determines completion of training, the curriculum becomes irrelevant. The question then becomes: why work to define a body of vascular surgical knowledge or a required set of technical and non-technical skills if competence is defined as time in training? Mal, surely you don’t support graduating a trainee simply because they have spent five years in training? Hopefully you would want to know that this graduating trainee is ready and competent to safely and autonomously practice the full scope of vascular surgical practice.
Competency-based education is gaining momentum around the world as medical educators, physicians, and policy makers try to ensure that our graduating specialists are acquiring and demonstrating the competencies needed to practice in today’s rapidly evolving heath care systems. It is becoming the standard in training of physicians because of the perception that it provides more transparent standards and increased public accountability. Competency-based training is learner centric, outcomes based, and differentiated. A key distinguishing feature of CBE is that residents can progress through the educational process at different rates: the most capable and talented individuals should be able to make career transitions earlier, thus allowing them to enter the workforce at an accelerated rate. Others, requiring more time, would still attain the appropriate level of knowledge, skills, and attitudes needed to enter independent practice, and leave the program only when competent.
With the emerging reality of numerous nonsurgical specialties encroaching upon various traditional domains of vascular surgery, it is essential that our specialty lead the field in vascular education so as to maintain our stronghold on these areas of expertise. Competency-based training is a logical evolutionary step from our traditional years-in-place based system. Such training should improve, or at least verify, the quality of educational outcomes for our vascular trainees and our varying training programs. This model of education will allow comparisons among training programs, differing training tracks and even differing specialty practices. I urge the vascular surgery community to discuss this concept and ultimately to implement it.
Dr. Mitchell is a professor of surgery, program director for vascular surgery, and vice-chair of Quality, Department of Surgery, Division of Vascular Surgery, Oregon Health and Science University, Portland.
References
YES
BY MALACHI G. SHEAHAN III, M.D.
Believe it or not, one thing just about all vascular surgeons will agree upon is the proper way to train. For most of us, the best way to become a surgeon is the way we became a surgeon. Therefore, unless there is some aberration in the readership circulation of Vascular Specialist, I begin this debate facing an uphill battle with most of you.
The question of how to become a vascular surgeon should not be some esoteric matter left to be debated in the late Friday session of some educational symposium. Indeed, I commend the editors for bringing this issue to a more public forum. As much as I enjoy listening to the twenty-seventh abstract redefining the risks of type 2 endoleaks at our national meeting, the matter of how to create a vascular surgeon will define our profession for years to come.
Data from the Association of American Medical Colleges shows that there is now one vascular surgeon for every 100,000 people in the U.S. That is one vascular surgeon for every 350 dialysis patients or one for every 2,600 individuals with peripheral artery disease. We are already in short supply and 40% of us are over 55 years old. Applicant numbers to traditional 5 + 2 programs have plateaued over the past 10 years, suggesting that expanding fellowship positions is not the answer. Who then will fill this gap? As Dr. Ian Malcolm warned us in “Jurassic Park,” life will find a way.
If vascular surgeons don’t act to address this need, I know two candidates who are interested. Both interventional cardiology (10% over 55) and interventional radiology (12% over 55) have younger workforces that are growing at a superior rate. Between 2008 and 2013, the largest increases in training positions offered among all medical specialties were seen in interventional cardiology and interventional radiology.
Luckily our profession has not been caught completely off guard. Integrated vascular residency positions were first offered in 2007. Based on the quality and quantity of applicants, the number of institutions offering the integrated 0 + 5 vascular residency has grown from 17 in 2009 to 51 in 2015.
As practiced today, vascular surgery bears little similarity to even a decade ago. Limb salvage, aortic interventions, vein care, and access management all require highly specific training not typically offered in a general surgery residency. Our new board certification emphasizes the ability to supervise and interpret radiologic tests. Vascular surgery training is no longer a honing of general surgical skills. We must teach and develop completely new areas of expertise in our trainees. I propose the longer we have to focus on these specific abilities, the better our product will be.
A classic argument against traditional 5 + 2 training is why have a postgraduate-year 4 or 5 performing a pancreaticoduodenectomy (Whipple procedure) when they will never perform one in practice? This, however, is a flawed point, as open abdominal cases contain many aspects that translate well to vascular surgery. I believe the enemy is not Allen O. Whipple, but rather Harvey J. Laparoscope.1 Much like the declining numbers of open aortic cases, laparoscopic surgery has replaced much of the open surgical volume in general surgery training programs. How well these skills translate to vascular is unknown, but at face value, the cross-applicability doesn’t seem to pass muster. So while no case is wasted, perhaps our trainees’ time could be spent more efficiently.
Integrated 0 + 5 programs give total control of the rotations and curriculum to the vascular program director. This allows a truly cohesive approach to developing vascular skills and knowledge over a five year period interspersed with core general surgery skills and principals. Surgery rotations such as trauma, ICU, and cardiothoracic surgery that provide the best educational content to our trainees can now be handpicked, while avoiding lower-yield content like advanced laparoscopy and breast. Quality control is now in the hands of a vascular surgeon.
After all, Erica, if the sanctity of the five year general surgery residency must be preserved, why do you run one of the world’s only 4 + 2 programs? Clearly you believe we can condense our trainees’ education without losing quality.
Using the available metrics and data points it would be difficult to prove superiority of the 0 + 5 pathway to the 5 + 2. Therefore, I will borrow a technique from my clinical trials’ friends and claim noninferiority. Follow my logic here and I promise not to include a convoluted endpoint like strokes, deaths, and non-Q wave MIs induced in training directors.
Our best test for measuring cognitive development during vascular training remains the Vascular Surgery In-Training Examination (VSITE). Looking at the 2015 results, the L5 integrated residents received a better average standard score (565 vs. 542) than their L2 fellowship counterparts. In fact, the L5 integrated residents had a superior score on seven of the nine vascular sub-tests.
For technical skill acquisition, we can look at both Accreditation Council for Graduate Medical Education (ACGME) case logs and the Fundamentals of Vascular Surgery (FVS) exam. The largest study of vascular surgical experience was published by P. Batista and colleagues from Thomas Jefferson University, Philadelphia, in 2015. They found integrated residents had performed 12% more vascular procedures than traditional 5 + 2 residents (851 vs. 758) despite 2 years less training time. Our own FVS exam was conducted on more than 280 vascular trainees representing all levels from both paradigms. On this validated exam of technical skill, 94% of PGY 5 integrated residents received a passing score, compared with 92% of PGY 7 fellows. Interestingly, means scores were significantly higher for PGY 5 integrated residents vs. first year fellows (P less than .005) despite the former group receiving one year less training.
Perhaps the final barrier to the success of the integrated pathway is our own preconceived notions. Doubters often cite some unmeasurable like “maturity” as a deterrent. Do we question the maturity of the general surgeon with five years of residency? How about the pediatrician or general practitioner with fewer? Isn’t maturity a key aspect of any physician?
I believe it is time to put our doubts to rest and embrace this new paradigm. We now have ample evidence that under the supervision of a vascular program director, a competent surgeon can be produced in five years.
These young people may not have followed our exact path, but they are our future.
Dr. Sheahan is an associate professor and the program director of the Vascular Surgery Fellowship at the Health Sciences Center, School of Medicine, Louisiana State University, New Orleans.
References
1. Possibly not the actual name of the inventor of the laparoscope, but I’m working on a deadline here. [Editor’s Note; A summary of the complex history of the development of laparascopy can be found here: J Laparoendosc Adv Surg Tech A. 1997 Dec;7:369-73.]
NO
BY ERICA L. MITCHELL, M.D.
Dr. Sheahan has already convinced himself that he has won this debate because he honestly believes that he has persuaded the Vascular Specialist readers of the merits and benefits of the integrated vascular surgery training paradigm. While I respect Mal for supporting a 5 year training paradigm, I am prepared to argue for a potentially even shorter surgical training model than that set by the integrated 0 + 5 (and in some cases 0 + 6 or 0 + 7) time-based archetype.
I propose, and implore, that vascular surgery educators adopt a competency-based educational (CBE) framework in which trainees complete their training when competence has been met and demonstrated through objective performance benchmarks, whether that is after 7 years, 5 years, or even 4 or fewer years of vascular surgical training.
The goal of all graduate medical education is to ensure that the graduating physician is competent to practice independently in his or her chosen field of medicine. For nearly a century, surgical training has been based on the apprenticeship model as articulated by Halsted. Residents work with faculty members on clinical rotations, gaining experience while providing service to patients. The rotations have formal educational goals and objectives, but resident experience relies heavily on the patients who present to the clinical service. The time in training is set and for vascular surgery, the required time in training is either 5 years via the integrated 0 + 5 track or 6-7 years via the early-specialization or traditional training tracks. Board eligibility requires completion of this training time, documentation of operative case logs, and a “ready to practice independently” attestation from the vascular surgery program director. It is unusual for surgical residents not to complete their program or to remain in their program for additional training, despite recent evidence suggesting that current surgical training may be resulting in suboptimal experiences.1
As a consequence of time-based residency training, residents completing vascular surgical training vary in competence, and currently there is no mechanism to solve this situation. While, I am sure you will agree, none of us think we are graduating incompetent vascular surgeons, we do, however, come across residents or fellows whom we believe are not yet ready for autonomous practice at completion of their training, regardless of their training paradigm. With time determining completion of training these residents, unfortunately, at the end of their designated training period the training is done, regardless of demonstrated skills or knowledge. While this is concerning, we also see the counter to this unprepared resident.
We have all witnessed exceptional trainees in our programs. These trainees, regardless of their training program, sail through their surgical residencies. They meet all of the defined educational milestones, finish all of the program requirements, and demonstrate ability to care for patients unsupervised way before their set graduation date. For both these types of residents, educational landmarks, as defined by the ACGME, are of secondary importance and since only time determines completion of training, the curriculum becomes irrelevant. The question then becomes: why work to define a body of vascular surgical knowledge or a required set of technical and non-technical skills if competence is defined as time in training? Mal, surely you don’t support graduating a trainee simply because they have spent five years in training? Hopefully you would want to know that this graduating trainee is ready and competent to safely and autonomously practice the full scope of vascular surgical practice.
Competency-based education is gaining momentum around the world as medical educators, physicians, and policy makers try to ensure that our graduating specialists are acquiring and demonstrating the competencies needed to practice in today’s rapidly evolving heath care systems. It is becoming the standard in training of physicians because of the perception that it provides more transparent standards and increased public accountability. Competency-based training is learner centric, outcomes based, and differentiated. A key distinguishing feature of CBE is that residents can progress through the educational process at different rates: the most capable and talented individuals should be able to make career transitions earlier, thus allowing them to enter the workforce at an accelerated rate. Others, requiring more time, would still attain the appropriate level of knowledge, skills, and attitudes needed to enter independent practice, and leave the program only when competent.
With numerous nonsurgical specialties encroaching upon traditional domains of vascular surgery, it is essential that our specialty lead the field in vascular education to maintain our expertise in these areas. Competency-based training is a logical evolutionary step from our traditional time-in-training system. Such training should improve, or at least verify, the quality of educational outcomes for our vascular trainees and our varied training programs. This model of education will allow comparisons among training programs, differing training tracks, and even differing specialty practices. I urge the vascular surgery community to discuss this concept and ultimately to implement it.
Dr. Mitchell is a professor of surgery, program director for vascular surgery, and vice-chair of Quality, Department of Surgery, Division of Vascular Surgery, Oregon Health and Science University, Portland.
Appendicitis, antibiotics, and surgery: An evolving trilogy
Appendicitis is the most common surgical emergency in children. It is seen at all ages; however, it is less common in infants and toddlers younger than 4 years and peaks at an incidence of 25/100,000 in children 12 to 18 years old. Fortunately, appendicitis is rarely fatal but can be associated with significant morbidity, especially in young children, in whom the diagnosis is often delayed and perforation is more common. Reducing morbidity requires early diagnosis and optimized management to prevent perforation and associated peritonitis.
The classical signs and symptoms of appendicitis are periumbilical pain migrating to the right lower quadrant, nausea, and low-grade fever. Presentation may vary if the location of the appendix is atypical, but it varies primarily with age. In young children, abdominal distension, hip pain with or without limp, and fever are commonplace. In older children, right lower quadrant abdominal pain that intensifies with coughing or movement is frequent. Localized tenderness also appears to be age related; right lower quadrant tenderness and rebound are more often found in older children and adolescents, whereas younger children have more diffuse signs.
When I started my career, abdominal x-rays would be performed in search of a fecalith. However, such studies were of low sensitivity, and clinical acumen had a primary role in the decision to take the child to the operating room. In the current era, ultrasound and CT scan provide reasonable sensitivity and specificity. Ultrasound criteria include a diameter greater than 6 mm, concentric rings (target sign), an appendicolith, high echogenicity, obstruction of the lumen, and fluid surrounding the appendix.
Because the pathogenesis of appendicitis involves occlusion of the appendiceal lumen, followed by overgrowth or translocation of bowel flora and inflammation of the appendiceal wall, anaerobes and gram-negative gut flora are the most important pathogens. In advanced cases, necrosis and gangrene of the appendix develop, with progression to rupture and peritonitis.
The traditional management was early surgical intervention to reduce the risk of perforation and peritonitis, with high rates of negative abdominal exploration accepted as a consequence. Today, the approach to management of appendicitis is undergoing reevaluation. Early antimicrobial treatment has become routine in the management of nonperforated, perforated, or abscessed appendicitis. However, the question being asked is, “Do all children with uncomplicated appendicitis need appendectomy, or is antibiotic management sufficient?”
P. Salminen et al. reported on the results of a randomized clinical trial in 530 patients aged 18-60 years, comparing antimicrobial treatment alone with early appendectomy. Among 273 patients in the surgical group, all but 1 underwent successful appendectomy, resulting in a success rate of 99.6% (95% CI, 98.0%-100.0%). In the antibiotic group, 186 of 256 patients (72.7%) treated with antibiotics did not require surgery; 70 (27.3%) underwent appendectomy within 1 year of initial presentation for appendicitis (JAMA. 2015 Jun 16;313[23]:2340-8). There were no intraabdominal abscesses or other major complications associated with delayed appendectomy in patients randomized to antibiotic treatment. The authors concluded that among patients with CT-proven, uncomplicated appendicitis, antibiotic treatment did not meet the prespecified criterion for noninferiority, compared with appendectomy. However, most patients randomized to antibiotics for uncomplicated appendicitis did not require appendectomy during the 1-year follow-up period.
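As a quick check of those proportions (our own arithmetic, not presented this way in the trial report), the antibiotic-group figures work out as follows:
\[
\frac{186}{256} \approx 0.727 \;(72.7\%) \ \text{managed without surgery}, \qquad \frac{70}{256} \approx 0.273 \;(27.3\%) \ \text{underwent appendectomy within 1 year}.
\]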
J.A. Horst et al. reviewed published reports of medical management of appendicitis in children (Ann Emerg Med. 2015 Aug;66[2]:119-22). They concluded that medical management of uncomplicated appendicitis in a select low-risk pediatric population is safe and does not result in significant morbidity. The arguments against a nonoperative approach include the risk of recurrent appendicitis (and the anxiety associated with any recurrence of abdominal pain), the risk of antibiotic-related complications, the potential for increased duration of hospitalization, and the relatively low morbidity of appendectomy in children. Factors associated with failed antibiotic management included fecaliths, fluid collections, and an appendiceal diameter greater than 1.1 cm on CT scan; the investigators concluded that such children are poor candidates for nonsurgical management.
The bottom line is that antimicrobial therapy, in the absence of surgery, can be effective. Certainly in remote settings where surgery is not readily available, antimicrobial therapy with fluid and electrolyte management and close observation can be used in children with uncomplicated appendicitis, with few failures and relatively few children requiring subsequent appendectomy. In more complicated cases with evidence of a fecalith, appendiceal abscess, or phlegmon, initial antimicrobial therapy reduces the acute inflammation and the urgent need for surgery, but persistent inflammation of the appendix is often observed, and appendectomy, either acutely or after improvement following antimicrobial therapy, appears indicated. Many different antimicrobial regimens have proven effective; ceftriaxone and metronidazole are associated with low rates of complications, offer an opportunity for once-daily therapy, and are cost effective, compared with other once-daily regimens.
Dr. Pelton is chief of pediatric infectious disease and coordinator of the maternal-child HIV program at Boston Medical Center.
Make the Diagnosis - February 2016
Diagnosis: Pyoderma gangrenosum
Pyoderma gangrenosum (PG) is an uncommon, noninfectious neutrophilic dermatosis that results in chronic ulcerative lesions. This disease process favors adult women and is associated with an underlying systemic disease in the majority of cases. The most common underlying systemic ailments include inflammatory bowel disease, arthritis, infection, and hematologic malignancy; PG can also be drug induced.
Typically, the lesions begin as an erythematous pustule or nodule on an extremity. As was the case with our patient, a history of a "spider bite" or other arthropod assault may be elicited as patients try to attribute a cause to the initial ulceration. The pustule then develops into an ulcer with a characteristic necrotic, violaceous, undermined border and a purulent base. This disease process also is associated with pathergy, in which minor trauma can induce additional lesions at remote sites.
There are four well-recognized types of pyoderma gangrenosum: the classic ulcerative type, the pustular type, the bullous type, and the superficial granulomatous type, also known as vegetative PG. The pustular type may be seen more frequently in patients with inflammatory bowel disease, the bullous type may predominate in hematologic disorders, and the superficial granulomatous type is known to occur following surgery or other trauma.
The pathology of lesions can be nonspecific. However, in untreated lesions, widespread infiltration of neutrophils can be demonstrated at the base of the ulcers with accompanying necrosis at the periphery of lesions.
Dr. Bilu Martin is in private practice at Premier Dermatology, MD, in Aventura, Fla. More diagnostic cases are available at edermatologynews.com. To submit your case for possible publication, send an email to dermnews@frontlinemedcom.com.

A 42-year-old woman with hypertension and a 10-year history of Crohn's disease, treated with weekly subcutaneous injections of adalimumab, presented with ulcerations on the lower extremities. She stated that the ulcerations began after a camping trip approximately 3 months earlier, during which she reported being bitten by several ants.
Pediatric Dermatology Consult - February 2016
By Catalina Matiz, M.D., and David Ginsberg
Nummular eczema
Nummular eczema is not an uncommon dermatosis that presents in pediatric and adult patients; its name, which derives from the Latin word nummulus (coin-like), refers to the coin-shaped plaques that characterize this condition. It also has been referred to as discoid eczema and nummular dermatitis.1
The lesions begin as erythematous papules and vesicles that extend into larger oval or circular plaques that often become crusted, and can later progress to dry and scaly plaques.1,2 Patients often complain of intense pruritus.1 The lesions can be single or multiple, and more commonly occur on the extensor extremities as well as the trunk, and rarely affect the neck and the head.1-3 The pathophysiology of nummular eczema is not fully understood. It can occur in patients that exhibit atopic manifestations such as atopic dermatitis and other allergies, but there has been no clear link found between nummular eczema and atopy.3,4
Many theories exist implicating causative factors, including Staphylococcus aureus colonization and xerosis.1 Similarly, some physicians believe that patch testing can be useful in these patients because of the potential for exacerbation by environmental allergens, but there is still no agreement on the ultimate cause.5 There is a higher incidence in males than in females, and in the pediatric population it is more common among school-aged children between the ages of 2 and 12.6 Overall, nummular eczema is more commonly seen in adults, but it can occur at any age.2,3,6
Differential diagnosis
Nummular eczema is commonly mistaken for tinea corporis.1 The coin-shaped lesions, from which nummular eczema gets its name, can resemble the characteristic annular plaques of “ringworm,” but a potassium hydroxide (KOH) test or a fungal culture is a simple way to differentiate between the two conditions.
Nummular eczema occasionally can be confused with psoriasis, as both entities can present with oval plaques. Psoriasis lesions tend to be pinker and less erythematous than nummular eczema lesions, and most psoriasis plaques present with a characteristic silvery scale.7 Clinically, nummular eczema is frequently associated with extreme pruritus, while in psoriasis the pruritus is less prominent.7
A biopsy would yield a more definitive diagnosis in difficult cases. Histologically, nummular eczema resembles other forms of spongiotic dermatitis, while psoriasis has very distinct histological features.7 Differentiating between contact dermatitis and nummular eczema relies on a thorough history of known allergies and potential exposure to environmental allergens. If history alone does not yield a definitive diagnosis and a suspicion for contact allergy is high, patch testing could help support one diagnosis over the other.5
Treatment
The generally accepted first-line therapy is mid- to high-potency topical corticosteroids in an ointment preparation or under occlusion.1,4 Other topical agents used include tar preparations and calcineurin inhibitors.4 Intralesional corticosteroid injection can be used to treat isolated lesions that fail to respond to topical treatment.4
As with almost all manifestations of dermatitis, general gentle skin care measures and daily moisturizing are recommended.1 For more severe cases in older children, narrow-band UVB light therapy can be helpful.1 Due to their efficacy in treatment of other forms of refractory dermatitis, systemic therapy with cyclosporine, azathioprine, mycophenolate mofetil, and methotrexate can be used in cases in which phototherapy fails or is not accessible.4
In cases recalcitrant to topical therapies, secondary staphylococcal infection always should be ruled out and treated with systemic antimicrobials such as first generation cephalosporins.1
References
- Eczematous eruptions in childhood, in “Hurwitz Clinical Pediatric Dermatology,” 4th ed. (New York, N.Y.: Elsevier), pp. 59-60.
- Acta Derm Venereol. 1961;41:453-60.
- Acta Derm Venereol. 1969;49(2):189-96.
- Australas J Dermatol. 2010 May;51(2):128-30.
- Contact Dermatitis. 1997 May;36(5):261-4.
- Pediatr Dermatol. 2012 Oct;29(5):580-3.
- Dermatol Ther. 2006 Mar-Apr;19(2):73-82.
Dr. Matiz is assistant professor of dermatology at Rady Children’s Hospital San Diego–University of California, San Diego and Mr. Ginsberg is a research associate at the hospital. Dr. Matiz and Mr. Ginsberg said they have no relevant financial disclosures.

A 9-month-old male with no significant previous medical history presents with a very itchy rash that has been present for 6 weeks. His mother reports that the lesions began as small red bumps on the extremities and torso that developed over the course of a few weeks into large, round, red, very pruritic plaques. He has been treated with an antifungal cream for several weeks without resolution, and most recently his mother has been applying hydrocortisone 2.5% cream, also without improvement. On exam, the patient is a well-appearing infant who is visibly irritated by the pruritus accompanying his rash. There are several 1-cm to 4-cm round and oval dry, scaly, erythematous plaques on the trunk (see photo) and a few on the extremities. There is no generalized xerosis.
High Headache Frequency Is More Likely During Perimenopause
Women in perimenopause are at increased risk of high-frequency headache, compared with premenopausal women, according to data published online ahead of print January 21 in Headache. Women in menopause also are at increased risk of high-frequency headache, but the effect of menopause on headache frequency may be mediated or confounded by medication overuse or depression.
“Our results confirm the commonly held belief that the perimenopause worsens headache, but challenge the idea that migraine ‘always’ improves during the menopause,” said Vincent T. Martin, MD, Professor of Internal Medicine in the University of Cincinnati’s (UC) Division of General Internal Medicine and codirector of the Headache and Facial Pain Program at the UC Neuroscience Institute. “Recognition of the increased risk of high-frequency headache during the menopausal transition suggests a need for optimized preventive treatment of migraine during this time of women’s life.”
Research has suggested a lower prevalence of headache or migraine during menopause, compared with premenopause. No previous studies have analyzed whether frequency of headache attacks changes during the menopausal transition among women with migraine, however. Dr. Martin and colleagues sought to determine whether the percentage of female migraineurs with high-frequency headache, defined as 10 or more days/month, is greater during the perimenopausal and menopausal time periods, compared with the premenopausal period. The researchers also set out to examine whether any increase in high-frequency headache during a particular reproductive phase was restricted to the early or late stages of the phase.
An Analysis of AMPP Data
To answer their questions, the investigators conducted a cross-sectional study using data from the American Migraine Prevalence and Prevention (AMPP) study. The AMPP researchers elicited data about headache from 162,756 respondents age 12 or older in 2004 and invited a random subset of 24,000 people age 18 or older with self-reported severe headache to participate in annual follow-up surveys for the subsequent five years. Follow-up surveys included questions about sociodemographics (eg, BMI, smoking, and household income) and headache types and characteristics, in addition to the Migraine Disability Assessment Score. Dr. Martin and colleagues examined data from the 2006 follow-up survey because it contained questions on the menstrual cycle.
Eligible participants in the cross-sectional study were women between ages 35 and 65 with a diagnosis of migraine. Women who were pregnant or breastfeeding, had a history of hysterectomy or oophorectomy, or used hormonal therapies were excluded from the analysis. The investigators classified respondents as premenopausal, perimenopausal, or menopausal according to Stages of Reproductive Aging Workshop criteria.
Late Perimenopause and Headache Frequency
The analysis included 3,664 women, of whom 3,454 had episodic migraine and 210 had chronic migraine. In all, 1,263 women were classified as premenopausal, 1,283 as perimenopausal, and 1,118 as menopausal. Compared with women in premenopause, women in perimenopause and menopause used more migraine preventives and were more likely to overuse medication.
Approximately 8% of premenopausal women had high-frequency headache, compared with 12.2% of perimenopausal women and 12.0% of postmenopausal women. After adjustments for sociodemographics alone, the odds ratios (ORs) of high-frequency headache were 1.62 for perimenopausal women and 1.76 for menopausal women, compared with premenopausal women. After adjustment for BMI, current migraine preventive use, medication overuse, and depression, the OR decreased, but remained significant in the perimenopausal group (OR, 1.42) and lost significance for the menopausal group (OR, 1.27). Depression and medication overuse significantly increased the likelihood of high-frequency headache.
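To illustrate how an odds ratio of this size relates to the prevalences quoted above, the crude (unadjusted) calculation below is our own illustration rather than an analysis reported in the paper:
\[
\text{odds}_{\text{peri}} = \frac{0.122}{1-0.122} \approx 0.139, \qquad \text{odds}_{\text{pre}} = \frac{0.08}{1-0.08} \approx 0.087, \qquad \text{OR}_{\text{crude}} = \frac{0.139}{0.087} \approx 1.6,
\]
which is close to the sociodemographic-adjusted OR of 1.62 reported for the perimenopausal group.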
When the researchers examined participants in the early and late stages of perimenopause and adjusted data for all covariates, women in late perimenopause had an increased likelihood of high-frequency headache (OR, 1.72), but women in early perimenopause had a statistically insignificant increased risk of this outcome (OR, 1.22), compared with premenopausal women. When the researchers examined the early and late stages of menopause, compared with premenopause, they found no significant difference in risk of high-frequency headache after controlling for all covariates.
Results Contradict Common Belief
“These results suggest that the hormonal milieu of the late perimenopause is particularly provocative for high-frequency headache among migraineurs,” said Dr. Martin. Because the researchers did not collect data on premenstrual syndrome (PMS), they could not determine whether the increased risk for high-frequency headache during perimenopause occurred only in female migraineurs with PMS or in the entire population.
Epidemiologic studies have contributed to an impression that migraine prevalence declines in menopausal women, but the current study’s results contradict this impression. “Our study used high-frequency headache as its primary outcome measure, rather than migraine prevalence. It is plausible that during menopause, migraine prevalence decreases and migraine attacks occur more frequently in subgroups of women,” said Dr. Martin. “Women, as they get older, develop lots of aches and pains, joint [pain], and back pain, and it is possible [that] their overuse of pain medications for headache and other conditions might actually drive an increase in headaches for the menopause group,” he added.
Estrogen withdrawal in the late luteal phase, low serum levels of estrogen or progesterone, and increased uterine prostaglandin release could precipitate headache during the menopausal transition, said the researchers. These hormonal changes also may change the characteristics of the menstrual cycle, which could in turn affect headache frequency.
An advantage of the cross-sectional analysis is that it was a large population-based study of persons with migraine “that should have wide generalizability to the general population,” said Dr. Martin. The outcome measure of high-frequency headache, however, was not limited to migraine, but included all headaches. In addition, headache frequency was self-reported, and investigators did not confirm it with daily headache diaries. Finally, the researchers did not account or control for aura. “Our results should be considered preliminary until confirmed in future studies,” Dr. Martin concluded.
—Erik Greb
Suggested Reading
Martin VT, Pavlovic J, Fanning KM, et al. Perimenopause and menopause are associated with high frequency headache in women with migraine: results of the American Migraine Prevalence and Prevention Study. Headache. 2016 Jan 21 [Epub ahead of print].
Women in perimenopause are at increased risk of high-frequency headache, compared with premenopausal women, according to data published online ahead of print January 21 in Headache. Women in menopause also are at increased risk of high-frequency headache, but the effect of menopause on headache frequency may be mediated or confounded by medication overuse or depression.
“Our results confirm the commonly held belief that the perimenopause worsens headache, but challenge the idea that migraine ‘always’ improves during the menopause,” said Vincent T. Martin, MD, Professor of Internal Medicine in the University of Cincinnati’s (UC) Division of General Internal Medicine and codirector of the Headache and Facial Pain Program at the UC Neuroscience Institute. “Recognition of the increased risk of high-frequency headache during the menopausal transition suggests a need for optimized preventive treatment of migraine during this time of women’s life.”
Research has suggested a lower prevalence of headache or migraine during menopause, compared with premenopause. No previous studies have analyzed whether frequency of headache attacks changes during the menopausal transition among women with migraine, however. Dr. Martin and colleagues sought to determine whether the percentage of female migraineurs with high-frequency headache, defined as 10 or more days/month, is greater during the perimenopausal and menopausal time periods, compared with the premenopausal period. The researchers also set out to examine whether any increase in high-frequency headache during a particular reproductive phase was restricted to the early or late stages of the phase.
An Analysis of AMPP Data
To answer their questions, the investigators conducted a cross-sectional study using data from the American Migraine Prevalence and Prevention (AMPP) study. The AMPP researchers elicited data about headache from 162,756 respondents age 12 or older in 2004 and invited a random subset of 24,000 people age 18 or older with self-reported severe headache to participate in annual follow-up surveys for the subsequent five years. Follow-up surveys included questions about sociodemographics (eg, BMI, smoking, and household income) and headache types and characteristics, in addition to the Migraine Disability Assessment Score. Dr. Martin and colleagues examined data from the 2006 follow-up survey because it contained questions on the menstrual cycle.
Eligible participants in the cross-sectional study were women with a diagnosis of migraine between ages 35 and 65. Women who were pregnant, breastfeeding, had a history of hysterectomy or oophorectomy, or used hormonal therapies were excluded from the analysis. The investigators classified respondents as premenopause, perimenopause, and menopause according to Stages of Reproductive Aging Workshop criteria.
Late Perimenopause and Headache Frequency
The analysis included 3,664 women, of whom 3,454 had episodic migraine and 210 had chronic migraine. In all, 1,263 women were classified as premenopausal, 1,283 as perimenopausal, and 1,118 as menopausal. Compared with women in premenopause, women in perimenopause and menopause used more migraine preventives and were more likely to overuse medication.
Approximately 8% of premenopausal women had high-frequency headache, compared with 12.2% of perimenopausal women and 12.0% of postmenopausal women. After adjustments for sociodemographics alone, the odds ratios (ORs) of high-frequency headache were 1.62 for perimenopausal women and 1.76 for menopausal women, compared with premenopausal women. After adjustment for BMI, current migraine preventive use, medication overuse, and depression, the OR decreased, but remained significant in the perimenopausal group (OR, 1.42) and lost significance for the menopausal group (OR, 1.27). Depression and medication overuse significantly increased the likelihood of high-frequency headache.
When the researchers examined participants in the early and late stages of perimenopause and adjusted data for all covariates, women in late perimenopause had an increased likelihood of high-frequency headache (OR, 1.72), but women in early perimenopause had a statistically insignificant increased risk of this outcome (OR, 1.22), compared with premenopausal women. When the researchers examined the early and late stages of menopause, compared with premenopause, they found no significant difference in risk of high-frequency headache after controlling for all covariates.
Results Contradict Common Belief
“These results suggest that the hormonal milieu of the late perimenopause is particularly provocative for high-frequency headache among migraineurs,” said Dr. Martin. Because the researchers did not collect data on premenstrual syndrome (PMS) disorder, they could not determine whether the increased risk for high-frequency headache during perimenopause only occurred in female migraineurs with PMS or in the entire population.
Epidemiologic studies have contributed to an impression that migraine prevalence declines in menopausal women, but the current study’s results contradict this impression. “Our study used high-frequency headache as its primary outcome measure, rather than migraine prevalence. It is plausible that during menopause, migraine prevalence decreases and migraine attacks occur more frequently in subgroups of women,” said Dr. Martin. “Women, as they get older, develop lots of aches and pains, joint [pain], and back pain, and it is possible [that] their overuse of pain medications for headache and other conditions might actually drive an increase in headaches for the menopause group,” he added.
Estrogen withdrawal in the late luteal phase, low serum levels of estrogen or progesterone, and increased uterine prostaglandin release could precipitate headache during the menopausal transition, said the researchers. These hormonal changes also may change the characteristics of the menstrual cycle, which could in turn affect headache frequency.
An advantage of the cross-sectional analysis is that it was a large population-based study of persons with migraine “that should have wide generalizability to the general population,” said Dr. Martin. The outcome measure of high-frequency headache, however, was not limited to migraine, but included all headaches. In addition, headache frequency was self-reported, and investigators did not confirm it with daily headache diaries. Finally, the researchers did not account or control for aura. “Our results should be considered preliminary until confirmed in future studies,” Dr. Martin concluded.
—Erik Greb
Women in perimenopause are at increased risk of high-frequency headache, compared with premenopausal women, according to data published online ahead of print January 21 in Headache. Women in menopause also are at increased risk of high-frequency headache, but the effect of menopause on headache frequency may be mediated or confounded by medication overuse or depression.
“Our results confirm the commonly held belief that the perimenopause worsens headache, but challenge the idea that migraine ‘always’ improves during the menopause,” said Vincent T. Martin, MD, Professor of Internal Medicine in the University of Cincinnati’s (UC) Division of General Internal Medicine and codirector of the Headache and Facial Pain Program at the UC Neuroscience Institute. “Recognition of the increased risk of high-frequency headache during the menopausal transition suggests a need for optimized preventive treatment of migraine during this time of women’s life.”
Research has suggested a lower prevalence of headache or migraine during menopause, compared with premenopause. No previous studies have analyzed whether frequency of headache attacks changes during the menopausal transition among women with migraine, however. Dr. Martin and colleagues sought to determine whether the percentage of female migraineurs with high-frequency headache, defined as 10 or more days/month, is greater during the perimenopausal and menopausal time periods, compared with the premenopausal period. The researchers also set out to examine whether any increase in high-frequency headache during a particular reproductive phase was restricted to the early or late stages of the phase.
An Analysis of AMPP Data
To answer their questions, the investigators conducted a cross-sectional study using data from the American Migraine Prevalence and Prevention (AMPP) study. The AMPP researchers elicited data about headache from 162,756 respondents age 12 or older in 2004 and invited a random subset of 24,000 people age 18 or older with self-reported severe headache to participate in annual follow-up surveys for the subsequent five years. Follow-up surveys included questions about sociodemographics (eg, BMI, smoking, and household income) and headache types and characteristics, in addition to the Migraine Disability Assessment Score. Dr. Martin and colleagues examined data from the 2006 follow-up survey because it contained questions on the menstrual cycle.
Eligible participants in the cross-sectional study were women with a diagnosis of migraine between ages 35 and 65. Women who were pregnant, breastfeeding, had a history of hysterectomy or oophorectomy, or used hormonal therapies were excluded from the analysis. The investigators classified respondents as premenopause, perimenopause, and menopause according to Stages of Reproductive Aging Workshop criteria.
Late Perimenopause and Headache Frequency
The analysis included 3,664 women, of whom 3,454 had episodic migraine and 210 had chronic migraine. In all, 1,263 women were classified as premenopausal, 1,283 as perimenopausal, and 1,118 as menopausal. Compared with women in premenopause, women in perimenopause and menopause used more migraine preventives and were more likely to overuse medication.
Approximately 8% of premenopausal women had high-frequency headache, compared with 12.2% of perimenopausal women and 12.0% of postmenopausal women. After adjustments for sociodemographics alone, the odds ratios (ORs) of high-frequency headache were 1.62 for perimenopausal women and 1.76 for menopausal women, compared with premenopausal women. After adjustment for BMI, current migraine preventive use, medication overuse, and depression, the OR decreased, but remained significant in the perimenopausal group (OR, 1.42) and lost significance for the menopausal group (OR, 1.27). Depression and medication overuse significantly increased the likelihood of high-frequency headache.
When the researchers examined participants in the early and late stages of perimenopause and adjusted data for all covariates, women in late perimenopause had an increased likelihood of high-frequency headache (OR, 1.72), but women in early perimenopause had a statistically insignificant increased risk of this outcome (OR, 1.22), compared with premenopausal women. When the researchers examined the early and late stages of menopause, compared with premenopause, they found no significant difference in risk of high-frequency headache after controlling for all covariates.
Results Contradict Common Belief
“These results suggest that the hormonal milieu of the late perimenopause is particularly provocative for high-frequency headache among migraineurs,” said Dr. Martin. Because the researchers did not collect data on premenstrual syndrome (PMS) disorder, they could not determine whether the increased risk for high-frequency headache during perimenopause only occurred in female migraineurs with PMS or in the entire population.
Epidemiologic studies have contributed to an impression that migraine prevalence declines in menopausal women, but the current study’s results contradict this impression. “Our study used high-frequency headache as its primary outcome measure, rather than migraine prevalence. It is plausible that during menopause, migraine prevalence decreases and migraine attacks occur more frequently in subgroups of women,” said Dr. Martin. “Women, as they get older, develop lots of aches and pains, joint [pain], and back pain, and it is possible [that] their overuse of pain medications for headache and other conditions might actually drive an increase in headaches for the menopause group,” he added.
Estrogen withdrawal in the late luteal phase, low serum levels of estrogen or progesterone, and increased uterine prostaglandin release could precipitate headache during the menopausal transition, said the researchers. These hormonal changes also may change the characteristics of the menstrual cycle, which could in turn affect headache frequency.
An advantage of the cross-sectional analysis is that it was a large population-based study of persons with migraine “that should have wide generalizability to the general population,” said Dr. Martin. The outcome measure of high-frequency headache, however, was not limited to migraine, but included all headaches. In addition, headache frequency was self-reported, and investigators did not confirm it with daily headache diaries. Finally, the researchers did not control for aura. “Our results should be considered preliminary until confirmed in future studies,” Dr. Martin concluded.
—Erik Greb
Suggested Reading
Martin VT, Pavlovic J, Fanning KM, et al. Perimenopause and menopause are associated with high frequency headache in women with migraine: results of the American Migraine Prevalence and Prevention Study. Headache. 2016 Jan 21 [Epub ahead of print].
Hope may not be the best component of an exercise regimen
Judging by the crowd and the difficulty in finding a locker at my gym on January 1, a lot of people are serious about their 2016 New Year’s resolution to exercise and lose weight. But as most of us have experienced personally and professionally, embarking on a well-intended effort to exercise in the hope of losing weight more often results in frustration than a trip to the store to buy smaller-sized clothes.
The frequent answer to the question “What did the doctor say at your visit?” provides a partial explanation of this phenomenon: “Nothing, just that I should exercise and lose weight” is the usual hackneyed response. Nothing—as in nothing unexpected, nothing significant, and nothing specific was said. It is with this lack of specific advice that I feel many of us let our patients down.
We admonish patients to eat fewer calories, avoid the evil carbs, walk 10,000 steps, ride a bike, use the elliptical, or swim three times a week. These suggestions are concrete but broad, and their familiarity and lack of specificity leave patients feeling that there is no science behind them. And the truth is that many of us are not comfortable enough with current data from our exercise physiology colleagues to have a detailed discussion with our patients that pairs their specific goals with an exercise regimen and diet most likely to be beneficial. We may fear sounding like the morning talk show doctors, offering sound bites instead of engaging in an evidence-based dialogue with our patients.
Many of our patients cannot afford a personal trainer to guide and cajole them through a successful regimen—assuming that they, or we, can separate myth and fact and choose an appropriate trainer. We should try to be their guide and sounding board as well as coach and cheerleader.
In this issue of the Journal, John and Christopher Higgins present a primer on the background information to use when talking with our patients about starting an exercise program focused on weight loss. They provide useful references that support specific approaches to achieve realistic expectations, and they review and compare various strategies.
I’m sure by March it will again be easier to find a locker at my gym. And I hope by then that my new workout plan will be more scientifically based, as well as a bit more effective. Even data-based hope springs eternal.
The ethics of ICDs: History and future directions
In 1975, Julia and Joseph Quinlan approached the administrator of St. Clare’s Hospital in Denville, New Jersey, and requested that the mechanical ventilator on which their adopted daughter, Karen, was dependent be turned off. Karen Ann Quinlan, 21 years old, was in a permanent vegetative state after a severe anoxic event, and her parents had been informed by the hospital’s medical staff that she would never regain consciousness.
To the Quinlans’ request to withdraw the ventilator, the hospital administrator replied, “You have to understand our position, Mrs. Quinlan. In this hospital we don’t kill people.”1
The administrator’s response was consistent with prevailing ethical and legal perspectives, analyses, and directives at that time related to discontinuation of life-sustaining treatment. In the mid-1970s, the American Medical Association’s position was that it was permissible to not put a patient on a ventilator (ie, a physician could withhold a life-sustaining treatment), but once a patient was on a ventilator, it was not permissible to take the patient off if the intention was to allow death to occur.1 However, the New Jersey Supreme Court ultimately found this distinction between withholding and withdrawing unconvincing, and ruled unanimously that Karen Quinlan’s ventilator could be turned off.2
THE HASTINGS CENTER REPORT: STOPPING IS THE SAME AS NOT STARTING
During the subsequent decade, further ethical analysis and additional legal cases resulted in new insights and more nuanced thinking about forgoing life-sustaining treatment.
These developments were summarized in a 1987 report by the Hastings Center,3 a leading bioethics research and policy institute. The report provided normative guidance for the termination of life-sustaining treatment and for the care of dying patients. It acknowledged that deciding not to start a life-sustaining treatment can emotionally and psychologically affect healthcare professionals differently than deciding to stop such a treatment. However, the report also asserted that there is no morally important difference between withholding and withdrawing such treatments.
Reflecting a partnership model between patients and professionals for healthcare decision-making, and affirming the ethical significance of both a burden-benefit analysis and patient autonomy, the report stated that when a patient or surrogate in collaboration with a responsible healthcare professional decides that a treatment under way and the life it supports have become more burdensome than beneficial to the patient, that is sufficient reason to stop. There is no ethical requirement that treatment, once initiated, must continue against the patient’s wishes or when the surrogate determines that it is more burdensome than beneficial from the patient’s perspective. In fact, imposing treatment in such circumstances violates the patient’s right to self-determination.3
The report noted further that, because of frequent uncertainty about the efficacy of proposed treatments, it is preferable to initiate time-limited trials of treatments and then later stop them if they prove ineffective or become overly burdensome from a patient’s perspective.
ICDs ARE LIKE OTHER LIFE-SUSTAINING THERAPIES
In this issue of Cleveland Clinic Journal of Medicine, Baibars et al4 address the question of how implantable cardioverter-defibrillators (ICDs) should be managed at the end of life. The historical events and developments recounted above regarding withdrawing life-sustaining technologies are an appropriate context for ethically assessing the management of ICDs for dying patients.
Obviously, ICDs are not ventilators, but like ventilators, they are life-sustaining therapy, as are dialysis machines, blood transfusions, medically supplied nutrition and hydration, ventricular assist devices, and other implantable electronic cardiac devices such as pacemakers. Each of these life-sustaining therapies, depending on a patient’s clinical condition, underlying illness, and comorbidities, can become a death-prolonging technology.
An ethical framework and analysis about whether to continue any life-sustaining therapy, including an ICD, must include an assessment of the benefit-to-burden ratio from the patient’s perspective. Does the therapy enhance or maintain a quality of life acceptable to the patient? Or has it become overly burdensome and does it maintain a quality of life the patient finds (or would find) unacceptable? If the latter is true, and especially in the context of an underlying terminal condition, then shifting the goals of care to focus on comfort is always appropriate and ethically justified. Treatments—including ICDs—that do not contribute to patient comfort should be withdrawn.
TOWARD COMPETENCY IN ETHICAL MANAGEMENT
Baibars et al note that much more needs to be done to enhance competencies, increase proficiencies, and mitigate the moral distress of healthcare professionals caring for dying patients with ICDs and other devices. To help clinicians achieve a personal and professional “comfort zone” for ethically managing patients with ICDs, we recommend that healthcare institutions, medical schools, and nursing schools take the following steps:
Develop comprehensive end-of-life policies, procedures, and protocols that incorporate specific guidance for managing cardiac devices and that have been endorsed by a hospital ethics committee. Such guidance can be informative and educational and can ensure that decisions and resulting actions (including stopping cardiac devices) are ethically supportable.
Provide more palliative care training in medical and nursing schools, residency programs, and continuing education activities so that front-line clinicians can deliver “basic,” “primary” palliative care not requiring specialty palliative medicine. This training, called for in the Institute of Medicine’s 2014 report, Dying in America,5 should include explicit ethics discussions about managing cardiac devices at the end of life.
Provide ongoing training in communication skills needed for all patient-professional encounters. Effectively engaging patients in goals-of-care discussions, especially patients with life-limiting illnesses such as heart failure, cannot be achieved without these skills.
- Pence G. Comas: Karen Quinlan and Nancy Cruzan. In: Classic Cases in Medical Ethics: Accounts of Cases That Have Shaped Medical Ethics, With Philosophical, Legal, and Historical Backgrounds, 3rd edition. Boston: McGraw-Hill; 2000:29–55.
- In the matter of Karen Quinlan, an alleged incompetent. In re Quinlan. 70 N.J. 10, 355 A.2d 647 (1976), cert. denied, 429 U.S. 922 (1976).
- Wolf SM. Hastings Center. Guidelines on the Termination of Life-Sustaining Treatment and Care of the Dying: A Report by the Hastings Center. The Hastings Center: Briarcliff Manor, NY; 1987.
- Baibars MM, Alraies MC, Kabach A, Pritzker M. Can patients opt to turn off implantable cardioverter-defibrillators near the end of life? Cleve Clin J Med 2016; 83:97–98.
- National Academy of Sciences. Dying in America: improving quality and honoring individual preferences near the end of life. www.iom.edu/Reports/2014/Dying-In-America-Improving-Quality-and-Honoring-Individual-P-Near-the-End-of-Life.aspx. Accessed January 4, 2016.
Veterans, guilt, and suicide risk: An opportunity to collaborate with chaplains?
Suicidal behavior is a major cause of morbidity and mortality in the United States,1 and active-duty and reserve military personnel and veterans account for nearly 18% of suicide deaths.2 By one estimate, as many as 22 veterans die by suicide each day.3 These rates are thought to be due to a higher incidence of mental illness in certain veteran populations relative to the general population.4–8 Consequently, a number of mental health services are available to veterans in a variety of clinical and community settings.
Chaplains and clinicians bring complementary skills and services to the problem of suicide risk among veterans. In particular, helping at-risk veterans deal with experiences of guilt is an opportunity for interdisciplinary collaboration. Available literature supports the potential utility of chaplaincy services in supporting at-risk veteran populations.9–15
But while most healthcare facilities have chaplains on staff, there is little information to guide any such collaboration. Further, healthcare providers appear to have a limited understanding of chaplaincy services, the “language” within which chaplains operate, or the roles chaplains play in healthcare settings.16
In the following discussion, using the example of experiences of guilt, we offer our insights and suggestions on how chaplaincy services may prove useful in alleviating this complex emotion in veterans at risk of suicide.
BENEFITS OF TALKING TO A CHAPLAIN
Collaboration between healthcare providers and pastoral care professionals has been suggested as a means of enhancing the treatment of patients with mental illness.17,18 Chaplains draw from a variety of faith traditions and are usually trained to respond to the needs of people from a variety of religious and spiritual backgrounds. They provide some non-faith-based services (eg, crisis intervention, life review, bereavement counseling) resembling those also provided in formal mental healthcare settings.19 By facilitating religious and spiritual coping and religious practice and responding to religious and spiritual needs, chaplains also offer a level of support not typically offered by formal mental healthcare providers.20
Veterans at risk of suicide sometimes look to pastoral care providers, particularly chaplains, for mental health support.9,10 Research on the effects of chaplaincy services on suicidal behavior is just beginning to emerge.15 Still, the US Department of Health and Human Services has recognized pastoral care services as having a “beneficial and therapeutic effect on the medical condition of a patient.”11
For example, in one study, hospital inpatients reported higher satisfaction if they had been visited by a chaplain.12 Chaplains help align treatment plans with patient values and wishes.13 In another study,14 patients undergoing coronary artery bypass grafting who were randomized to receive five visits from a chaplain were found to have a higher rate of positive religious coping (eg, forgiveness, letting go of anger). Positive religious coping has been correlated with lower levels of psychological stress and better mental health outcomes.
EXPERIENCING GUILT IS LINKED TO RISK OF SUICIDE
Suicidal behavior is complex, multifaceted, and linked to genetic, neurologic, psychological, social, and cultural factors.21
Assessing for and addressing certain complex emotions, such as guilt and shame, is an important part of suicide prevention efforts. Guilt is defined as a “controllable psychological state that is typically linked to a specific action or behavior, and which entails regret or remorse.”22
Guilt has been linked to risk of suicide in veterans.23–25 In one study, close to 75% of veterans who had thought about suicide said they frequently experienced guilt about having violated the precepts of their faith group, family, God, life, or the military.26
Such findings suggest that the sense of guilt experienced by some at-risk veterans may be grounded in a variety of contexts. For example, faith communities that place a strong emphasis on obedience to moral, ethical, and religious precepts may contribute to the experience of guilt unless balanced by a message of grace or favor from a benevolent God or deity. Without this balance, engaging in activities that are not fully sanctioned by one’s faith community may lead to guilt.
Families might also contribute to veterans’ experiences of guilt by placing unrealistic expectations on them. And the family environment may not be conducive to resolving feelings of guilt in veterans, harboring resentment and antipathies and making it very difficult to alleviate any ensuing sense of distress.
CLINICIAN’S ROLE IN ASSESSING GUILT
In addressing and assessing guilt in veterans at risk of suicide, clinicians should try to recognize the source and clinical implications of this emotion.
Recognize the source of guilt
Guilt may indicate a clinical disorder such as a mood disorder (eg, major depression).27 Mood disorders significantly increase the risk of suicidal behavior.28,29
Beyond diagnosing a clinical disorder, prescribing pharmacotherapy, and referring for mental healthcare services, recognizing the source of this emotion remains an important part of addressing a patient’s experience of guilt. Especially when associated with a clinical disorder, guilt is often irrational and excessive and does not appropriately reflect the experience or situation in question.
Case conceptualization, defined as “synthesizing the patient’s experience with relevant clinical theory and research,”30 can be used to understand the context in which the guilt-inducing action or behavior occurred and the veteran’s own interpretation of his or her actions. Understanding the source of the patient’s guilt could be used to plan treatment and resolve any underlying sense of distress.
As with other negative emotions, the affective component of guilt is often the result of cognitive distortions made as the person tries to make sense of what has occurred or to reconcile beliefs of right and wrong with the guilt-inducing behavior.31 The common cognitive errors associated with guilt include:
- Hindsight bias (a belief that one should have known what was going to happen as a result of one’s actions)
- Responsibility distortion (a belief that one’s actions directly caused an adverse event)
- Justification distortion (a belief that one’s actions were not justified by the situation)
- Wrongdoing distortion (a belief that one violated one’s own standards of right and wrong).31
Cognitive therapy to counter cognitive distortions
A variety of clinical options exist to help veterans manage and resolve guilt.
Cognitive therapy can counter the cognitive distortions that drive feelings of guilt. The goal is to guide patients to examine the evidence, process the event, and realize that their behavior was appropriate for the given situation. Cognitive processing therapy and prolonged exposure therapy have both been shown to decrease trauma-related guilt, though cognitive processing therapy was found to be better at decreasing guilt that arose from cognitive distortions.32
Guilt and suicide ideation have also been associated with a belief that one’s actions constituted an unforgivable sin.33 Responding to these inherently religious-spiritual cognitive distortions may be beyond the scope of expertise for many healthcare professionals. In such cases, it may be prudent to consider complementing clinical services with pastoral care. It follows that pastoral care services should only be provided if the veteran voices a desire and readiness for them. The clinician and chaplain can then work together to provide coordinated care to best meet the patient’s needs, to address the experience of guilt, and to alleviate the sense of distress.
A CHAPLAIN’S PERSPECTIVE ON GUILT
A prominent feature of pastoral practice is helping people, including at-risk veterans, resolve feelings of guilt regardless of the context on which the emotion is founded (eg, religion, shame).10 For many people, guilt is an impenetrable barrier, preventing resolution of whatever experience led to a sense of inner turmoil.
Forgiveness
In the context of pastoral care, resolution of guilt is ordinarily tied to a need for forgiveness. There are multiple ways in which forgiveness can be grounded in religious and spiritual contexts.34 Examples include forgiving others (ie, forswearing resentment, anger, or hatred directed toward another person), being forgiven by God or another benevolent deity, and forgiving oneself for violating perceived personal transgressions.35 In some cases, divine forgiveness may be conditional on interpersonal forgiveness.36 Forgiveness is also sometimes seen as a remedy for sin and a way to restore moral order.37
Some people may initially think they can never be forgiven. With time and the weight of one’s experiences, the impossibility of forgiveness can become so ingrained that it becomes a core belief. These core beliefs set up a vicious circle of thoughts and feelings, in which people and places and events from the past are continuously brought forward into the present. Anger and resentment become the steady diet for the tormented self that feels forever powerless over experienced injustices. These relived experiences drive the person into a deep isolation where the self becomes less human—a thing, an object. This experience of losing oneself proves excruciating and often leads to contemplation of suicide as a way to resolve anguish.
Hope emerges
Pastoral care services provide a means to reframe one’s core beliefs, manage and resolve the burden of guilt, and uncover new motivation for living.
The practice of spiritual direction within the discipline of pastoral care listens for these inner movements and encourages the person to give voice to them in his or her own words. No longer limited by a diminished, tormented self, the real self begins to relate to another reality that changes his or her identity, relieves the burden of guilt, and gives reason, purpose, and meaning to life.
Even with this opportunity for a new life, however, cognitive distortions based on a disproportional “faith-based prism” may persist. In this case, clinicians and chaplains must work closely together to reframe old understandings of self and incorrect understandings of religion and spirituality into one that continues to reinforce this newfound sense of hope.38
A VETERAN OF IRAQ WITH SUICIDE IDEATION
The following case illustrates how clinicians and chaplains may be able to work together to help facilitate the resolution of guilt.
A veteran who had served in Iraq had entered the Domiciliary Care Program at a US Department of Veterans Affairs medical center. He reported experiencing problems with guilt, forgiveness, and suicide ideation. A clinical therapeutic program was prescribed after a psychological evaluation uncovered that he was also struggling with depression and posttraumatic stress disorder.
His mental healthcare providers recognized the importance of incorporating a religious-spiritual component into the therapeutic plan, and so consulted with a chaplain to plan a suitable course of action. Specifically, this veteran reported feeling that he could not be forgiven for his military experiences, a feeling that was giving way to alienation and isolation from the God of his faith tradition.
The chaplain helped this veteran reflect on his military experiences, giving him the perspective he needed to view his God as one who truly loves him. He recognized instances in which he could have lost his life had it not been for others who intervened on his behalf at just the right time. This awareness caused him to think about his life differently, challenging him to reframe his relationship with God. Instead of simple coincidences, the veteran began to consider the mystery behind these times and places.
Over time and in keeping with the tenets of his faith tradition, the veteran stated that he was ultimately able to accept and receive God’s love and forgiveness. He now reports that these inner spiritual movements serve as a source of support during occasional relapses into emotional distress. These movements allow him to consider the mystery of his present life and its value based on his experience of his God’s love and forgiveness.
CARE FOR SUICIDE SURVIVORS
The experience of guilt is not limited to veterans. Those bereaved by suicide are also left to manage their own experiences of the loss and the ensuing complex emotions. Friends and loved ones of a suicide decedent may experience guilt, feeling that they somehow contributed to or failed to prevent the suicide. Such feelings of guilt are hypothesized to lower the threshold for suicidal behavior in those bereaved.39
Guilt and shame are also frequently encountered in survivors of nonfatal suicide attempts.40 Chaplaincy services might also prove useful for these individuals.
TIME IS EVERYTHING
Patients who may have an active psychopathology should have their clinical therapeutic needs attended to first. If the clinician deems pastoral care services to be an appropriate complementary support option, care should be taken to select a pastoral care provider who is adequately prepared for this role. Different professional organizations (eg, Association of Professional Chaplains) have established board-certification procedures, minimum education requirements, and supervised practical experience required for chaplaincy certification.
Also, spiritual growth and development remain a core focus of pastoral practice. Clinicians should discontinue any collaboration with pastoral care providers who question an individual’s faith or commitment to his or her faith, or who promote thinking or actions that could be deleterious to the patient’s therapeutic trajectory.
SUMMING UP
We have here presented our perspectives on how chaplaincy services can be used to complement clinical services in support of at-risk veterans struggling with experiences of guilt. Unfortunately, the current level of collaboration between chaplains and clinicians in support of at-risk veteran populations is limited.20 Our hope is that clinicians managing these at-risk patients will develop a greater awareness of how chaplaincy services might be able to help in alleviating experiences of guilt in at-risk veteran populations. A further hope is that such cases will serve as an opportunity for greater interdisciplinary collaboration, benefiting at-risk veterans most in need of support.
Acknowledgment: Dr. Rasmussen was supported by the Office of Academic Affiliations, Advanced Fellowship Program in Mental Illness Research and Treatment, US Department of Veterans Affairs, VISN 2 Center of Excellence for Suicide Prevention.
- Centers for Disease Control and Prevention (CDC). Suicide and self-inflicted injury. www.cdc.gov/nchs/fastats/suicide.htm. Accessed November 12, 2015.
- Centers for Disease Control and Prevention (CDC). National violent death reporting system (NVDRS). https://wisqars.cdc.gov:8443/nvdrs/nvdrsDisplay.jsp. Accessed November 12, 2015.
- Kemp JE, Bossarte R. Suicide data report, 2012. www.sprc.org/library_resources/items/suicide-data-report-2012. Accessed November 12, 2015.
- Bullman TA, Kang HK. The risk of suicide among wounded Vietnam veterans. Am J Public Health 1996; 86:662–667.
- Kang HK, Bullman TA. Is there an epidemic of suicides among current and former US military personnel? Ann Epidemiol 2009; 19:757–760.
- LeardMann CA, Powell TM, Smith TC, et al. Risk factors associated with suicide in current and former US military personnel. JAMA 2013; 310:496–506.
- Mrnak-Meyer J, Tate SR, Tripp JC, Worley MJ, Jajodia A, McQuaid JR. Predictors of suicide-related hospitalization among US veterans receiving treatment for comorbid depression and substance dependence: who is the riskiest of the risky? Suicide Life Threat Behav 2011; 41:532–542.
- Pietrzak RH, Russo AR, Ling Q, Southwick SM. Suicidal ideation in treatment-seeking veterans of Operations Enduring Freedom and Iraqi Freedom: the role of coping strategies, resilience, and social support. J Psychiatr Res 2011; 45:720–726.
- Kopacz MS, McCarten JM, Pollitt MJ. VHA chaplaincy contact with veterans at increased risk of suicide. South Med J 2014; 107: 661–664.
- Kopacz MS. Providing pastoral care services in a clinical setting to veterans at-risk of suicide. J Relig Health 2013; 52:759–767.
- Medicare program; payment for nursing and allied health education. Health Care Financing Administration (HCFA), HHS. Final rule. Fed Regist 2001; 66:3358–3376.
- Marin DB, Sharma V, Sosunov E, Egorova N, Goldstein R, Handzo GF. Relationship between chaplain visits and patient satisfaction. J Health Care Chaplain 2015; 21:14–24.
- Flannelly KJ, Emanuel LL, Handzo GF, Galek K, Silton NR, Carlson M. A national study of chaplaincy services and end-of-life outcomes. BMC Palliat Care 2012; 11:10.
- Bay PS, Beckman D, Trippi J, Gunderman R, Terry C. The effect of pastoral care services on anxiety, depression, hope, religious coping, and religious problem solving styles: a randomized controlled study. J Relig Health 2008; 47:57–69.
- Kopacz MS, Nieuwsma JA, Jackson GL, et al. Chaplains’ engagement with suicidality among their service users: findings from the VA/DoD Integrated Mental Health Strategy. Suicide Life Threat Behav 2015. [Epub ahead of print.]
- Flannelly KJ, Galek K, Bucchino J, Handzo GF, Tannenbaum HP. Department directors’ perceptions of the roles and functions of hospital chaplains: a national survey. Hosp Top 2005; 83:19–27.
- Farrell JL, Goebert DA. Collaboration between psychiatrists and clergy in recognizing and treating serious mental illness. Psychiatr Serv 2008; 59:437–440.
- Weaver AJ, Flannelly KJ, Flannelly LT, Oppenheimer JE. Collaboration between clergy and mental health professionals: a review of professional health care journals from 1980 through 1999. Counsel Val 2003; 47:162–171.
- Handzo GF, Flannelly KJ, Kudler T, et al. What do chaplains really do? II. Interventions in the New York chaplaincy study. J Health Care Chaplain 2008; 14:39–56.
- Kopacz MS, Pollitt MJ. Delivering chaplaincy services to veterans at increased risk of suicide. J Health Care Chaplain 2015; 21:1–13.
- Knox KL, Bossarte RM. Suicide prevention for veterans and active duty personnel. Am J Public Health 2012;102(suppl 1):S8–S9.
- Bryan CJ, Morrow CE, Etienne N, Ray-Sannerud B. Guilt, shame, and suicidal ideation in a military outpatient clinical sample. Depress Anxiety 2013; 30:55–60.
- Ganz D, Sher L. Educating medical professionals about suicide prevention among military veterans. Int J Adolesc Med Health 2013; 25:187–191.
- Hendin H, Haas AP. Suicide and guilt as manifestations of PTSD in Vietnam combat veterans. Am J Psychiatry 1991; 148:586–591.
- Maguen S, Metzler TJ, Bosch J, Marmar CR, Knight SJ, Neylan TC. Killing in combat may be independently associated with suicidal ideation. Depress Anxiety 2012; 29:918–923.
- Kopacz MS, McCarten JM, Vance CG, Connery AL. A preliminary study for exploring different sources of guilt in a sample of veterans who sought chaplaincy services. Mil Psychol 2015; 27:1–8.
- Buck CJ. 2013 ICD-9-CM for physicians. St. Louis, MO: Saunders; 2013.
- Angst F, Stassen HH, Clayton PJ, Angst J. Mortality of patients with mood disorders: follow-up over 34-38 years. J Affect Disord 2002; 68:167–181.
- Nierenberg AA, Gray SM, Grandin LD. Mood disorders and suicide. J Clin Psychiatry 2001; 62(suppl 25):27–30.
- Macneil CA, Hasty MK, Conus P, Berk M. Is diagnosis enough to guide interventions in mental health? Using case formulation in clinical practice. BMC Med 2012; 10:111.
- Kubany ES, Manke FP. Cognitive therapy for trauma-related guilt: conceptual bases and treatment outlines. Cogn Behav Pract 1995; 2:27–61.
- Resick PA, Nishith P, Weaver TL, Astin MC, Feuer CA. Comparison of cognitive-processing therapy with prolonged exposure and a waiting condition for the treatment of chronic posttraumatic stress disorder in female rape victims. J Consult Clin Psychol 2002; 70:867–879.
- Exline JJ, Yali AM, Sanderson WC. Guilt, discord, and alienation: the role of religious strain in depression and suicidality. J Clin Psychol 2000; 56:1481–1496.
- Musick MA. Multiple forms of forgiveness and their relationship with aging and religion. In: Schaie KW, Krause N, Booth A, editors. Religious Influences on Health and Well-being in the Elderly. New York, NY: Springer Publishing Company; 2004:202–214.
- Kaplan BH, Munroe-Blum H, Blazer DG. Religion, health and forgiveness: tradition and challenges. In: Levin JS, editor. Religion in Aging and Health. Theoretical Foundations and Methodological Frontiers. Thousand Oaks, CA: SAGE Focus Edition; 1994:52–77.
- Worthington EL Jr, Berry JW, Parrott L III. Unforgiveness, forgiveness, religion and health. In: Plante TG, Sherman AC, editors. Faith and Health. Psychological Perspectives. New York, NY: Guilford Press; 2001:107–138.
- Enright RD, Gassin EA, Wu GR. Forgiveness: a developmental view. J Moral Educ 1992; 21:99–114.
- Kopacz MS, O’Reilly LM, Van Inwagen CC, et al. Understanding the role of chaplains in veteran suicide prevention efforts: a discussion paper. SAGE Open 2014; 4:1–10.
- Young IT, Iglewicz A, Glorioso D, et al. Suicide bereavement and complicated grief. Dialogues Clin Neurosci 2012; 14:177–186.
- Wiklander M, Samuelsson M, Asberg M. Shame reactions after suicide attempt. Scand J Caring Sci 2003; 17:293–300.
Suicidal behavior is a major cause of morbidity and mortality in the United States,1 and active-duty and reserve military personnel and veterans account for nearly 18% of suicide deaths.2 By one estimate, as many as 22 veterans die by suicide each day.3 These rates are thought to be due to a higher incidence of mental illness in certain veteran populations relative to the general population.4–8 Consequently, a number of mental health services are available to veterans in a variety of clinical and community settings.
Chaplains and clinicians bring complementary skills and services to the problem of suicide risk among veterans. In particular, helping at-risk veterans deal with experiences of guilt is an opportunity for interdisciplinary collaboration. Available literature supports the potential utility of chaplaincy services in supporting at-risk veteran populations.9–15
But while most healthcare facilities have chaplains on staff, there is little information to guide any such collaboration. Further, healthcare providers appear to have a limited understanding of chaplaincy services, the “language” within which chaplains operate, or the roles chaplains play in healthcare settings.16
In the following discussion, using the example of experiences of guilt, we offer our insights and suggestions on how chaplaincy services may prove useful in alleviating this complex emotion in veterans at risk of suicide.
BENEFITS OF TALKING TO A CHAPLAIN
Collaboration between healthcare providers and pastoral care professionals has been suggested as a means of enhancing the treatment of patients with mental illness.17,18 Chaplains draw from a variety of faith traditions and are usually trained to respond to the needs of people from a variety of religious and spiritual backgrounds. They provide some non-faithbased services (eg, crisis intervention, life review, bereavement counseling) resembling those also provided in formal mental healthcare settings.19 By facilitating religious and spiritual coping and religious practice and responding to religious and spiritual needs, chaplains also offer a level of support not typically offered by formal mental healthcare providers.20
Veterans at risk of suicide sometimes look to pastoral care providers, particularly chaplains, for mental health support.9,10 Research on the effects of chaplaincy services on suicidal behavior is just beginning to emerge.15 Still, the US Department of Health and Human Services has recognized pastoral care services as having a “beneficial and therapeutic effect on the medical condition of a patient.”11
For example, in one study, hospital inpatients reported higher satisfaction if they had been visited by a chaplain.12 Chaplains help align treatment plans with patient values and wishes.13 In another study,14 patients undergoing coronary artery bypass grafting who were randomized to receive five visits from a chaplain were found to have a higher rate of positive religious coping (eg, forgiveness, letting go of anger). Positive religious coping has been correlated with lower levels of psychological stress and better mental health outcomes.
EXPERIENCING GUILT IS LINKED TO RISK OF SUICIDE
Suicidal behavior is complex, multifaceted, and linked to genetic, neurologic, psychological, social, and cultural factors.21
Assessing for and addressing certain complex emotions, such as guilt and shame, is an important part of suicide prevention efforts. Guilt is defined as a “controllable psychological state that is typically linked to a specific action or behavior, and which entails regret or remorse.”22
Guilt has been linked to risk of suicide in veterans.23–25 In one study, close to 75% of veterans who had thought about suicide said they frequently experienced guilt about having violated the precepts of their faith group, family, God, life, or the military.26
Such findings suggest that the sense of guilt experienced by some at-risk veterans may be grounded in a variety of contexts. For example, faith communities that place a strong emphasis on obedience to moral, ethical, and religious precepts may contribute to the experience of guilt unless balanced by a message of grace or favor from a benevolent God or deity. Without this balance, engaging in activities that are not fully sanctioned by one’s faith community may lead to guilt.
Families might also contribute to veterans’ experiences of guilt by placing unrealistic expectations on them. And the family environment may not be conducive to resolving feelings of guilt in veterans, harboring resentment and antipathies and making it very difficult to alleviate any ensuing sense of distress.
CLINICIAN’S ROLE IN ASSESSING GUILT
In addressing and assessing guilt in veterans at risk of suicide, clinicians should try to recognize the source and clinical implications of this emotion.
Recognize the source of guilt
Guilt may indicate a clinical disorder such as a mood disorder (eg, major depression).27 Mood disorders significantly increase the risk of suicidal behavior.28,29
Beyond diagnosing a clinical disorder, prescribing pharmacotherapy, and referring for mental healthcare services, recognizing the source of this emotion remains an important part of addressing a patient’s experience of guilt. Especially when associated with a clinical disorder, guilt is often irrational and excessive and does not appropriately reflect the experience or situation in question.
Case conceptualization, defined as “synthesizing the patient’s experience with relevant clinical theory and research,”30 can be used to understand the context in which the guilt-inducing action or behavior occurred and the veteran’s own interpretation of his or her actions. Understanding the source of the patient’s guilt could be used to plan treatment and resolve any underlying sense of distress.
As with other negative emotions, the affective component of guilt is often the result of cognitive distortions made as the person tries to make sense of what has occurred or to reconcile beliefs of right and wrong with the guilt-inducing behavior.31 The common cognitive errors associated with guilt include:
- Hindsight bias (a belief that one should have known what was going to happen as a result of one’s actions)
- Responsibility distortion (a belief that one’s actions directly caused an adverse event)
- Justification distortion (a belief that one’s actions were not justified by the situation)
- Wrongdoing distortion (a belief that one violated one’s own standards of right and wrong).31
Cognitive therapy to counter cognitive distortions
A variety of clinical options exist to help veterans manage and resolve guilt.
Cognitive therapy can counter the cognitive distortions that drive feelings of guilt. The goal is to guide patients to examine the evidence, process the event, and realize that their behavior was appropriate for the given situation. Cognitive processing therapy and prolonged exposure therapy have both been shown to decrease trauma-related guilt, though cognitive processing therapy was found to be better at decreasing guilt that arose from cognitive distortions.32
Guilt and suicide ideation have also been associated with a belief that one’s actions constituted an unforgivable sin.33 Responding to these inherently religious-spiritual cognitive distortions may be beyond the scope of expertise of many healthcare professionals. In such cases, it may be prudent to complement clinical services with pastoral care. Of course, pastoral care services should be provided only if the veteran voices a desire and readiness for them. The clinician and chaplain can then work together to provide coordinated care that meets the patient’s needs, addresses the experience of guilt, and alleviates the sense of distress.
A CHAPLAIN’S PERSPECTIVE ON GUILT
A prominent feature of pastoral practice is helping people, including at-risk veterans, resolve feelings of guilt regardless of the context in which the emotion is grounded (eg, religion, shame).10 For many people, guilt is an impenetrable barrier, preventing resolution of whatever experience led to a sense of inner turmoil.
Forgiveness
In the context of pastoral care, resolution of guilt is ordinarily tied to a need for forgiveness. Forgiveness can be grounded in religious and spiritual contexts in multiple ways.34 Examples include forgiving others (ie, forswearing resentment, anger, or hatred directed toward another person), being forgiven by God or another benevolent deity, and forgiving oneself for perceived personal transgressions.35 In some cases, divine forgiveness may be conditional on interpersonal forgiveness.36 Forgiveness is also sometimes seen as a remedy for sin and a way to restore moral order.37
Some people may initially think they can never be forgiven. With time and the weight of one’s experiences, this perceived impossibility of forgiveness can become so ingrained that it hardens into a core belief. Such a core belief sets up a vicious circle of thoughts and feelings, in which people, places, and events from the past are continuously brought forward into the present. Anger and resentment become the steady diet of a tormented self that feels forever powerless over experienced injustices. These relived experiences drive the person into a deep isolation in which the self becomes less human—a thing, an object. This experience of losing oneself proves excruciating and often leads to contemplation of suicide as a way to resolve the anguish.
Hope emerges
Pastoral care services provide a means to reframe one’s core beliefs, manage and resolve the burden of guilt, and uncover new motivation for living.
In spiritual direction, a practice within the discipline of pastoral care, the chaplain listens for such inner movements and encourages the person to give voice to them in his or her own words. No longer limited by a diminished, tormented self, the real self begins to relate to another reality that changes his or her identity, relieves the burden of guilt, and gives reason, purpose, and meaning to life.
Even with this opportunity for a new life, however, cognitive distortions based on a disproportional “faith-based prism” may persist. In such cases, clinicians and chaplains must work closely together to reframe old understandings of self and mistaken understandings of religion and spirituality into ones that continue to reinforce this newfound sense of hope.38
A VETERAN OF IRAQ WITH SUICIDE IDEATION
The following case illustrates how clinicians and chaplains may work together to facilitate the resolution of guilt.
A veteran who had served in Iraq entered the Domiciliary Care Program at a US Department of Veterans Affairs medical center. He reported problems with guilt, forgiveness, and suicide ideation. A clinical therapeutic program was prescribed after a psychological evaluation revealed that he was also struggling with depression and posttraumatic stress disorder.
His mental healthcare providers recognized the importance of incorporating a religious-spiritual component into the therapeutic plan and consulted with a chaplain to plan a suitable course of action. Specifically, the veteran reported feeling that he could not be forgiven for his military experiences, a feeling that was giving rise to alienation and isolation from the God of his faith tradition.
The chaplain helped the veteran reflect on his military experiences, giving him the perspective he needed to view his God as one who truly loves him. The veteran recognized instances in which he could have lost his life had it not been for others who intervened on his behalf at just the right time. This awareness caused him to think about his life differently, challenging him to reframe his relationship with God. Rather than dismissing these times and places as simple coincidences, the veteran began to consider the mystery behind them.
Over time and in keeping with the tenets of his faith tradition, the veteran stated that he was ultimately able to accept and receive God’s love and forgiveness. He now reports that these inner spiritual movements serve as a source of support during occasional relapses into emotional distress. These movements allow him to consider the mystery of his present life and its value based on his experience of his God’s love and forgiveness.
CARE FOR SUICIDE SURVIVORS
The experience of guilt is not limited to veterans. Those bereaved by suicide are also left to manage their own experiences of the loss and the ensuing complex emotions. Friends and loved ones of a suicide decedent may experience guilt, feeling that they somehow contributed to or failed to prevent the suicide. Such feelings of guilt are hypothesized to lower the threshold for suicidal behavior in the bereaved.39
Guilt and shame are also frequently encountered in survivors of nonfatal suicide attempts.40 Chaplaincy services might also prove useful for these individuals.
TIME IS EVERYTHING
Patients who may have an active psychopathology should have their clinical therapeutic needs attended to first. If the clinician deems pastoral care services an appropriate complementary support option, care should be taken to select a pastoral care provider who is adequately prepared for this role. Professional organizations (eg, the Association of Professional Chaplains) have established board-certification procedures, minimum education requirements, and requirements for supervised practical experience for chaplaincy certification.
Also, spiritual growth and development remain a core focus of pastoral practice. Clinicians should discontinue any collaboration with pastoral care providers who question an individual’s faith or commitment to his or her faith, or who promote thinking or actions that could be deleterious to the patient’s therapeutic trajectory.
SUMMING UP
We have presented our perspectives on how chaplaincy services can be used to complement clinical services in support of at-risk veterans struggling with experiences of guilt. Unfortunately, the current level of collaboration between chaplains and clinicians in support of these veterans is limited.20 Our hope is that clinicians managing at-risk patients will develop a greater awareness of how chaplaincy services might help alleviate experiences of guilt, and that such cases will serve as an opportunity for greater interdisciplinary collaboration, benefiting the at-risk veterans most in need of support.
Acknowledgment: Dr. Rasmussen was supported by the Office of Academic Affiliations, Advanced Fellowship Program in Mental Illness Research and Treatment, US Department of Veterans Affairs, VISN 2 Center of Excellence for Suicide Prevention.
REFERENCES
1. Centers for Disease Control and Prevention (CDC). Suicide and self-inflicted injury. www.cdc.gov/nchs/fastats/suicide.htm. Accessed November 12, 2015.
2. Centers for Disease Control and Prevention (CDC). National violent death reporting system (NVDRS). https://wisqars.cdc.gov:8443/nvdrs/nvdrsDisplay.jsp. Accessed November 12, 2015.
3. Kemp JE, Bossarte R. Suicide data report, 2012. www.sprc.org/library_resources/items/suicide-data-report-2012. Accessed November 12, 2015.
4. Bullman TA, Kang HK. The risk of suicide among wounded Vietnam veterans. Am J Public Health 1996; 86:662–667.
5. Kang HK, Bullman TA. Is there an epidemic of suicides among current and former US military personnel? Ann Epidemiol 2009; 19:757–760.
6. LeardMann CA, Powell TM, Smith TC, et al. Risk factors associated with suicide in current and former US military personnel. JAMA 2013; 310:496–506.
7. Mrnak-Meyer J, Tate SR, Tripp JC, Worley MJ, Jajodia A, McQuaid JR. Predictors of suicide-related hospitalization among US veterans receiving treatment for comorbid depression and substance dependence: who is the riskiest of the risky? Suicide Life Threat Behav 2011; 41:532–542.
8. Pietrzak RH, Russo AR, Ling Q, Southwick SM. Suicidal ideation in treatment-seeking veterans of Operations Enduring Freedom and Iraqi Freedom: the role of coping strategies, resilience, and social support. J Psychiatr Res 2011; 45:720–726.
9. Kopacz MS, McCarten JM, Pollitt MJ. VHA chaplaincy contact with veterans at increased risk of suicide. South Med J 2014; 107:661–664.
10. Kopacz MS. Providing pastoral care services in a clinical setting to veterans at-risk of suicide. J Relig Health 2013; 52:759–767.
11. Medicare program; payment for nursing and allied health education. Health Care Financing Administration (HCFA), HHS. Final rule. Fed Regist 2001; 66:3358–3376.
12. Marin DB, Sharma V, Sosunov E, Egorova N, Goldstein R, Handzo GF. Relationship between chaplain visits and patient satisfaction. J Health Care Chaplain 2015; 21:14–24.
13. Flannelly KJ, Emanuel LL, Handzo GF, Galek K, Silton NR, Carlson M. A national study of chaplaincy services and end-of-life outcomes. BMC Palliat Care 2012; 11:10.
14. Bay PS, Beckman D, Trippi J, Gunderman R, Terry C. The effect of pastoral care services on anxiety, depression, hope, religious coping, and religious problem solving styles: a randomized controlled study. J Relig Health 2008; 47:57–69.
15. Kopacz MS, Nieuwsma JA, Jackson GL, et al. Chaplains’ engagement with suicidality among their service users: findings from the VA/DoD Integrated Mental Health Strategy. Suicide Life Threat Behav 2015. [Epub ahead of print.]
16. Flannelly KJ, Galek K, Bucchino J, Handzo GF, Tannenbaum HP. Department directors’ perceptions of the roles and functions of hospital chaplains: a national survey. Hosp Top 2005; 83:19–27.
17. Farrell JL, Goebert DA. Collaboration between psychiatrists and clergy in recognizing and treating serious mental illness. Psychiatr Serv 2008; 59:437–440.
18. Weaver AJ, Flannelly KJ, Flannelly LT, Oppenheimer JE. Collaboration between clergy and mental health professionals: a review of professional health care journals from 1980 through 1999. Counsel Val 2003; 47:162–171.
19. Handzo GF, Flannelly KJ, Kudler T, et al. What do chaplains really do? II. Interventions in the New York chaplaincy study. J Health Care Chaplain 2008; 14:39–56.
20. Kopacz MS, Pollitt MJ. Delivering chaplaincy services to veterans at increased risk of suicide. J Health Care Chaplain 2015; 21:1–13.
21. Knox KL, Bossarte RM. Suicide prevention for veterans and active duty personnel. Am J Public Health 2012; 102(suppl 1):S8–S9.
22. Bryan CJ, Morrow CE, Etienne N, Ray-Sannerud B. Guilt, shame, and suicidal ideation in a military outpatient clinical sample. Depress Anxiety 2013; 30:55–60.
23. Ganz D, Sher L. Educating medical professionals about suicide prevention among military veterans. Int J Adolesc Med Health 2013; 25:187–191.
24. Hendin H, Haas AP. Suicide and guilt as manifestations of PTSD in Vietnam combat veterans. Am J Psychiatry 1991; 148:586–591.
25. Maguen S, Metzler TJ, Bosch J, Marmar CR, Knight SJ, Neylan TC. Killing in combat may be independently associated with suicidal ideation. Depress Anxiety 2012; 29:918–923.
26. Kopacz MS, McCarten JM, Vance CG, Connery AL. A preliminary study for exploring different sources of guilt in a sample of veterans who sought chaplaincy services. Mil Psychol 2015; 27:1–8.
27. Buck CJ. 2013 ICD-9-CM for physicians. St. Louis, MO: Saunders; 2013.
28. Angst F, Stassen HH, Clayton PJ, Angst J. Mortality of patients with mood disorders: follow-up over 34–38 years. J Affect Disord 2002; 68:167–181.
29. Nierenberg AA, Gray SM, Grandin LD. Mood disorders and suicide. J Clin Psychiatry 2001; 62(suppl 25):27–30.
30. Macneil CA, Hasty MK, Conus P, Berk M. Is diagnosis enough to guide interventions in mental health? Using case formulation in clinical practice. BMC Med 2012; 10:111.
31. Kubany ES, Manke FP. Cognitive therapy for trauma-related guilt: conceptual bases and treatment outlines. Cogn Behav Pract 1995; 2:27–61.
32. Resick PA, Nishith P, Weaver TL, Astin MC, Feuer CA. Comparison of cognitive-processing therapy with prolonged exposure and a waiting condition for the treatment of chronic posttraumatic stress disorder in female rape victims. J Consult Clin Psychol 2002; 70:867–879.
33. Exline JJ, Yali AM, Sanderson WC. Guilt, discord, and alienation: the role of religious strain in depression and suicidality. J Clin Psychol 2000; 56:1481–1496.
34. Musick MA. Multiple forms of forgiveness and their relationship with aging and religion. In: Schaie KW, Krause N, Booth A, editors. Religious Influences on Health and Well-being in the Elderly. New York, NY: Springer Publishing Company; 2004:202–214.
35. Kaplan BH, Munroe-Blum H, Blazer DG. Religion, health and forgiveness: tradition and challenges. In: Levin JS, editor. Religion in Aging and Health. Theoretical Foundations and Methodological Frontiers. Thousand Oaks, CA: SAGE Focus Edition; 1994:52–77.
36. Worthington EL Jr, Berry JW, Parrott L III. Unforgiveness, forgiveness, religion and health. In: Plante TG, Sherman AC, editors. Faith and Health. Psychological Perspectives. New York, NY: Guilford Press; 2001:107–138.
37. Enright RD, Gassin EA, Wu GR. Forgiveness: a developmental view. J Moral Educ 1992; 21:99–114.
38. Kopacz MS, O’Reilly LM, Van Inwagen CC, et al. Understanding the role of chaplains in veteran suicide prevention efforts: a discussion paper. SAGE Open 2014; 4:1–10.
39. Young IT, Iglewicz A, Glorioso D, et al. Suicide bereavement and complicated grief. Dialogues Clin Neurosci 2012; 14:177–186.
40. Wiklander M, Samuelsson M, Asberg M. Shame reactions after suicide attempt. Scand J Caring Sci 2003; 17:293–300.