Ultrabrief Delirium Assessments
Delirium is a form of acute brain failure that affects up to 64% of older hospitalized patients and is associated with a multitude of adverse outcomes.[1] Healthcare providers, regardless of clinical setting, do not identify delirium in approximately 75% of cases.[2, 3] The paucity of brief and simple delirium assessment tools has been a barrier to improving delirium recognition.
To address this unmet need, several ultrabrief (<30 seconds) delirium assessment tools have recently been studied. In this issue of the Journal of Hospital Medicine, Fick et al. evaluated 20 individual components of the 3-minute diagnostic interview for delirium using the Confusion Assessment Method (3D-CAM), which was recently validated in older hospitalized patients.[4, 5] They observed that the best-performing single-item delirium assessment was the months of the year backward (MOTYB) task, recited from December to January. This task assesses for inattention, a cardinal feature of delirium. Using a cutoff of 1 or more errors, the MOTYB was 83% sensitive and 69% specific for delirium.[5] Adding a second item, naming the day of the week, increased sensitivity to 93% with similar specificity (64%). This supports research by O'Regan et al., who also examined the MOTYB but defined a positive screen as any error in reciting the months backward from December to July. They observed a sensitivity of 84% and a specificity of 90% in older hospitalized patients.[6]
The assessment of arousal, another feature of delirium, has also garnered significant interest as an ultrabrief screening method. Arousal is the patient's responsiveness to the environment and can be assessed during routine clinical care. Fick et al. observed that impaired arousal on the 3D-CAM was only 19% sensitive for delirium, in contrast to others who have reported sensitivities of 64% to 84%.[7, 8, 9] The difference in sensitivity may in part be explained by the method used to assess arousal. The 3D-CAM asks, "Was the patient sleepy/stuporous?" or "Was the patient hypervigilant?" Previous studies used the Richmond Agitation-Sedation Scale (RASS), an arousal scale based on eye contact and physical behaviors that rates patients from −5 (coma) to +4 (combative).[10] Therefore, it is important to consider the method of arousal assessment when using this feature for delirium screening.
These ultrabrief delirium assessments would be even more clinically useful if they identified patients at high risk for adverse outcomes. In this same journal issue, 2 studies evaluated the prognostic ability of several ultrabrief delirium assessments. Zadravecz et al. observed that an abnormal RASS was a moderately good predictor of 24-hour mortality, with an area under the receiver operating characteristic curve of 0.82.[11] Yevchak et al. observed that an abnormal RASS or MOTYB was associated with longer hospital lengths of stay, increased in-hospital mortality, and need for skilled nursing care.[12]
Viewed as a whole, these studies represent a significant advancement in delirium measurement and have the potential to improve this quality-of-care issue. However, uncertainties still exist. (1) Can these ultrabrief delirium assessments be used as standalone assessments? Based upon current data, these assessments carry significant false-negative and false-positive rates. The effect of such misclassification on patient outcomes and healthcare utilization needs to be clarified. Because of this concern, Fick et al. recommended performing a more specific delirium assessment in those who have a positive MOTYB screen.[5] (2) What is the optimal cutoff for the MOTYB task, and does this cutoff vary across patient populations? The optimal cutoff will depend on whether a more sensitive test (lower error threshold) or a more specific test (higher error threshold) is desired, and it may also depend on the patient population (eg, demented versus nondemented). (3) Most important to practicing hospitalists and patients, will introducing these ultrabrief delirium assessments improve delirium recognition and improve patient outcomes? The impetus for widespread implementation would be strengthened if healthcare providers successfully applied these assessments in clinical practice and subsequently improved outcomes.
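The trade-off behind question (2) can be made concrete with a small worked example. The error counts below are invented purely for illustration; they are not data from any of the studies discussed. The sketch shows how raising the MOTYB error-count cutoff lowers sensitivity while raising specificity:

```python
# Hypothetical MOTYB error counts for two groups of patients.
# These numbers are illustrative only, not data from the cited studies.
delirious = [2, 4, 1, 6, 3, 5, 0, 7]      # errors made by patients with delirium
not_delirious = [0, 1, 0, 2, 0, 1, 3, 0]  # errors made by patients without delirium

def sens_spec(cutoff):
    """A screen is positive when the error count is >= cutoff.
    Returns (sensitivity, specificity)."""
    tp = sum(e >= cutoff for e in delirious)       # true positives
    fn = len(delirious) - tp                       # false negatives
    tn = sum(e < cutoff for e in not_delirious)    # true negatives
    fp = len(not_delirious) - tn                   # false positives
    return tp / (tp + fn), tn / (tn + fp)

for cutoff in (1, 2, 3):
    sens, spec = sens_spec(cutoff)
    print(f"cutoff >= {cutoff} errors: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

With these made-up counts, a cutoff of 1 or more errors catches nearly all delirious patients but flags many without delirium, while stricter cutoffs reverse the balance, which is exactly the tension between screening sensitivity and specificity described above.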
In conclusion, the MOTYB and the assessment of arousal may be reasonable alternatives to more conventional delirium screening, especially in clinical environments with significant time constraints. However, additional research is needed to refine these instruments for the clinical environments in which they will be used and to determine how they affect clinical care and patient outcomes.
Disclosures
Dr. Han is supported by the National Heart, Lung, and Blood Institute (K12HL109019). Dr. Vasilevskis is supported by the National Institutes of Health (K23AG040157) and the Geriatric Research, Education and Clinical Center (GRECC). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or the Department of Veterans Affairs. The authors report no conflicts of interest.
- Delirium in elderly people. Lancet. 2014;383(9920):911–922.
- Detection of delirium in the acute hospital. Age Ageing. 2010;39(1):131–135.
- Delirium in older emergency department patients: recognition, risk factors, and psychomotor subtypes. Acad Emerg Med. 2009;16(3):193–200.
- 3D-CAM: derivation and validation of a 3-minute diagnostic interview for CAM-defined delirium: a cross-sectional diagnostic test study. Ann Intern Med. 2014;161(8):554–561.
- Preliminary development of an ultrabrief two-item bedside test for delirium. J Hosp Med. 2015;10(00):000–000.
- Attention! A good bedside test for delirium? J Neurol Neurosurg Psychiatry. 2014;85(10):1122–1131.
- Serial administration of a modified Richmond Agitation and Sedation Scale for delirium screening. J Hosp Med. 2012;7(5):450–453.
- Abnormal level of arousal as a predictor of delirium and inattention: an exploratory study. Am J Geriatr Psychiatry. 2013;21(12):1244–1253.
- The diagnostic performance of the Richmond Agitation Sedation Scale for detecting delirium in older emergency department patients. Acad Emerg Med. 2015;22(7):878–882.
- The Richmond Agitation-Sedation Scale: validity and reliability in adult intensive care unit patients. Am J Respir Crit Care Med. 2002;166(10):1338–1344.
- Comparison of mental status scales for predicting mortality on the general wards. J Hosp Med. 2015;10(10):658–663.
- The association between an ultrabrief cognitive screening in older adults and hospital outcomes. J Hosp Med. 2015;10(10):651–657.
Things We Do for No Reason
In this issue of the Journal of Hospital Medicine, we introduce a new recurring feature, Choosing Wisely: Things We Do for No Reason. The series is based on a talk I have delivered for the past 4 years at the annual national meeting of the Society of Hospital Medicine, in which I highlight 4 diagnostic tests, therapies, or other clinical practices that are commonly performed even though they are of low value to our inpatients.
There are many reasons hospitalists order unnecessary tests or treatments, or employ unhelpful clinical practices. Unnecessary testing may occur when we are not familiar with the test itself: the actual costs of the test, its operating characteristics, or the evidence supporting its usefulness in specific situations. Some tests are ordered unnecessarily because we cannot retrieve usable results from a different hospital or even our own electronic medical records. We may order tests or treatments due to patient expectations, a perceived need to practice defensively, or economic incentives.
Finally, we may order tests simply because of our uncertainty in the absence of data, or because they are traditional practices ("the way we've always done it"). Physicians often order tests and treatments and institute clinical practices learned in residency or fellowship training.[1, 2] Local norms and practices influence physician behavior.
We created Things We Do for No Reason (TWDFNR) as a platform for provocative discussions of practices that have become common parts of hospital care but have limited supporting evidence, or even have evidence refuting or justifiably challenging their value. Each article in TWDFNR will describe why the test, treatment, or other clinical practice is commonly employed, why it may not be of high value, in what circumstances it may actually be valuable, and what conclusions can be drawn from the evidence provided. TWDFNR pieces are not systematic reviews or meta‐analyses and do not represent black and white conclusions or clinical practice standards; they are meant as a starting place for research and active discussions among hospitalists and patients.
In many respects, the Choosing Wisely: Things We Do for No Reason series is an extension of the Choosing Wisely campaign created by the American Board of Internal Medicine Foundation. Like Choosing Wisely, we are focusing on individual tests, treatments, and other clinical practices that are not beneficial and are potentially harmful to patients. Practices discussed may not cause significant physical or financial harm at the time they are used, but they may have significant downstream effects.
The Choosing Wisely campaign has brilliantly identified 5 important hospital medicine low‐value practices, and we hope to identify many more. We hope this series will serve as a grassroots effort to uncover more Choosing Wisely‐type practices. As institutions create their own high‐value care committees, the Choosing Wisely: Things We Do for No Reason series can provide possible agenda items, or provide the opportunity for sites to carry out analyses of their own practices to see whether any of the TWDFNR topics provide local opportunities for implementing higher‐value practices.
Although we do not believe that reducing the low-value practices that will appear in TWDFNR will, alone, solve our wasteful practices, we hope that highlighting them will remind individuals, institutions, and systems that targeting low-value practices is a responsibility that we all must embrace. We accept that not everyone will agree that the practices we present are low value, but the conversation is important to have. We invite you to take part in the Choosing Wisely: Things We Do for No Reason conversation. Let us know whether you think the practices highlighted are low value or whether you disagree with the conclusions. We welcome unsolicited proposals for series topics submitted as a 500-word précis. Send us your précis or ideas on low-value adult or pediatric patient practices that we should highlight in this series by emailing us at twdfnr@hospitalmedicine.org.
Disclosure: Nothing to report.
- The association between residency training and internists' ability to practice conservatively. JAMA Intern Med. 2014;174:1640–1648.
- Spending patterns in region of residency training and subsequent expenditures for care provided by practicing physicians for Medicare beneficiaries. JAMA. 2014;312:2385–2393.
In this issue of the Journal of Hospital Medicine, we introduce a new recurring feature, Choosing Wisely: Things We Do for No Reason. The series is based on a talk I have delivered for the past 4 years at the annual national meeting of the Society of Hospital Medicine, in which I highlight 4 diagnostic tests, therapies, or other clinical practices that are commonly performed even though they are of low value to our inpatients.
There are many reasons hospitalists order unnecessary tests or treatments, or employ unhelpful clinical practices. Unnecessary testing may occur when we are not familiar with the test itselfthe actual costs of the test, the operating characteristics of the test, or the evidence supporting its usefulness in specific situations. Some tests are ordered unnecessarily because we cannot retrieve usable results from a different hospital or even our own electronic medical records. We may order tests or treatments due to patient expectations, a perceived need to practice defensively, or economic incentives.
Finally, we may simply order tests because of our uncertainty in the absence of data or simply because they are traditional practices (the way we've always done it). Physicians often order tests and treatments and institute clinical practices learned in residency or fellowship training.[1, 2] Local norms and practices influence physician behavior.
We created Things We Do for No Reason (TWDFNR) as a platform for provocative discussions of practices that have become common parts of hospital care but have limited supporting evidence, or even have evidence refuting or justifiably challenging their value. Each article in TWDFNR will describe why the test, treatment, or other clinical practice is commonly employed, why it may not be of high value, in what circumstances it may actually be valuable, and what conclusions can be drawn from the evidence provided. TWDFNR pieces are not systematic reviews or meta‐analyses and do not represent black and white conclusions or clinical practice standards; they are meant as a starting place for research and active discussions among hospitalists and patients.
In many respects, the Choosing Wisely: Things We Do for No Reason series is an extension of the Choosing Wisely campaign created by the American Board of Internal Medicine Foundation. Like Choosing Wisely, we are focusing on individual tests, treatments, and other clinical practices that are not beneficial and are potentially harmful to patients. Practices discussed may not cause significant physical or financial harm at the time they are used, but they may have significant downstream effects.
The Choosing Wisely campaign has brilliantly identified 5 important hospital medicine low‐value practices, and we hope to identify many more. We hope this series will serve as a grassroots effort to uncover more Choosing Wisely‐type practices. As institutions create their own high‐value care committees, the Choosing Wisely: Things We Do for No Reason series can provide possible agenda items, or provide the opportunity for sites to carry out analyses of their own practices to see whether any of the TWDFNR topics provide local opportunities for implementing higher‐value practices.
Although we do not believe that reducing the low‐value practices that will appear in TWDFNR will, alone, solve our wasteful practices, we hope that highlighting them will remind individuals, institutions, and systems that targeting low‐value practices is a responsibility that we all must embrace. We accept that not everyone will agree that the practices we present are low value, but the conversation is important to have. We invite you to take part in the Choosing Wisely: Things We Do for No Reason conversation. Let us know whether you think the practices highlighted are low value or whether you disagree with the conclusions. We welcome unsolicited proposals for series topics submitted as a 500‐word prcis. Send us your prcis or ideas on low‐value adult or pediatric patient practices that we should highlight in this series by emailing us at twdfnr@hospitalmedicine.org.
Disclosure: Nothing to report.
In this issue of the Journal of Hospital Medicine, we introduce a new recurring feature, Choosing Wisely: Things We Do for No Reason. The series is based on a talk I have delivered for the past 4 years at the annual national meeting of the Society of Hospital Medicine, in which I highlight 4 diagnostic tests, therapies, or other clinical practices that are commonly performed even though they are of low value to our inpatients.
There are many reasons hospitalists order unnecessary tests or treatments, or employ unhelpful clinical practices. Unnecessary testing may occur when we are not familiar with the test itselfthe actual costs of the test, the operating characteristics of the test, or the evidence supporting its usefulness in specific situations. Some tests are ordered unnecessarily because we cannot retrieve usable results from a different hospital or even our own electronic medical records. We may order tests or treatments due to patient expectations, a perceived need to practice defensively, or economic incentives.
Finally, we may simply order tests because of our uncertainty in the absence of data or simply because they are traditional practices (the way we've always done it). Physicians often order tests and treatments and institute clinical practices learned in residency or fellowship training.[1, 2] Local norms and practices influence physician behavior.
We created Things We Do for No Reason (TWDFNR) as a platform for provocative discussions of practices that have become common parts of hospital care but have limited supporting evidence, or even have evidence refuting or justifiably challenging their value. Each article in TWDFNR will describe why the test, treatment, or other clinical practice is commonly employed, why it may not be of high value, in what circumstances it may actually be valuable, and what conclusions can be drawn from the evidence provided. TWDFNR pieces are not systematic reviews or meta‐analyses and do not represent black and white conclusions or clinical practice standards; they are meant as a starting place for research and active discussions among hospitalists and patients.
In many respects, the Choosing Wisely: Things We Do for No Reason series is an extension of the Choosing Wisely campaign created by the American Board of Internal Medicine Foundation. Like Choosing Wisely, we are focusing on individual tests, treatments, and other clinical practices that are not beneficial and are potentially harmful to patients. Practices discussed may not cause significant physical or financial harm at the time they are used, but they may have significant downstream effects.
The Choosing Wisely campaign has brilliantly identified 5 important hospital medicine low‐value practices, and we hope to identify many more. We hope this series will serve as a grassroots effort to uncover more Choosing Wisely‐type practices. As institutions create their own high‐value care committees, the Choosing Wisely: Things We Do for No Reason series can provide possible agenda items, or provide the opportunity for sites to carry out analyses of their own practices to see whether any of the TWDFNR topics provide local opportunities for implementing higher‐value practices.
Although we do not believe that reducing the low‐value practices that will appear in TWDFNR will, alone, solve our wasteful practices, we hope that highlighting them will remind individuals, institutions, and systems that targeting low‐value practices is a responsibility that we all must embrace. We accept that not everyone will agree that the practices we present are low value, but the conversation is important to have. We invite you to take part in the Choosing Wisely: Things We Do for No Reason conversation. Let us know whether you think the practices highlighted are low value or whether you disagree with the conclusions. We welcome unsolicited proposals for series topics submitted as a 500‐word précis. Send us your précis or ideas on low‐value adult or pediatric patient practices that we should highlight in this series by emailing us at twdfnr@hospitalmedicine.org.
Disclosure: Nothing to report.
- The association between residency training and internists' ability to practice conservatively. JAMA Intern Med. 2014;174:1640–1648.
- Spending patterns in region of residency training and subsequent expenditures for care provided by practicing physicians for Medicare beneficiaries. JAMA. 2014;312:2385–2393.
Corticosteroid use far outpaced minoxidil for alopecia areata
Alopecia areata sends “hundreds of thousands” of patients to the doctor every year in the United States, and six in ten of those visits end with a corticosteroid prescription, investigators reported in the Journal of Drugs in Dermatology.
In contrast, “minoxidil appears either underreported or underutilized in this population of patients, which suggests the need to educate both dermatologists and patients on the potential usefulness of this medication in alopecia areata,” wrote Michael Farhangian and his associates at Wake Forest University in Winston-Salem, N.C.
About 2% of individuals develop alopecia areata during their lives, but there are no consensus guidelines for the disease in the United States. To better understand treatment patterns here, the investigators analyzed data on about 2.6 million outpatient visits for alopecia areata between 2001 and 2010. The data came from two national ambulatory health care surveys (J Drugs Dermatol. 2015;14[9]:1012-14).
Patients with alopecia areata most often sought care from dermatologists (85%), the researchers reported. Providers prescribed topical and injected corticosteroids far more often (61%) than other drugs, such as minoxidil (5.9%), topical tacrolimus (5.7%), topical retinoid (3.3%), oral steroids (1.8%), or anthralin (1.8%).
The British Association of Dermatologists recommends corticosteroids for localized alopecia areata, but long-term use can lead to skin atrophy, hypopigmentation, and telangiectasia, the researchers warned. “This risk may be increased in patients who are prescribed both topical and injected corticosteroids, as was observed in 9.9% of patients,” they added.
Frequencies of minoxidil and tacrolimus use were nearly identical even though tacrolimus has been found ineffective in alopecia areata, according to the researchers.
“Patients may be hesitant to use minoxidil since it is only FDA-approved for androgenetic alopecia and not for alopecia areata,” they wrote. Minoxidil also is available over-the-counter, which could explain its scarcity in the dataset, they added.
Galderma Laboratories helped fund the work through an unrestricted educational grant. Mr. Farhangian declared no competing interests. Senior author Dr. Steven Feldman reported relationships with Galderma, Janssen, Taro, Abbott Labs, and a number of other pharmaceutical companies. Dr. Feldman also reported holding stock in Causa Research and Medical Quality Enhancement Corporation. Another coauthor reported relationships with several pharmaceutical companies.
FROM JOURNAL OF DRUGS IN DERMATOLOGY
Key clinical point: Topical and injected corticosteroids were by far the most commonly recorded treatment for alopecia areata in the United States.
Major finding: Providers prescribed topical or injected corticosteroids during 61% of visits – far more often than minoxidil (5.9%), topical tacrolimus (5.7%), or other drugs.
Data source: Retrospective analysis of about 2.6 million visits for alopecia areata in the United States between 2001 and 2010.
Disclosures: Galderma Laboratories helped fund the work through an unrestricted educational grant. Mr. Farhangian declared no competing interests. Senior author Dr. Steven Feldman reported relationships with Galderma, Janssen, Taro, Abbott Labs, and a number of other pharmaceutical companies. Dr. Feldman also reported holding stock in Causa Research and Medical Quality Enhancement Corporation. Another coauthor reported relationships with several pharmaceutical companies.
New ACR/EULAR gout classification criteria offer better sensitivity, specificity
The presence of monosodium urate monohydrate crystals in a symptomatic joint, bursa, or tophus is sufficient to classify a patient as having gout, according to new gout classification criteria from the American College of Rheumatology and the European League Against Rheumatism.
When symptomatic urate crystals are missing, other signs and symptoms are considered and scored; a score of 8 or more constitutes gout. “The threshold chosen for this classification criteria set yielded the best combination of sensitivity and specificity,” at 92% and 89%, respectively, and outperformed previous classification schemes, said the authors, led by Dr. Tuhina Neogi of Boston University.
To qualify for gout, patients must first have at least one episode of swelling, pain, or tenderness in a peripheral joint or bursa. They get a score of 1 if that happens in the ankle or midfoot, and a score of 2 if it involves a metatarsophalangeal joint. If the affected joint is red and too painful to touch and use, patients get an additional score of 3. Chalklike drainage from a subcutaneous nodule in a gout-prone area, and serum urate at or above 10 mg/dL, both get a score of 4. Imaging of one or more gout erosions in the hands or feet also gets a score of 4 (Arthritis Rheumatol. 2015 Oct;67[10]:2557-68. doi: 10.1002/art.39254).
Overall, the criteria incorporate clinical, laboratory, and imaging evidence. A web-based calculator makes the scoring easy.
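The point values the article lists can be sketched as a toy scoring function. This is only a partial illustration of the subset of items described above (the function and parameter names are invented here, and the full ACR/EULAR instrument includes additional items, including negative points in some circumstances), not a reimplementation of the official calculator:

```python
def gout_score(mtp_joint=False, ankle_or_midfoot=False,
               red_tender_unusable=False, chalky_tophus=False,
               serum_urate_ge_10=False, imaging_erosion=False):
    """Partial gout score using only the items mentioned in the article.

    Assumes the joint-location item is scored once at its highest
    applicable value (MTP joint outranks ankle/midfoot).
    """
    score = 0
    if mtp_joint:
        score += 2          # episode involving a metatarsophalangeal joint
    elif ankle_or_midfoot:
        score += 1          # episode involving the ankle or midfoot
    if red_tender_unusable:
        score += 3          # joint red and too painful to touch and use
    if chalky_tophus:
        score += 4          # chalklike drainage from a subcutaneous nodule
    if serum_urate_ge_10:
        score += 4          # serum urate at or above 10 mg/dL
    if imaging_erosion:
        score += 4          # gout erosion on hand or foot imaging
    return score


def meets_threshold(score):
    # A score of 8 or more constitutes gout under the criteria.
    return score >= 8
```

For example, an MTP episode with a red, unusable joint and serum urate of 10 mg/dL scores 2 + 3 + 4 = 9 and crosses the threshold, whereas an ankle episode alone (score 1) does not.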
“Although MSU [monosodium urate monohydrate] crystal results are extremely helpful when positive, they are not a feasible universal standard, particularly because many potential study subjects are likely to be recruited from nonrheumatology settings. We aimed to develop a new set of criteria that could be flexible enough to enable accurate classification of gout regardless of MSU status,” the authors said.
“This classification criteria set will enable a standardized approach to identifying a relatively homogeneous group of individuals who have the clinical entity of gout for enrollment into studies. The criteria permit characterization of an individual as having gout regardless of whether he or she is currently experiencing an acute symptomatic episode and regardless of any comorbidities,” they said.
The hope of the work is to facilitate a better understanding of gout and speed development of new trials and treatments. The criteria will “help to ensure that patients with the same disease are being evaluated, which will enhance our ability to study the disease, including performing outcomes studies and clinical trials,” Dr. Neogi said in a written statement.
Previous gout classification criteria were developed when advanced imaging was not available. “Additionally, the increasing prevalence of gout, advances in therapeutics, and the development of international research collaborations to understand the impact, mechanisms, and optimal treatment of this condition emphasize the need for accurate and uniform classification criteria for gout,” according to the statement.
The new criteria are based on a systematic review of the literature on advanced gout imaging; a diagnostic study in which the presence of MSU crystals in synovial fluid or tophi was the gold standard; a ranking exercise of paper patient cases; and a multicriterion decision analysis exercise. The criteria were then validated in 330 patients.
The work was supported in part by the National Institutes of Health, the Agency for Healthcare Research and Quality, and Arthritis New Zealand. Numerous authors reported receiving consulting fees, speaking fees, and/or honoraria from companies that market drugs or specialty foods for gout.
FROM ARTHRITIS & RHEUMATOLOGY
Joint Commission launches educational campaign on antibiotic use
The Joint Commission has introduced a multimedia campaign to educate the public on the health risks associated with antibiotic overuse, the group announced Sept. 14.
The Speak Up: Antibiotics program aims to educate consumers on appropriate use of antibiotics, and includes resources to help patients determine which illnesses may or may not need antibiotic treatment. The website includes an infographic, podcast, and animated video.
The initiative is part of the Speak Up series, a program that encourages patients to become more active in their medical decisions through self-education and advocacy.
About 2 million people in the United States become infected with antibiotic-resistant bacteria each year, the Joint Commission reported.
“Antibiotics also can kill good bacteria in the body, potentially leading to other problems such as diarrhea or yeast infections,” the organization said in a statement. “As a result, antibiotic overuse has become a critical health and patient safety concern, especially in young children and seniors, who are at higher risk for illness.”
Psoriasis patients more likely to have type D personalities
Type D personality was significantly more common in patients with moderate to severe psoriasis, compared with a healthy control group, according to Dr. Alejandro Molina-Leyva of Hospital Torrecardenas, Almeria, Spain, and his associates.
People with type D, or distressed, personality tend to be more worried and irritable, and tend to display more negative emotions than do others. Of the 90 patients with moderate to severe psoriasis included in the study, 39% had type D personality, compared with 24% of the 82 members of the control group. The odds ratio for type D personality among psoriasis patients versus controls was 2.1.
Psoriasis patients with type D personalities had significantly worse general, sexual, and psoriasis-related health-related quality of life, compared with psoriasis patients without type D personality. In addition, type D personality psoriasis patients were much more likely to experience anxiety or depression than were healthy people with type D personality, with an OR of 3.2.
“It may be that the higher prevalence of type D personality in moderate to severe psoriasis is, at least in part, the result of accumulated psychic damage over years of evolution of the disease. It is important to conduct prospective studies with incident cases of psoriasis to clarify the relationship between type D personality and psoriasis,” the investigators noted.
Find the full study here in the Journal of the European Academy of Dermatology and Venereology (doi: 10.1111/jdv.12960).
FROM THE JOURNAL OF THE EUROPEAN ACADEMY OF DERMATOLOGY AND VENEREOLOGY
7 Hours of Sleep Linked to Lower Heart Disease Risk
Too little sleep, or poor-quality sleep, may be linked to early markers of heart disease in asymptomatic healthy adults, a new study from South Korea suggests.
More than 47,000 men and women completed a sleep questionnaire and underwent assessments of coronary artery calcium and plaque as well as brachial-ankle pulse wave velocity (PWV).
Participants' average sleep duration was 6.4 hours per night, and about 84% said their sleep quality was "good," according to Dr. Chan-Won Kim of Kangbuk Samsung Hospital of Sungkyunkwan University School of Medicine in Seoul, South Korea, and colleagues.
The researchers considered those who got five hours or less per night to be "short" sleepers, and those who got nine or more hours to be "long" sleepers.
Short sleepers had 50% more coronary artery calcium than those who slept for seven hours per night, according to the results in Arteriosclerosis, Thrombosis and Vascular Biology. Long sleepers had 70% more calcium than those who slept seven hours.
Those who reported poor sleep quality also tended to have more coronary calcium and more arterial stiffness.
In a 2013 study, people who tended to get less than six hours of sleep nightly were more likely to have high blood pressure, high cholesterol, or diabetes, and to be obese.
"Adults with poor sleep quality have stiffer arteries than those who sleep seven hours a day or had good sleep quality," co-lead author Dr. Yoosoo Chang of the Center for Cohort Studies at Kangbuk Samsung Hospital said in a statement accompanying the study. "Overall, we saw the lowest levels of vascular disease in adults sleeping seven hours a day and reporting good sleep quality."
Short sleepers were more likely than others to be older, to have depression or type 2 diabetes, or to be smokers.
"The associations of too short or too long sleep duration and of poor sleep quality with early indicators of heart disease, such as coronary calcium and arterial stiffness, provides strong support to the increasing body of evidence that links inadequate sleep with an increased risk of heart attacks," Kim said by email.
"It is still not clear if inadequate sleep is the cause or the consequence of ill health," but good sleep hygiene, including avoiding electronic media at bedtime, should be part of a healthy lifestyle, Kim said.
"For doctors, it can be helpful to evaluate sleep duration and sleep quality when assessing the health status of their patients," Kim said.
Too little sleep, or poor-quality sleep, may be linked to early markers of heart disease in asymptomatic healthy adults, a new study from South Korea suggests.
More than 47,000 men and women completed a sleep questionnaire and underwent assessments of coronary artery calcium and plaque as well as brachial-ankle pulse wave velocity (PWV).
Participants' average sleep duration was 6.4 hours per night, and about 84 percent said their sleep quality was "good," according to Dr. Chan-Won Kim of Kangbuk Samsung Hospital of Sungkyunkwan University School of Medicine in Seoul, South Korea and colleagues.
The researchers considered those who got five hours or less per night to be "short" sleepers, and those who got nine or more hours to be "long" sleepers.
Short sleepers had 50% more coronary artery calcium than those who slept for seven hours per night, according to the results in Arteriosclerosis, Thrombosis and Vascular Biology. Long sleepers had 70% more calcium than those who slept seven hours.
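The grouping rule and the reported calcium comparison can be sketched as a simple classification (a hypothetical illustration of the cutoffs described above, not the study's own code):

```python
def sleep_group(hours: float) -> str:
    """Classify nightly sleep duration using the study's reported cutoffs:
    five hours or less = "short", nine or more = "long"."""
    if hours <= 5:
        return "short"
    if hours >= 9:
        return "long"
    return "intermediate"

# Relative coronary artery calcium reported versus seven-hour sleepers:
# short sleepers had 1.5x as much, long sleepers 1.7x as much.
relative_calcium = {"short": 1.5, "intermediate": 1.0, "long": 1.7}
```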
Those who reported poor sleep quality also tended to have more coronary calcium and more arterial stiffness.
In a 2013 study, people who tended to get less than six hours of sleep nightly were more likely to have high blood pressure, high cholesterol, and diabetes, and to be obese.
"Adults with poor sleep quality have stiffer arteries than those who sleep seven hours a day or had good sleep quality," co-lead author Dr. Yoosoo Chang of the Center for Cohort Studies at Kangbuk Samsung Hospital said in a statement accompanying the study. "Overall, we saw the lowest levels of vascular disease in adults sleeping seven hours a day and reporting good sleep quality."
Short sleepers were more likely than others to be older, to have depression or type 2 diabetes, or to be smokers.
"The associations of too short or too long sleep duration and of poor sleep quality with early indicators of heart disease, such as coronary calcium and arterial stiffness, provides strong support to the increasing body of evidence that links inadequate sleep with an increased risk of heart attacks," Kim said by email.
"It is still not clear if inadequate sleep is the cause or the consequence of ill health," but good sleep hygiene, including avoiding electronic media at bedtime, should be part of a healthy lifestyle, Kim said.
"For doctors, it can be helpful to evaluate sleep duration and sleep quality when assessing the health status of their patients," Kim said.
Gene linked to aggressive AML
The gene FOXC1 is associated with aggressive acute myeloid leukemia (AML), according to research published in Cancer Cell.
Researchers said tissue-inappropriate derepression of FOXC1 has functional consequences and prognostic significance in AML.
They found evidence suggesting that FOXC1 enhances clonogenic potential, helps block monocyte/macrophage differentiation, accelerates leukemia onset in mice, and leads to inferior survival in AML patients.
“This is an important finding which helps us understand how acute myeloid leukemia develops and why some cases of AML are more aggressive than others,” said study author Tim Somervaille, MBBS, PhD, of The University of Manchester in the UK.
“Here, instead of being faulty or mutated, this normal gene is turned on in the wrong place at the wrong time, which makes the cancer grow more rapidly. There are certain situations where this gene is necessary, as in the development of the eye and skeleton before birth, but when it’s switched on in the wrong tissue, it causes more aggressive forms of leukemia.”
Dr Somervaille and his colleagues said FOXC1 is expressed in at least 20% of human AML cases but not in normal hematopoietic populations.
The researchers analyzed levels of transcription factor genes in data from published studies to identify transcription regulators expressed in human AML hematopoietic stem and progenitor cells (HSPCs) but not normal HSPCs. In these studies, FOXC1 was among the genes that were most highly upregulated in AML HSPCs.
Further investigation revealed that FOXC1 expression is associated with mutations in NPM1 and t(6;9) but no other recurring mutations in AML.
When Dr Somervaille and his colleagues conducted experiments with human AML cells, they found that FOXC1 “contributes to oncogenic potential by maintaining differentiation block and clonogenic activity.”
In vitro experiments with normal HSPCs showed that FOXC1 expression temporarily impairs myeloid differentiation. In mice, expression of FOXC1 in normal HSPCs reduced donor:recipient chimerism in the blood and skewed differentiation toward the myeloid lineage and away from the B-cell lineage.
By comparing samples from AML patients, the researchers found that FOXC1 expression is associated with high HOX gene expression.
Subsequent experiments showed that FOXC1 collaborates with HOXA9 to enhance clonogenic potential and cell-cycle progression, help block monocyte/macrophage and B-lineage differentiation, and accelerate the onset of symptomatic leukemia in mice.
To determine if the same effects occur in humans, the researchers again analyzed data from AML patients. The results indicated that FOXC1 expression helps block monocyte/macrophage differentiation and leads to inferior survival.
Dr Somervaille and his colleagues said these findings may have therapeutic implications, as previous research has shown that, in basal-like breast cancer, high FOXC1 expression renders cells more susceptible to pharmacological inhibition of NF-κB. But additional research is needed to determine whether the same vulnerability exists in AML.
Living near dams increases malaria risk, study shows
More than 1 million people in sub-Saharan Africa will contract malaria this year because they live near a large dam, according to a study published in Malaria Journal.
For the first time, researchers correlated the location of large dams in sub-Saharan Africa with the incidence of malaria.
And they found evidence to suggest that construction of an expected 78 major new dams over the next few years will lead to an additional 56,000 malaria cases annually.
The researchers said these findings have major implications for new dam projects and how health impacts should be assessed prior to construction.
“Dams are at the center of much development planning in Africa,” said study author Solomon Kibret, a graduate student at the University of New England in Armidale, New South Wales, Australia.
“While dams clearly bring many benefits—contributing to economic growth, poverty alleviation, and food security—adverse malaria impacts need to be addressed or they will undermine the sustainability of Africa’s drive for development.”
As part of the CGIAR Research Program on Water, Land, and Ecosystems, Kibret and colleagues looked at 1268 dams in sub-Saharan Africa. Of these, 723 (57%) are in malarious areas.
The researchers compared detailed maps of malaria incidence with the dam sites. The number of annual malaria cases associated with the dams was estimated by comparing the number of cases for communities less than 5 kilometers from the dam reservoir with the number of cases for communities further away.
The team found that 15 million people live within 5 kilometers of dam reservoirs and are therefore at risk of contracting malaria. And at least 1.1 million malaria cases annually are linked to the presence of the dams.
“Our study showed that the population at risk of malaria around dams is at least 4 times greater than previously estimated,” Kibret said, noting that the authors were conservative in all their analyses.
The risk is particularly high in areas of sub-Saharan Africa with “unstable” malaria transmission, where malaria is seasonal. The study indicated that the impact of dams on malaria in unstable areas could either lead to intensified malaria transmission or change the nature of transmission from seasonal to perennial.
Explaining the risk
Previous research revealed increases in malaria incidence near major sub-Saharan dams such as the Akosombo Dam in Ghana, the Koka Dam in Ethiopia, and the Kamburu Dam in Kenya. But until now, no attempt has been made to assess the cumulative effect of large dam-building on malaria.
Malaria is transmitted by the Anopheles mosquito, which needs slow-moving or stagnant water in which to breed. Dam reservoirs, particularly shallow puddles that often form along shorelines, provide a perfect environment for the insects to multiply. Thus, dam construction can intensify transmission and shift patterns of malaria infection.
Many African countries are planning new dams to help drive economic growth and increase water security. Improved water storage for growing populations, irrigation, and hydropower generation are needed for a fast-developing continent, but the researchers warn that building new dams has potential costs as well as benefits.
“Dams are an important option for governments anxious to develop,” said study author Matthew McCartney, PhD, of the International Water Management Institute in Vientiane, Laos.
“But it is unethical that people living close to them pay the price of that development through increased suffering and, possibly in extreme cases, loss of life due to disease.”
Lowering the risk
The researchers noted that, despite growing evidence of the impact of dams on malaria, there is scant evidence of their negative impacts being fully offset.
The team therefore made recommendations for managing the increased malaria risk. They said dam reservoirs could be more effectively designed and managed to reduce mosquito breeding. For instance, one option is to adopt operating schedules that, at critical times, dry out shoreline areas where mosquitoes tend to breed.
The researchers said dam developers should also consider increasing investment in integrated malaria intervention programs that include measures such as bed net distribution. Other environmental controls, such as introducing fish that eat mosquito larvae into dam reservoirs, could also help reduce malaria cases in some instances.
“The bottom line is that adverse malaria impacts of dams routinely receive recognition in Environmental Impact Assessments, and areas around dams are frequently earmarked for intensive control efforts,” said study author Jonathan Lautze, PhD, of the International Water Management Institute in Pretoria, South Africa.
“The findings of our work hammer home the reality that this recognition and effort—well-intentioned though it may be—is simply not sufficient. Given the need for water resources development in Africa, malaria control around dams requires interdisciplinary cooperation, particularly between water and health communities. Malaria must be addressed while planning, designing, and operating African dams.”
Doc stresses importance of vitamin K shots
Cases of vitamin K-deficiency bleeding (VKDB) reported in infants have healthcare professionals concerned about parents refusing vitamin K shots for their newborns.
Some parents have been declining the shots in what is believed to be an extension of the anti-vaccination movement.
But avoiding vitamin K shots can result in dire consequences for newborns, said DeeAnne Jackson, MD, of the University of Alabama at Birmingham.
“Newborns have been receiving vitamin K booster injections since 1961 to prevent internal bleeding,” Dr Jackson noted. “These injections are necessary because babies have very low levels of vitamin K at birth, which can lead to serious bleeding problems if not supplemented. It is an essential nutrient babies need to assist the body in blood clot formation.”
In a recent issue of the Journal of Emergency Medicine, doctors in Ohio documented a case where a 10-week-old child had profound anemia and intracranial bleeding after the child’s parents refused both the vitamin K shot and the hepatitis B vaccine.
The parents brought the child to the emergency room when the mother noticed flecks of blood in the baby’s stool. With an infusion of vitamin K, emergency physicians were able to stop the intracranial bleeding before it became severe.
A previous report published in 2013 revealed 4 cases of VKDB at a hospital in Nashville, Tennessee. These incidents were directly related to newborns not receiving their vitamin K shot.
When the US Centers for Disease Control and Prevention investigated this issue, the agency found that 28% of parents with babies born at private birthing centers in Nashville had refused the shot.
An update published in 2014 detailed 5 cases of late VKDB treated at the aforementioned hospital between February and September 2013 and 2 additional infants who had severe vitamin K deficiency but no bleeding.
Dr Jackson believes incidents like these might be avoided by better communication between parents and healthcare professionals.
“I would encourage parents who may be nervous about vitamin K shots or vaccines to start these conversations prior to their baby’s delivery so they can learn more about why these treatments are recommended ahead of time,” she said.
“You really shouldn’t wait to see if your baby needs a vitamin K shot after birth, because delaying medical care can lead to serious and life-threatening consequences.”
Photo by Vera Kratochvil
Cases of vitamin K-deficiency bleeding (VKDB) reported in infants have healthcare professionals concerned about parents refusing vitamin K shots for their newborns.