Too Much Coffee Linked to Accelerated Cognitive Decline
PHILADELPHIA – Drinking too much coffee may be linked to accelerated cognitive decline, results from a large study suggest.
Investigators examined the impact of different amounts of coffee and tea on fluid intelligence — a measure of cognitive functions including abstract reasoning, pattern recognition, and logical thinking.
“It’s the old adage that too much of anything isn’t good. It’s all about balance, so moderate coffee consumption is okay but too much is probably not recommended,” said study investigator Kelsey R. Sewell, PhD, Advent Health Research Institute, Orlando, Florida.
The findings of the study were presented at the 2024 Alzheimer’s Association International Conference (AAIC).
One of the World’s Most Widely Consumed Beverages
Coffee is one of the most widely consumed beverages around the world. The beans contain a range of bioactive compounds, including caffeine, chlorogenic acid, and small amounts of vitamins and minerals.
Consistent evidence from observational and epidemiologic studies indicates that intake of both coffee and tea is associated with a reduced risk for stroke, heart failure, cancers, diabetes, and Parkinson’s disease.
Several studies also suggest that coffee may reduce the risk for Alzheimer’s disease, said Dr. Sewell. However, there are limited longitudinal data on associations between coffee and tea intake and cognitive decline, particularly in distinct cognitive domains.
Dr. Sewell’s group previously published a study of cognitively unimpaired older adults that found greater coffee consumption was associated with slower cognitive decline and slower accumulation of brain beta-amyloid.
Their current study extends some of the prior findings and investigates the relationship between both coffee and tea intake and cognitive decline over time in a larger sample of older adults.
This new study included 8451 mostly female (60%) and White (97%) cognitively unimpaired adults older than 60 (mean age, 67.8 years) in the UK Biobank, a large-scale research resource containing in-depth, deidentified genetic and health information from half a million UK participants. Study subjects had a mean body mass index (BMI) of 26, and about 26% were apolipoprotein E epsilon 4 (APOE e4) gene carriers.
Researchers divided coffee and tea consumption into three categories: high, moderate, and no consumption.
For daily coffee consumption, 18% reported drinking four or more cups (high consumption), 58% reported drinking one to three cups (moderate consumption), and 25% reported that they never drink coffee. For daily tea consumption, 47% reported drinking four or more cups (high consumption), 38% reported drinking one to three cups (moderate consumption), and 15% reported that they never drink tea.
The study assessed cognitive function at baseline and at least two additional patient visits.
Researchers used linear mixed models to assess the relationships between coffee and tea intake and cognitive outcomes. The models adjusted for age, sex, Townsend deprivation index (reflecting socioeconomic status), ethnicity, APOE e4 status, and BMI.
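For readers curious what such a specification might look like in practice, below is a minimal, hypothetical sketch in Python using statsmodels; the column names, the data file, and the random-slope term are illustrative assumptions, not details reported by the investigators.

```python
# Hypothetical sketch of a linear mixed model for fluid intelligence over time.
# Column names (fluid_iq, years, coffee_cat, tea_cat, etc.) are assumed, not
# taken from the study; the random-slope choice is likewise an assumption.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("biobank_subset.csv")  # assumed long format: one row per visit

model = smf.mixedlm(
    "fluid_iq ~ years * C(coffee_cat) + years * C(tea_cat) "
    "+ age + C(sex) + townsend + C(ethnicity) + C(apoe_e4) + bmi",
    data=df,
    groups=df["participant_id"],  # random intercept for each participant
    re_formula="~years",          # random slope for time within participant
)
result = model.fit()
print(result.summary())  # time-by-consumption interactions estimate group differences in decline
```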
Steeper Decline
Compared with high coffee consumption (four or more cups daily), people who never consumed coffee (beta, 0.06; standard error [SE], 0.02; P = .005) and those with moderate consumption (beta, 0.07; SE, 0.02; P < .001) had slower decline in fluid intelligence after an average of 8.83 years of follow-up.
“We can see that those with high coffee consumption showed the steepest decline in fluid intelligence across the follow up, compared to those with moderate coffee consumption and those never consuming coffee,” said Dr. Sewell, referring to illustrative graphs.
At the same time, “our data suggest that across this time period, moderate coffee consumption can serve as some kind of protective factor against cognitive decline,” she added.
For tea, there was a somewhat different pattern. People who never drank tea had a greater decline in fluid intelligence, compared with those who had moderate consumption (beta, 0.06; SE, 0.02; P = .0090) or high consumption (beta, 0.06; SE, 0.02; P = .003).
Because this is an observational study, “we still need randomized controlled trials to better understand the neuroprotective mechanism of coffee and tea compounds,” said Dr. Sewell.
Responding later to a query from a meeting delegate about how moderate coffee drinking could be protective, Dr. Sewell said there are probably “different levels of mechanisms,” including at the molecular level (possibly involving amyloid toxicity) and the behavioral level (possibly involving sleep patterns).
Dr. Sewell said that she hopes this line of investigation will lead to new avenues of research in preventive strategies for Alzheimer’s disease.
“We hope that coffee and tea intake could contribute to the development of a safe and inexpensive strategy for delaying the onset and reducing the incidence for Alzheimer’s disease.”
A limitation of the study is possible recall bias, because coffee and tea consumption were self-reported. However, this may not be much of an issue because coffee and tea consumption “is usually quite a habitual behavior,” said Dr. Sewell.
The study also had no data on midlife coffee or tea consumption and did not compare the effect of different preparation methods or types of coffee and tea — for example, green tea versus black tea.
When asked if the study controlled for smoking, Dr. Sewell said it didn’t but added that it would be interesting to explore its impact on cognition.
Dr. Sewell reported no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
FROM AAIC 2024
Statins: So Misunderstood
Recently, a patient of mine was hospitalized with chest pain. She was diagnosed with an acute coronary syndrome and started on a statin in addition to a beta-blocker, aspirin, and clopidogrel. After discharge, she had symptoms of dizziness and recurrent chest pain and her first thought was to stop the statin because she believed that her symptoms were statin-related side effects. I will cover a few areas where I think that there are some misunderstandings about statins.
Statins Are Not Bad For the Liver
When lovastatin first became available for prescription in the 1980s, frequent monitoring of transaminases was recommended. Patients and healthcare professionals became accustomed to frequent liver tests to monitor for statin toxicity, and to this day, some healthcare professionals still obtain liver function tests for this purpose.
But is there a reason to do this? Pfeffer and colleagues reported on the results of over 112,000 people enrolled in the West of Scotland Coronary Protection trial and found that the percentage of patients with any abnormal liver function test was similar (> 3 times the upper limit of normal for ALT) for patients taking pravastatin (1.4%) and for patients taking placebo (1.4%).1 A panel of liver experts concurred that statin-associated transaminase elevations were not indicative of liver damage or dysfunction.2 Furthermore, they noted that chronic liver disease and compensated cirrhosis were not contraindications to statin use.
In a small study, use of low-dose atorvastatin in patients with nonalcoholic steatohepatitis improved transaminase values in 75% of patients and liver steatosis and nonalcoholic fatty liver disease activity scores were significantly improved on biopsy in most of the patients.3 The US Food and Drug Administration (FDA) removed the recommendation for routine regular monitoring of liver function for patients on statins in 2012.4
Statins Do Not Cause Muscle Pain in Most Patients
Most muscle pain occurring in patients on statins is not due to the statin, although patient concerns about muscle pain are common. In a meta-analysis of 19 large statin trials, 27.1% of participants treated with a statin reported at least one episode of muscle pain or weakness during a median of 4.3 years, compared with 26.6% of participants treated with placebo.5 Muscle pain for any reason is common, and patients on statins may stop therapy because of the symptoms.
Cohen and colleagues performed a survey of past and current statin users, asking about muscle symptoms.6 Muscle-related side effects were reported by 60% of former statin users and 25% of current users.
Herrett and colleagues performed an extensive series of n-of-1 trials involving 200 patients who had stopped or were considering stopping statins because of muscle symptoms.7 Participants received either 2-month blocks of atorvastatin 20 mg or 2-month blocks of placebo, six times. They rated their muscle symptoms on a visual analogue scale at the end of each block. There was no difference in muscle symptom scores between the statin and placebo periods.
Wood and colleagues took it a step further when they planned an n-of-1 trial that included statin, placebo, and no treatment.8 Each participant received four bottles of atorvastatin 20 mg, four bottles of placebo, and four empty bottles. Each month they used treatment from the bottles based on a random sequence and reported daily symptom scores. The mean symptom intensity score was 8.0 during no-tablet months, 15.4 during placebo months (P < .001, compared with no-tablet months), and 16.3 during statin months (P < .001, compared with no-tablet months; P = .39, compared with placebo).
Statins Are Likely Helpful In the Very Elderly
Should we be using statins for primary prevention in our very old patients? For many years the answer was generally “no” on the basis of a lack of evidence. Patients in their 80s often were not included in clinical trials. The much-used American Heart Association risk calculator stops at age 79. Given the prevalence of coronary artery disease in patients as they reach their 80s, wouldn’t primary prevention really be secondary prevention? In a recent study, Xu and colleagues compared outcomes for patients who were treated with statins for primary prevention with outcomes for a group who were not.9 In patients aged 75-84, there was a risk reduction for major cardiovascular events of 1.2% over 5 years, and for those 85 and older the risk reduction was 4.4%. Importantly, there were no significantly increased risks for myopathies and liver dysfunction in either age group.
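To put those absolute risk reductions in more familiar terms, a rough number-needed-to-treat calculation can help; the arithmetic below is my own back-of-the-envelope illustration, not a figure reported by Xu and colleagues.

```python
# Rough number needed to treat (NNT) over 5 years, computed from the absolute
# risk reductions quoted above; this is an illustrative calculation only.
def nnt(absolute_risk_reduction: float) -> float:
    return 1.0 / absolute_risk_reduction

print(round(nnt(0.012)))  # ages 75-84: ~83 treated for 5 years per event avoided
print(round(nnt(0.044)))  # ages 85+: ~23 treated for 5 years per event avoided
```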
Dr. Paauw is professor of medicine in the division of general internal medicine at the University of Washington, Seattle, and he serves as third-year medical student clerkship director at the University of Washington. He is a member of the editorial advisory board of Internal Medicine News. Dr. Paauw has no conflicts to disclose. Contact him at imnews@mdedge.com.
References
1. Pfeffer MA et al. Circulation. 2002;105(20):2341-6.
2. Cohen DE et al. Am J Cardiol. 2006;97(8A):77C-81C.
3. Hyogo H et al. Metabolism. 2008;57(12):1711-8.
4. FDA Drug Safety Communication: Important safety label changes to cholesterol-lowering statin drugs. 2012 Feb 28.
5. Cholesterol Treatment Trialists’ Collaboration. Lancet. 2022;400(10355):832-45.
6. Cohen JD et al. J Clin Lipidol. 2012;6(3):208-15.
7. Herrett E et al. BMJ. 2021 Feb 24;372:n1355.
8. Wood FA et al. N Engl J Med. 2020;383(22):2182-4.
9. Xu W et al. Ann Intern Med. 2024;177(6):701-10.
Alzheimer’s Blood Test in Primary Care Could Slash Diagnostic, Treatment Wait Times
As disease-modifying treatments for Alzheimer’s disease (AD) become available, the need for timely and accurate diagnosis is growing. Currently, the patient diagnostic journey is often prolonged owing to the limited number of AD specialists, causing concern among healthcare providers and patients alike. Now, a new study suggests that use of high-performing blood tests in primary care could identify potential patients with AD much earlier, possibly reducing wait times for specialist care and receipt of treatment.
“We need to triage in primary care and send preferentially the ones that actually could be eligible for treatment, and not those who are just worried because their grandmother reported that she has Alzheimer’s,” lead researcher Soeren Mattke, MD, DSc, told this news organization.
“By combining a brief cognitive test with an accurate blood test of Alzheimer’s pathology in primary care, we can reduce unnecessary referrals, and shorten appointment wait times,” said Dr. Mattke, director of the Brain Health Observatory at the University of Southern California in Los Angeles.
The findings were presented at the Alzheimer’s Association International Conference (AAIC) 2024.
Projected Wait Times of 100 Months by 2033
The investigators used a Markov model to estimate wait times for patients eligible for AD treatment, taking into account constrained capacity for specialist visits.
The model included the projected US population of people aged 55 years or older from 2023 to 2032. It assumed that individuals would undergo a brief cognitive assessment in primary care and, if results suggested early-stage cognitive impairment, be referred to an AD specialist under three scenarios: no blood test, blood test to rule out AD pathology, and blood test to confirm AD pathology.
According to the model, without an accurate blood test for AD pathology, projected wait times to see a specialist are about 12 months in 2024 and will increase to more than 100 months in 2033, largely owing to a lack of specialist appointments.
In contrast, with the availability of an accurate blood test to rule out AD, average wait times would be just 3 months in 2024 and increase to only about 13 months in 2033, because far fewer patients would need to see a specialist.
Availability of a blood test to rule in AD pathology in primary care would have a limited effect on wait times because, based on expert assumptions, 50% of patients would still undergo confirmatory specialist testing, the model suggests.
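The presentation did not include the model’s code, but the logic of a capacity-constrained wait-time projection can be sketched roughly as follows; the capacity, referral volume, and referral fractions below are made-up placeholders for illustration, not inputs from the study.

```python
# Illustrative sketch only (not the authors' Markov model): a simple
# capacity-constrained backlog projection under the three triage scenarios.
ANNUAL_SPECIALIST_CAPACITY = 1_000_000  # assumed specialist visits per year
ANNUAL_SCREEN_POSITIVES = 1_500_000     # assumed primary-care positives per year

scenarios = {
    "no blood test": 1.00,  # every screen-positive patient referred onward
    "rule-out test": 0.40,  # assumed: blood test clears most false-positives
    "rule-in test": 0.80,   # assumed: many patients still sent for confirmation
}

for name, referral_fraction in scenarios.items():
    backlog = 0.0
    for year in range(2024, 2034):
        backlog += ANNUAL_SCREEN_POSITIVES * referral_fraction  # new referrals
        backlog -= min(backlog, ANNUAL_SPECIALIST_CAPACITY)     # visits completed
    # crude wait estimate: months needed to work through the remaining backlog
    wait_months = 12 * backlog / ANNUAL_SPECIALIST_CAPACITY
    print(f"{name}: roughly {wait_months:.0f} months of wait by 2033")
```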
Prioritizing Resources
“Millions of people have mild memory complaints, and if they all start coming to neurologists, it could completely flood the system and create long wait times for everybody,” Dr. Mattke told this news organization.
The problem, he said, is that brief cognitive tests performed in primary care are not particularly specific for mild cognitive impairment.
“They work pretty well for manifest advanced dementia but for mild cognitive impairment, which is a very subtle, symptomatic disease, they are only about 75% accurate. One quarter are false-positives. That’s a lot of people,” Dr. Mattke said.
He also noted that although earlier blood tests were about 75% accurate, they are now about 90% accurate, “so we are getting to a level where we can pretty much say with confidence that this is likely Alzheimer’s,” Dr. Mattke said.
Commenting on this research for this news organization, Heather Snyder, PhD, vice president of medical and scientific relations at the Alzheimer’s Association, said it is clear that blood tests, “once confirmed, could have a significant impact on the wait times” for dementia assessment.
“After an initial blood test, we might be able to rule out or rule in individuals who should go to a specialist for further follow-up and testing. This allows us to really ensure that we’re prioritizing resources accordingly,” said Dr. Snyder, who was not involved in the study.
This project was supported by a research contract from C2N Diagnostics LLC to USC. Dr. Mattke serves on the board of directors of Senscio Systems Inc. and the scientific advisory board of ALZPath and Boston Millennia Partners and has received consulting fees from Biogen, C2N, Eisai, Eli Lilly, Novartis, and Roche/Genentech. Dr. Snyder has no relevant disclosures.
A version of this article first appeared on Medscape.com.
FROM AAIC 2024
New Models Predict Time From Mild Cognitive Impairment to Dementia
Using a large, real-world population, researchers have developed models that predict cognitive decline in amyloid-positive patients with either mild cognitive impairment (MCI) or mild dementia.
The models may help clinicians better answer common questions from their patients about their rate of cognitive decline, noted the investigators, led by Pieter J. van der Veere, MD, Alzheimer Center and Department of Neurology, Amsterdam Neuroscience, VU University Medical Center, Amsterdam, the Netherlands.
The findings were published online in Neurology.
Easy-to-Use Prototype
On average, it takes 4 years for MCI to progress to dementia. While new disease-modifying drugs targeting amyloid may slow progression, whether this effect is clinically meaningful is debatable, the investigators noted.
Earlier published models predicting cognitive decline either are limited to patients with MCI or haven’t been developed for easy clinical use, they added.
For the single-center study, researchers selected 961 amyloid-positive patients, mean age 65 years, who had at least two longitudinal Mini-Mental State Examinations (MMSEs). Of these, 310 had MCI, and 651 had mild dementia; 48% were women, and over 90% were White.
Researchers used linear mixed modeling to predict MMSE over time. They included age, sex, baseline MMSE, apolipoprotein E epsilon 4 status, cerebrospinal fluid (CSF) beta-amyloid (Aβ) 1-42 and plasma phosphorylated-tau markers, and MRI total brain and hippocampal volume measures in the various models, including the final biomarker prediction models.
At follow-up, investigators found that the yearly decline in MMSE scores increased in patients with MCI and in those with mild dementia. In MCI, the average MMSE declined from 26.4 (95% confidence interval [CI], 26.2-26.7) at baseline to 21.0 (95% CI, 20.2-21.7) after 5 years.
In mild dementia, the average MMSE declined from 22.4 (95% CI, 22.0-22.7) to 7.8 (95% CI, 6.8-8.9) at 5 years.
The predicted mean time to reach an MMSE of 20 (indicating mild dementia) for a hypothetical patient with MCI and a baseline MMSE of 28 and CSF Aβ 1-42 of 925 pg/mL was 6 years (95% CI, 5.4-6.7 years).
However, with a hypothetical drug treatment that reduces the rate of decline by 30%, the patient would not reach the stage of moderate dementia for 8.6 years.
For a hypothetical patient with mild dementia with a baseline MMSE of 20 and CSF Aβ 1-42 of 625 pg/mL, the predicted mean time to reach an MMSE of 15 was 2.3 years (95% CI, 2.1-2.5), or 3.3 years if decline is reduced by 30% with drug treatment.
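If decline is assumed to be roughly linear, the arithmetic behind these two projections is simple: slowing the rate of decline by 30% stretches the time to reach a given MMSE threshold by a factor of 1/(1 - 0.3), or about 1.43. The short check below reproduces both figures under that simplifying assumption, which is not the authors’ full mixed model.

```python
# Back-of-the-envelope check of the two projections above, assuming a roughly
# linear MMSE decline (a simplification, not the authors' full model).
def slowed_time(baseline_years: float, slowing: float = 0.30) -> float:
    """Years to reach the same MMSE threshold when decline is slowed by `slowing`."""
    return baseline_years / (1.0 - slowing)

print(round(slowed_time(6.0), 1))  # MCI example: about 8.6 years instead of 6.0
print(round(slowed_time(2.3), 1))  # mild dementia example: about 3.3 years instead of 2.3
```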
External validation of the prediction models using data from the Alzheimer’s Disease Neuroimaging Initiative, a longitudinal cohort of participants who were cognitively unimpaired or had MCI or dementia, showed comparable performance between the model-building approaches.
Researchers have incorporated the models in an easy-to-use calculator as a prototype tool that physicians can use to discuss prognosis, the uncertainty surrounding the predictions, and the impact of intervention strategies with patients.
Future prediction models may be able to predict patient-reported outcomes such as quality of life and daily functioning, the researchers noted.
“Until then, there is an important role for clinicians in translating the observed and predicted cognitive functions,” they wrote.
Compared with other studies predicting the MMSE decline using different statistical techniques, these new models showed similar or even better predictive performance while requiring less or similar information, the investigators noted.
The study used MMSE as a measure of cognition, but there may be intraindividual variation in these measures among cognitively normal patients, and those with cognitive decline may score lower if measurements are taken later in the day. Another study limitation was that the models were built for use in memory clinics, so generalizability to the general population could be limited.
The study was supported by Eisai, ZonMW, and Health~Holland Top Sector Life Sciences & Health. See paper for financial disclosures.
A version of this article first appeared on Medscape.com.
Using a large, real-world population, researchers have developed models that predict cognitive decline in amyloid-positive patients with either mild cognitive impairment (MCI) or mild dementia.
The models may help clinicians better answer common questions from their patients about their rate of cognitive decline, noted the investigators, led by Pieter J. van der Veere, MD, Alzheimer Center and Department of Neurology, Amsterdam Neuroscience, VU University Medical Center, Amsterdam, the Netherlands.
The findings were published online in Neurology.
Easy-to-Use Prototype
On average, it takes 4 years for MCI to progress to dementia. While new disease-modifying drugs targeting amyloid may slow progression, whether this effect is clinically meaningful is debatable, the investigators noted.
Earlier published models predicting cognitive decline either are limited to patients with MCI or haven’t been developed for easy clinical use, they added.
For the single-center study, researchers selected 961 amyloid-positive patients, mean age 65 years, who had at least two longitudinal Mini-Mental State Examinations (MMSEs). Of these, 310 had MCI, and 651 had mild dementia; 48% were women, and over 90% were White.
Researchers used linear mixed modeling to predict MMSE over time. They included age, sex, baseline MMSE, apolipoprotein E epsilon 4 status, cerebrospinal fluid (CSF) beta-amyloid (Aß) 1-42 and plasma phosphorylated-tau markers, and MRI total brain and hippocampal volume measures in the various models, including the final biomarker prediction models.
At follow-up, investigators found that the yearly decline in MMSEs increased in patients with both MCI and mild dementia. In MCI, the average MMSE declined from 26.4 (95% confidence interval [CI], 26.2-26.7) at baseline to 21.0 (95% CI, 20.2-21.7) after 5 years.
In mild dementia, the average MMSE declined from 22.4 (95% CI, 22.0-22.7) to 7.8 (95% CI, 6.8-8.9) at 5 years.
The predicted mean time to reach an MMSE of 20 (indicating mild dementia) for a hypothetical patient with MCI and a baseline MMSE of 28 and CSF Aß 1-42 of 925 pg/mL was 6 years (95% CI, 5.4-6.7 years).
However, with a hypothetical drug treatment that reduces the rate of decline by 30%, the patient would not reach the stage of moderate dementia for 8.6 years.
For a hypothetical patient with mild dementia with a baseline MMSE of 20 and CSF Aß 1-42 of 625 pg/mL, the predicted mean time to reach an MMSE of 15 was 2.3 years (95% CI, 2.1-2.5), or 3.3 years if decline is reduced by 30% with drug treatment.
External validation of the prediction models using data from the Alzheimer’s Disease Neuroimaging Initiative, a longitudinal cohort of patients not cognitively impaired or with MCI or dementia, showed comparable performance between the model-building approaches.
Researchers have incorporated the models in an easy-to-use calculator as a prototype tool that physicians can use to discuss prognosis, the uncertainty surrounding the predictions, and the impact of intervention strategies with patients.
Future prediction models may be able to predict patient-reported outcomes such as quality of life and daily functioning, the researchers noted.
“Until then, there is an important role for clinicians in translating the observed and predicted cognitive functions,” they wrote.
Compared with other studies predicting the MMSE decline using different statistical techniques, these new models showed similar or even better predictive performance while requiring less or similar information, the investigators noted.
The study used MMSE as a measure of cognition, but there may be intraindividual variation in these measures among cognitively normal patients, and those with cognitive decline may score lower if measurements are taken later in the day. Another study limitation was that the models were built for use in memory clinics, so generalizability to the general population could be limited.
The study was supported by Eisai, ZonMW, and Health~Holland Top Sector Life Sciences & Health. See paper for financial disclosures.
A version of this article first appeared on Medscape.com.
Using a large, real-world population, researchers have developed models that predict cognitive decline in amyloid-positive patients with either mild cognitive impairment (MCI) or mild dementia.
The models may help clinicians better answer common questions from their patients about their rate of cognitive decline, noted the investigators, led by Pieter J. van der Veere, MD, Alzheimer Center and Department of Neurology, Amsterdam Neuroscience, VU University Medical Center, Amsterdam, the Netherlands.
The findings were published online in Neurology.
Easy-to-Use Prototype
On average, it takes 4 years for MCI to progress to dementia. While new disease-modifying drugs targeting amyloid may slow progression, whether this effect is clinically meaningful is debatable, the investigators noted.
Earlier published models predicting cognitive decline either are limited to patients with MCI or haven’t been developed for easy clinical use, they added.
For the single-center study, researchers selected 961 amyloid-positive patients, mean age 65 years, who had at least two longitudinal Mini-Mental State Examinations (MMSEs). Of these, 310 had MCI, and 651 had mild dementia; 48% were women, and over 90% were White.
Researchers used linear mixed modeling to predict MMSE over time. They included age, sex, baseline MMSE, apolipoprotein E epsilon 4 status, cerebrospinal fluid (CSF) beta-amyloid (Aß) 1-42 and plasma phosphorylated-tau markers, and MRI total brain and hippocampal volume measures in the various models, including the final biomarker prediction models.
At follow-up, investigators found that the yearly decline in MMSEs increased in patients with both MCI and mild dementia. In MCI, the average MMSE declined from 26.4 (95% confidence interval [CI], 26.2-26.7) at baseline to 21.0 (95% CI, 20.2-21.7) after 5 years.
In mild dementia, the average MMSE declined from 22.4 (95% CI, 22.0-22.7) to 7.8 (95% CI, 6.8-8.9) at 5 years.
The predicted mean time to reach an MMSE of 20 (indicating mild dementia) for a hypothetical patient with MCI and a baseline MMSE of 28 and CSF Aß 1-42 of 925 pg/mL was 6 years (95% CI, 5.4-6.7 years).
However, with a hypothetical drug treatment that reduces the rate of decline by 30%, the patient would not reach the stage of moderate dementia for 8.6 years.
For a hypothetical patient with mild dementia with a baseline MMSE of 20 and CSF Aß 1-42 of 625 pg/mL, the predicted mean time to reach an MMSE of 15 was 2.3 years (95% CI, 2.1-2.5), or 3.3 years if decline is reduced by 30% with drug treatment.
External validation of the prediction models using data from the Alzheimer’s Disease Neuroimaging Initiative, a longitudinal cohort that includes cognitively unimpaired participants as well as those with MCI or dementia, showed comparable performance between the model-building approaches.
Researchers have incorporated the models in an easy-to-use calculator as a prototype tool that physicians can use to discuss prognosis, the uncertainty surrounding the predictions, and the impact of intervention strategies with patients.
Future prediction models may be able to predict patient-reported outcomes such as quality of life and daily functioning, the researchers noted.
“Until then, there is an important role for clinicians in translating the observed and predicted cognitive functions,” they wrote.
Compared with other studies that predicted MMSE decline using different statistical techniques, these new models showed similar or even better predictive performance while requiring similar or less information, the investigators noted.
The study used the MMSE as a measure of cognition, but there may be intraindividual variation on this measure among cognitively normal patients, and those with cognitive decline may score lower if measurements are taken later in the day. Another limitation was that the models were built for use in memory clinics, so generalizability to the general population could be limited.
The study was supported by Eisai, ZonMW, and Health~Holland Top Sector Life Sciences & Health. See paper for financial disclosures.
A version of this article first appeared on Medscape.com.
FROM NEUROLOGY
Two Diets Linked to Improved Cognition, Slowed Brain Aging
An intermittent fasting (IF) diet and a standard healthy living (HL) diet focused on healthy foods both led to weight loss, reduced insulin resistance (IR), and slowed brain aging in older overweight adults with IR, new research showed. However, neither diet had an effect on Alzheimer’s disease (AD) biomarkers.
Although investigators found both diets were beneficial, some outcomes were more robust with the IF diet.
“The study provides a blueprint for assessing brain effects of dietary interventions and motivates further research on intermittent fasting and continuous diets for brain health optimization,” wrote the investigators, led by Dimitrios Kapogiannis, MD, chief, human neuroscience section, National Institute on Aging, and adjunct associate professor of neurology, the Johns Hopkins University School of Medicine.
The findings were published online in Cell Metabolism.
Cognitive Outcomes
The prevalence of IR — reduced cellular sensitivity to insulin that is a hallmark of type 2 diabetes — increases with age and obesity, contributing to an increased risk for accelerated brain aging as well as AD and related dementias (ADRD) in older adults who have overweight.
Studies have reported that healthy diets promote overall health, but it is unclear whether, and to what extent, they improve brain health beyond general health enhancement.
Researchers used multiple brain and cognitive measures to assess dietary effects on brain health, including peripherally harvested neuron-derived extracellular vesicles (NDEVs) to probe neuronal insulin signaling; MRI to investigate the pace of brain aging; magnetic resonance spectroscopy (MRS) to measure brain glucose, metabolites, and neurotransmitters; and NDEVs and cerebrospinal fluid to derive biomarkers for AD/ADRD.
The study included 40 cognitively intact overweight participants with IR, mean age 63.2 years, 60% women, and 62.5% White. Their mean body weight was 97.1 kg and mean body mass index (BMI) was 34.4.
Participants were randomly assigned to 8 weeks of an IF diet or an HL diet that emphasizes fruits, vegetables, whole grains, lean proteins, and low-fat dairy and limits added sugars, saturated fats, and sodium.
The IF diet involved following the HL diet for 5 days per week and restricting calories to a quarter of the recommended daily intake for 2 consecutive days.
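To make the fasting-day target concrete (a hypothetical illustration; the study individualized intake, and the 2000 kcal reference value is an assumption, not a study figure):

recommended_kcal_per_day = 2000              # assumed reference intake, not a study value
fasting_day_kcal = recommended_kcal_per_day / 4
print(fasting_day_kcal)                      # 500 kcal on each of the 2 consecutive fasting days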
Both diets reduced neuronal IR and had comparable effects in improving insulin signaling biomarkers in NDEVs, reducing brain glucose on MRS, and improving blood biomarkers of carbohydrate and lipid metabolism.
Using MRI, researchers also assessed brain age, an indication of whether the brain appears older or younger than an individual’s chronological age. There was a decrease of 2.63 years with the IF diet (P = .05) and 2.42 years with the HL diet (P < .001) in the anterior cingulate and ventromedial prefrontal cortex.
Both diets improved executive function and memory, with those following the IF diet benefiting more in strategic planning, switching between two cognitively demanding tasks, cued recall, and other areas.
Hypothesis-Generating Research
AD biomarkers, including amyloid beta 42 (Aβ42), Aβ40, and plasma phosphorylated-tau181, did not change with either diet, a finding that investigators speculated may be due to the short duration of the study. Neurofilament light chain levels increased across groups, with no differences between the diets.
In other findings, BMI decreased by 1.41 with the IF diet and by 0.80 with the HL diet, and a similar pattern was observed for weight. Waist circumference decreased in both groups with no significant differences between diets.
An exploratory analysis showed executive function improved with the IF diet but not with the HL diet in women, whereas it improved with both diets in men. BMI and apolipoprotein E and SLC16A7 genotypes also modulated diet effects.
Both diets were well tolerated. The most frequent adverse events were gastrointestinal and occurred only with the IF diet.
The authors noted that the findings are preliminary and hypothesis generating. Study limitations included the short duration and limited statistical power to detect anything other than moderate to large effects and between-diet differences. Researchers also did not collect data on dietary intake, so lapses in adherence cannot be excluded. However, the large decreases in BMI, weight, and waist circumference with both diets indicated high adherence.
The study was supported by the National Institutes of Health’s National Institute on Aging. The authors reported no competing interests.
A version of this article first appeared on Medscape.com.
FROM CELL METABOLISM
Study Links Newer Shingles Vaccine to Delayed Dementia Diagnosis
The study builds on previous observations of a reduction in dementia risk with the older live shingles vaccine and reports a delay in dementia diagnosis of 164 days with the newer recombinant version, compared with the live vaccine.
“Given the prevalence of dementia, a delay of 164 days in diagnosis would not be a trivial effect at the public health level. It’s a big enough effect that if there is a causality it feels meaningful,” said senior author Paul Harrison, DM, FRCPsych, professor of psychiatry at the University of Oxford, Oxford, England.
But Dr. Harrison stressed that the study had not proven that the shingles vaccine reduced dementia risk.
“The design of the study allows us to do away with many of the confounding effects we usually see in observational studies, but this is still an observational study, and as such it cannot prove a definite causal effect,” he said.
The study was published online on July 25 in Nature Medicine.
‘Natural Experiment’
Given the risk for deleterious consequences of shingles, vaccination is now recommended for older adults in many countries. The previously used live shingles vaccine (Zostavax) is being replaced in most countries with the new recombinant shingles vaccine (Shingrix), which is more effective at preventing shingles infection.
The current study made use of a “natural experiment” in the United States, which switched over from use of the live vaccine to the recombinant vaccine in October 2017.
Researchers used electronic health records to compare the incidence of a dementia diagnosis in individuals who received the live shingles vaccine prior to October 2017 with those who received the recombinant version after the United States made the switch.
They also used propensity score matching to further control for confounding factors, comparing 103,837 individuals who received a first dose of the live shingles vaccine between October 2014 and September 2017 with the same number of matched people who received the recombinant vaccine between November 2017 and October 2020.
Results showed that within the 6 years after vaccination, the recombinant vaccine was associated with a delay in the diagnosis of dementia, compared with the live vaccine. Specifically, receiving the recombinant vaccine was associated with a 17% increase in diagnosis-free time, translating to 164 additional days lived without a diagnosis of dementia in those subsequently affected.
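A rough back-calculation (an illustrative sketch, assuming the 17% applies to the mean diagnosis-free time among people who were eventually diagnosed) shows how the two figures relate:

extra_days = 164
relative_increase = 0.17
baseline_days = extra_days / relative_increase   # implied diagnosis-free time under the live vaccine
print(round(baseline_days), round(baseline_days / 365.25, 1))  # ~965 days, ~2.6 years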
As an additional control, the researchers also found significantly lower risks for dementia in individuals receiving the new recombinant shingles vaccine vs two other vaccines commonly used in older people: influenza and tetanus/diphtheria/pertussis vaccines, with increases in diagnosis-free time of 14%-27%.
Reduced Risk or Delayed Diagnosis?
Speaking at a Science Media Centre press conference on the study, lead author Maxime Taquet, PhD, FRCPsych, clinical lecturer in psychiatry at the University of Oxford, noted that the total number of dementia cases was similar in the two shingles vaccine groups by the end of the 6-year follow-up period, but there was a difference in the time at which patients received a diagnosis of dementia.
“The study suggests that rather than actually reducing dementia risk, the recombinant vaccine delays the onset of dementia compared to the live vaccine in patients who go on to develop the condition,” he explained.
But when comparing the recombinant vaccine with the influenza and tetanus/diphtheria/pertussis vaccines there was a clear reduction in dementia risk itself, Dr. Taquet reported.
“It might well be that the live vaccine has a potential effect on the risk of dementia itself and therefore the recombinant vaccine only shows a delay in dementia compared to the live vaccine, but both of them might decrease the overall risk of dementia,” he suggested.
But the researchers cautioned that this study could not prove causality.
“While the two groups were very carefully matched in terms of factors that might influence the development of dementia, we still have to be cautious before assuming that the vaccine is indeed causally reducing the risk of onset of dementia,” Dr. Harrison warned.
The researchers say the results would need to be confirmed in a randomized trial, which may have to be conducted in a slightly younger age group, as the shingles vaccine is currently recommended for all older individuals in the United Kingdom.
Vaccine recommendations vary from country to country, Dr. Harrison added. In the United States, the Centers for Disease Control and Prevention recommends the recombinant shingles vaccine for all adults aged 50 years or older.
In the meantime, it would be interesting to see whether further observational studies in other countries find similar results as this US study, Dr. Harrison said.
Mechanism Uncertain
Speculating on a possible mechanism behind the findings, Dr. Harrison suggested two plausible explanations.
“First, it is thought that the herpes virus could be one of many factors that could promote dementia, so a vaccine that stops reactivation of this virus might therefore be delaying that process,” he noted.
The other possibility is that adjuvants included in the recombinant vaccine to stimulate the immune system might have played a role.
“We don’t have any data on the mechanism, and this study did not address that, so further studies are needed to look into this,” Dr. Harrison said.
Stronger Effect in Women
Another intriguing finding is that the association between the recombinant vaccine and delayed dementia diagnosis seemed to be stronger in women than in men.
In the original study of the live shingles vaccine, a protective effect against dementia was shown only in women.
In the current study, the delay in dementia diagnosis was seen in both sexes but was stronger in women, with a 22% increase in diagnosis-free time in women versus a 13% increase in men for the recombinant versus the live vaccine.
As expected, the recombinant vaccine was associated with a lower risk for shingles disease vs the live vaccine (2.5% versus 3.5%), but women did not have a better response than men did in this respect.
“The better protection against shingles with the recombinant vaccine was similar in men and women, an observation that might be one reason to question the possible mechanism behind the dementia effect being better suppression of the herpes zoster virus by the recombinant vaccine,” Dr. Harrison commented.
Though these findings are not likely to lead to any immediate changes in policy regarding the shingles vaccine, Dr. Harrison said it would be interesting to see whether uptake of the vaccine increased after this study.
He estimated that, currently in the United Kingdom, about 60% of older adults choose to have the shingles vaccine. A 2020 study in the United States found that only about one-third of US adults over 60 had received the vaccine.
“It will be interesting to see if that figure increases after these data are publicized, but I am not recommending that people have the vaccine specifically to lower their risk of dementia because of the caveats about the study that we have discussed,” he commented.
Outside Experts Positive
Outside experts, providing comment to the Science Media Centre, welcomed the new research.
“The study is very well-conducted and adds to previous data indicating that vaccination against shingles is associated with lower dementia risk. More research is needed in future to determine why this vaccine is associated with lower dementia risk,” said Tara Spires-Jones, FMedSci, president of the British Neuroscience Association.
The high number of patients in the study and the adjustments for potential confounders are also strong points, noted Andrew Doig, PhD, professor of biochemistry, University of Manchester, Manchester, England.
“This is a significant result, comparable in effectiveness to the recent antibody drugs for Alzheimer’s disease,” Dr. Doig said. “Administering the recombinant shingles vaccine could well be a simple and cheap way to lower the risk of Alzheimer’s disease.”
Dr. Doig noted that a link between herpes zoster infection and the onset of dementia has been suspected for some time, and a trial of the antiviral drug valacyclovir against Alzheimer’s disease is currently underway.
In regard to the shingles vaccine, he said a placebo-controlled trial would be needed to prove causality.
“We also need to see how many years the effect might last and whether we should vaccinate people at a younger age. We know that the path to Alzheimer’s can start decades before any symptoms are apparent, so the vaccine might be even more effective if given to people in their 40s or 50s,” he said.
Dr. Harrison and Dr. Taquet reported no disclosures. Dr. Doig is a founder, director, and consultant for PharmaKure, which works on Alzheimer’s drugs and diagnostics. Other commentators declared no disclosures.
A version of this article first appeared on Medscape.com.
FROM NATURE MEDICINE
Heat Waves: A Silent Threat to Older Adults’ Kidneys
TOPLINE:
Older adults show an increase in creatinine and cystatin C levels after exposure to extreme heat in a dry setting despite staying hydrated; however, changes in these kidney function biomarkers are much more modest in a humid setting and in young adults.
METHODOLOGY:
- Older adults are vulnerable to heat-related morbidity and mortality, with kidney complications accounting for many excess hospital admissions during heat waves.
- Researchers investigated plasma-based markers of kidney function following extreme heat exposure for 3 hours in 20 young (21-39 years) and 18 older (65-76 years) adults recruited from the Dallas-Fort Worth area.
- All participants underwent heat exposure in a chamber at 47 °C (116 °F) and 15% relative humidity (dry setting) and 41 °C (105 °F) and 40% relative humidity (humid setting) on separate days. They performed light physical activity mimicking their daily tasks and drank 3 mL/kg body mass of water every hour while exposed to heat (see the worked hydration example after this list).
- Blood samples were collected at baseline, immediately before the end of heat exposure (end-heating), and 2 hours after heat exposure.
- Plasma creatinine was the primary outcome, with a change ≥ 0.3 mg/dL considered as clinically meaningful. Cystatin C was the secondary outcome.
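As a concrete illustration of the hydration protocol mentioned in the list above (a hypothetical 70-kg participant; individual body masses are not reported here):

body_mass_kg = 70                        # hypothetical participant
hourly_water_ml = 3 * body_mass_kg       # protocol: 3 mL/kg body mass each hour
total_water_ml = hourly_water_ml * 3     # 3-hour heat exposure
print(hourly_water_ml, total_water_ml)   # 210 mL/hour, 630 mL over the session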
TAKEAWAY:
- The plasma creatinine level showed a modest increase from baseline to end-heating (difference, 0.10 mg/dL; P = .004) and at 2 hours post exposure (difference, 0.17 mg/dL; P < .001) in older adults facing heat exposure in the dry setting.
- The mean cystatin C levels also increased from baseline to end-heating by 0.29 mg/L (P = .01) and at 2 hours post heat exposure by 0.28 mg/L (P = .004) in older adults in the dry setting.
- The mean creatinine levels increased by only 0.06 mg/dL (P = .01) from baseline to 2 hours post exposure in older adults facing heat exposure in the humid setting.
- Young adults didn’t show any significant change in the plasma cystatin C levels during or after heat exposure; however, there was a modest increase in the plasma creatinine levels 2 hours after heat exposure (difference, 0.06 mg/dL; P = .004).
IN PRACTICE:
“These findings provide limited evidence that the heightened thermal strain in older adults during extreme heat may contribute to reduced kidney function,” the authors wrote.
SOURCE:
The study was led by Zachary J. McKenna, PhD, from the Department of Internal Medicine, University of Texas Southwestern Medical Center, Dallas, Texas, and was published online in JAMA.
LIMITATIONS:
The use of plasma-based markers of kidney function, a short laboratory-based exposure, and a small number of generally healthy participants were the main limitations that could affect the generalizability of this study’s findings to broader populations and real-world settings.
DISCLOSURES:
The National Institutes of Health and American Heart Association funded this study. Two authors declared receiving grants and nonfinancial support from several sources.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
How the New Vitamin D Guidelines Will, and Won’t, Change My Practice
Hi, everyone. I’m Dr. Kenny Lin. I am a family physician and associate director of the Lancaster General Hospital Family Medicine Residency, and I blog at Common Sense Family Doctor.
A few months ago, my health system added a clinical decision support function to our electronic health record to reduce inappropriate ordering of vitamin D levels. Clinicians are now required to select from a list of approved indications or diagnoses (including a history of vitamin D deficiency) before ordering the test.
Although I don’t know yet whether this process has had the desired effect, I felt that it was long overdue. Several years ago, I wrote an editorial that questioned the dramatic increase in vitamin D testing given the uncertainty about what level is adequate for good health and clinical trials showing that supplementing people with lower levels has no benefits for a variety of medical conditions. A more recent review of prospective studies of vitamin D supplements concluded that most correlations between vitamin D levels and outcomes in common and high-mortality conditions are unlikely to be causal.
A new Endocrine Society guideline recommends against routine measurement of vitamin D levels in healthy individuals. The guideline reinforces my current practice of not screening for vitamin D deficiency except in special situations, such as an individual with dark skin who works the night shift and rarely goes outdoors during daytime hours. But I haven’t been offering empirical vitamin D supplements to the four at-risk groups identified by the Endocrine Society: children, adults older than 75 years, pregnant patients, and adults with prediabetes. The evidence behind these recommendations merits a closer look.
In exclusively or primarily breastfed infants, I follow the American Academy of Pediatrics recommendation to prescribe a daily supplement containing 400 IU of vitamin D. However, the Endocrine Society found evidence from several studies conducted in other countries that continuing supplementation throughout childhood reduces the risk for rickets and possibly reduces the incidence of respiratory infections, with few adverse effects.
Many older women, and some older men, choose to take a calcium and vitamin D supplement for bone health, even though there is scant evidence that doing so prevents fractures in community-dwelling adults without osteoporosis. The Endocrine Society’s meta-analysis, however, found that 1000 adults aged 75 years or older who took an average of 900 IU of vitamin D daily for 2 years could expect to experience six fewer deaths than an identical group not taking supplements.
A typical prenatal vitamin contains 400 IU of vitamin D. Placebo-controlled trials reviewed by the Endocrine Society that gave an average of 2500 IU daily found statistically insignificant reductions in preeclampsia, intrauterine death, preterm birth, small-for-gestational-age birth, and neonatal deaths.
Finally, the Endocrine Society’s recommendation for adults with prediabetes was based on 11 trials (three conducted in the United States) that tested a daily average of 3500 IU and found a slightly lower risk for progression to diabetes (24 fewer diagnoses of type 2 diabetes per 1000 persons) in the group who took supplements.
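Converting the two absolute differences above into rough numbers needed to treat (a hedged back-of-envelope calculation, not figures reported by the guideline):

# 6 fewer deaths per 1000 adults >= 75 taking ~900 IU/day for 2 years
nnt_mortality = 1000 / 6
# 24 fewer type 2 diabetes diagnoses per 1000 adults with prediabetes taking ~3500 IU/day
nnt_diabetes = 1000 / 24
print(round(nnt_mortality), round(nnt_diabetes))   # roughly 167 and 42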
Of the four groups highlighted by the guideline, the strongest case for vitamin D supplements is in older adults — it’s hard to argue with lower mortality, even if the difference is small. Therefore, I will start suggesting that my patients over age 75 take a daily vitamin D supplement containing at least 800 IU if they aren’t already doing so.
On the other hand, I don’t plan to change my approach to pregnant patients (whose benefits in studies could have been due to chance), children after age 1 year (studies of children in other countries with different nutritional status may not apply to the United States), or adults with prediabetes (where we already have proven lifestyle interventions with much greater effects). In these cases, either I am unconvinced that the data support benefits for my patients, or I feel that the benefits of vitamin D supplements are small enough to be outweighed by potential harms, such as increased kidney stones.
Kenneth W. Lin, Associate Director, Family Medicine Residency Program, Lancaster General Hospital, Lancaster, Pennsylvania, has disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
New Criteria Distinguish Memory Disorder Often Misdiagnosed as Alzheimer’s
Proposed clinical criteria for a memory loss disorder that is often misdiagnosed as Alzheimer’s disease (AD) have been published.
The new criteria for limbic-predominant amnestic neurodegenerative syndrome (LANS) provide a framework for neurologists and other experts to classify the condition and offer a more precise diagnosis and potential treatments.
“In our clinical work, we see patients whose memory symptoms appear to mimic Alzheimer’s disease, but when you look at their brain imaging or biomarkers, it’s clear they don’t have Alzheimer’s. Until now, there has not been a specific medical diagnosis to point to, but now we can offer them some answers,” senior investigator David T. Jones, MD, said in a release.
The proposed criteria and the research behind them were published online in Brain Communications and will be presented at the Alzheimer’s Association International Conference in Philadelphia.
Already in Use
Predominant limbic degeneration has been linked to various underlying etiologies, older age, predominant impairment of episodic memory, and slow clinical progression, the investigators noted. However, they added, the neurologic syndrome associated with predominant limbic degeneration is undefined.
Developing clinical criteria and validating them “is critical to distinguish such a syndrome from those originating from neocortical degeneration, which may differ in underlying etiology, disease course, and therapeutic needs,” the investigators wrote.
The newly proposed clinical criteria apply to LANS, which is “highly associated with limbic-predominant age-related TDP-43 encephalopathy but also other pathologic entities.”
The criteria incorporate core, standard, and advanced features, including older age at evaluation, a mild clinical syndrome, disproportionate hippocampal atrophy, impaired semantic memory, limbic hypometabolism, absence of neocortical degeneration, and a low likelihood of neocortical tau, which together yield highest, high, moderate, and low degrees of diagnostic certainty.
“A detailed history of the clinical symptoms, which may be supported by neuropsychological testing, with the observation of disproportionate hippocampal atrophy and limbic degeneration on MRI/FDG yields a high confidence in a diagnosis of LANS, where the most likely symptom-driving proteinopathy is TDP-43 and not Alzheimer’s associated proteins,” the first author, Nick Corriveau-Lecavalier, PhD, assistant professor of neurology and psychology at Mayo Clinic, Rochester, Minnesota, told this news organization.
To validate the criteria, the investigators screened autopsied patients from Mayo Clinic and Alzheimer’s Disease Neuroimaging Initiative cohorts and applied the criteria to those with a predominant amnestic syndrome and those who had AD neuropathologic change, limbic-predominant age-related TDP-43 encephalopathy, or both pathologies at autopsy.
“The criteria effectively categorized these cases, with Alzheimer’s disease having the lowest likelihoods, limbic-predominant age-related TDP-43 encephalopathy patients having the highest likelihoods, and patients with both pathologies having intermediate likelihoods,” the investigators reported.
“Patients with high likelihoods had a milder and slower clinical course and more severe temporo-limbic degeneration compared to those with low likelihoods,” they added.
Dr. Corriveau-Lecavalier said the team is currently analyzing longitudinal cognitive and imaging trajectories in LANS over several years. “This will help us better understand how LANS and Alzheimer’s differ in their sequence of symptoms over time.”
It is important to understand that memory symptoms in old age are not “unequivocally” driven by Alzheimer’s and that LANS progresses more slowly and has a better prognosis than AD, he noted.
In addition, in vivo markers of TDP-43 are “on the horizon and can hopefully make their way to human research settings soon. This will help better understand the underlying molecular etiologies causing LANS and associated symptoms,” he said.
Dr. Corriveau-Lecavalier said the LANS criteria are ready for clinical use by experts in neurologic care. These criteria can be used to inform not only diagnosis but also prognosis, given that this syndrome is associated with slow, mild progression and a memory-dominant profile.
He added that “the new criteria are also routinely used in our practice to make decisions about anti-amyloid treatment eligibility.”
Commenting on the research for this news organization, Rebecca M. Edelmayer, PhD, Alzheimer’s Association senior director of scientific engagement, said the research “exemplifies the great need to develop objective criteria for diagnosis and staging of Alzheimer’s and all other types of dementia and to create an integrated biological and clinical staging scheme that can be used effectively by physicians.”
“Advances in biomarkers will help to differentiate all types of dementia when incorporated into the diagnostic workup, but until those tools are available, a more succinct clinical criteria for diagnosis can be used to support a more personalized medicine approach to treatment, care, and enrollment into clinical studies,” said Dr. Edelmayer, who wasn’t involved in the research.
The research was funded in part by the National Institutes of Health and by the Robert Wood Johnson Foundation, the Elsie & Marvin Dekelboum Family Foundation, the Liston Family Foundation, the Edson Family, the Gerald A. and Henrietta Rauenhorst Foundation, and the Foundation Dr Corinne Schuler. Dr. Corriveau-Lecavalier and Dr. Edelmayer had no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
Prognostication in Hospice Care: Challenges, Opportunities, and the Importance of Functional Status
Predicting life expectancy and providing an end-of-life diagnosis in hospice and palliative care is a challenge for most clinicians. Lack of training, limited communication skills, and relationships with patients are all contributing factors. These skills can improve when functional scoring tools are used in conjunction with the patient’s comorbidities and physical and psychological symptoms. The Palliative Performance Scale (PPS), Karnofsky Performance Scale (KPS), and Eastern Cooperative Oncology Group Performance Status Scale (ECOG) are commonly used functional scoring tools.
The PPS measures 5 functional dimensions including ambulation, activity level, ability to administer self-care, oral intake, and level of consciousness.1 It has been shown to be valid for a broad range of palliative care patients, including those with advanced cancer or life-threatening noncancer diagnoses in hospitals or hospice care.2 The scale, measured in 10% increments, runs from 100% (completely functional) to 0% (dead). A PPS ≤ 70% helps meet hospice eligibility criteria.
The KPS evaluates functional impairment and helps with prognostication. Developed in 1948 to evaluate a patient’s functional ability to tolerate chemotherapy, specifically in lung cancer, it has since been validated to predict mortality in older adults and in chronic disease populations.3,4 The KPS is also measured in 10% increments ranging from 100% (completely functional without assistance) to 0% (dead). A KPS ≤ 70% assists with hospice eligibility criteria (Table 1).5
Developed in 1974, the ECOG has been identified as one of the most important functional status tools in adult cancer care.6 It describes a cancer patient’s functional ability, evaluating their capacity for self-care and participation in daily activities.7 The ECOG is a 6-point scale; patients can receive scores ranging from 0 (fully active) to 5 (dead). An ECOG score of 4 (sometimes 3) is generally supportive of meeting hospice eligibility (Table 2).6
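To make these thresholds concrete, the minimal Python sketch below flags whether reported PPS, KPS, or ECOG values fall in the ranges described above as supportive of hospice eligibility. The function name and interface are illustrative only; the cutoffs come from the descriptions above, and eligibility itself rests on the full clinical picture, never on a score alone.

from typing import Optional

# Minimal sketch only. Thresholds taken from the text above: PPS <= 70%,
# KPS <= 70%, and ECOG >= 3 (an ECOG of 4, sometimes 3, supports eligibility).
# The function name and interface are illustrative, not an established tool.
def supports_hospice_eligibility(pps: Optional[int] = None,
                                 kps: Optional[int] = None,
                                 ecog: Optional[int] = None) -> bool:
    """Return True if any reported functional score falls in the supportive range."""
    if pps is not None and not 0 <= pps <= 100:
        raise ValueError("PPS runs from 100% (fully functional) to 0% (dead)")
    if kps is not None and not 0 <= kps <= 100:
        raise ValueError("KPS runs from 100% (fully functional) to 0% (dead)")
    if ecog is not None and not 0 <= ecog <= 5:
        raise ValueError("ECOG runs from 0 (fully active) to 5 (dead)")
    checks = []
    if pps is not None:
        checks.append(pps <= 70)
    if kps is not None:
        checks.append(kps <= 70)
    if ecog is not None:
        checks.append(ecog >= 3)
    return any(checks)

# A patient with ECOG 3 and KPS 50% falls in the supportive range:
print(supports_hospice_eligibility(kps=50, ecog=3))   # True
# A fully independent patient (ECOG 1, KPS 90%) does not:
print(supports_hospice_eligibility(kps=90, ecog=1))   # False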
CASE PRESENTATION
An 80-year-old patient was admitted to the hospice service at the Veterans Affairs Puget Sound Health Care System (VAPSHCS) community living center (CLC) in Tacoma, Washington, from a community-based acute care hospital. His medical history included prostate cancer with metastasis to his pelvis and type 2 diabetes mellitus, which was stable on oral medication. Six weeks earlier, the patient had reported a severe frontal headache that was not responding to over-the-counter analgesics. After 2 days with these symptoms, including a ground-level fall without injuries, he presented to the VAPSHCS emergency department (ED), where a complete neurological workup, including magnetic resonance imaging, revealed a left frontoparietal brain lesion measuring 4.2 cm × 3.4 cm × 4.2 cm.
The patient experienced a seizure during his ED evaluation and was admitted for treatment. He underwent a craniotomy in which most, but not all, of the lesion was successfully removed. Postoperatively, the patient exhibited right-sided neglect, gait instability, emotional lability, and cognitive communication disorder. The patient completed 15 of 20 planned radiation treatments but declined further radiation or chemotherapy. He decided to halt radiation treatments after being informed by the oncology service that the treatments would likely add only 1 to 2 months to his overall survival, which was < 6 months. The patient elected to focus his goals of care on comfort, dignity, and respect at the end of life and accepted recommendations to be placed into end-of-life hospice care. He was then transferred to the VAPSHCS CLC in Tacoma, Washington, for hospice care.
Upon admission, the patient weighed 94 kg, his vital signs were within reference range, and he reported no pain or headaches. His initial laboratory results revealed a hemoglobin of 13.2 g/dL, serum albumin of 3.6 g/dL, and hemoglobin A1c of 5.5%, all of which fell within normal reference ranges. The transferring medical team reported an ECOG score of 3 and a KPS score of 50%. The patient’s medications included scheduled dexamethasone, metformin, senna, levetiracetam, and as-needed midazolam nasal spray for breakthrough seizures. He also had as-needed acetaminophen for pain. He was alert, oriented ×3, and fully ambulatory but consistently used a 4-wheeled walker for safety because of gait instability.
After the patient’s first night, the hospice team met with him to discuss his understanding of his health issues. The patient appeared to have low health literacy but told the team, “I know I am dying.” He had completed written advance directives and a Portable Order for Life-Sustaining Treatment indicating that life-sustaining treatments, including cardiopulmonary resuscitation, supplemental mechanical feeding, or intubation, were not to be used to keep him alive.
At his first 90-day recertification, the patient had gained 8 kg, and laboratory results revealed a hemoglobin of 14.6 g/dL, serum albumin of 3.8 g/dL, and hemoglobin A1c of 6.1%. His ECOG score remained at 3, but his KPS score had increased to 60%. The patient exhibited no new neurologic symptoms or seizures and reported no headaches but had 2 ground-level falls without injury. On both occasions the patient chose not to use his walker to go to the bathroom because it was “too far from my bed.” Per VA policy, after discussions with the hospice team, he was recertified for 90 more days of hospice care. At the end of 6 months in the CLC, the patient’s weight remained stable, as did his complete blood count and comprehensive metabolic panel. He had 1 additional noninjurious ground-level fall and again reported no pain and no use of as-needed acetaminophen. His only medical complication was testing positive for COVID-19, but he remained asymptomatic. The patient was graduated from hospice care and referred to a nearby non-VA adult family home in the community after 180 days. At that time his ECOG score was 2 and his KPS score had increased to 70%.
DISCUSSION
Primary brain tumors account for about 2% of all malignant neoplasms in adults, and about half of these are gliomas. Glioblastoma multiforme, which is derived from neuroepithelial cells, is the most frequent and deadly primary malignant central nervous system tumor in adults.8 About 50% of patients with glioblastomas are aged ≥ 65 years at diagnosis.9 A retrospective study of Centers for Medicare and Medicaid Services claims data paired with the Surveillance, Epidemiology, and End Results database indicated a median survival of 4 months for patients with glioblastoma multiforme aged > 65 years, including all treatment modalities.10 Surgical resection combined with radiation and chemotherapy offers the best prognosis for the preservation of neurologic function.11 However, comorbidities, adverse drug effects, and the potential for postoperative complications pose significant risks, especially for older patients. Ultimately, goals of care conversations and advance directives play a very important role in weighing benefits against risks with this malignancy.
Our patient was aged 80 years and had previously been diagnosed with metastatic prostate malignancy. His goals of care focused on spending time with his friends, leaving his room to eat in the facility dining area, and continuing his daily walks. He remained clear that he did not want his care team to institute life-sustaining treatments to be kept alive and felt the information regarding the risks vs benefits of accepting chemotherapy was not aligned with his goals of care. Over the 6 months that he received hospice care, he gained weight, improved his hemoglobin and serum albumin levels, and ambulated with the use of a 4-wheeled walker. As the patient exhibited no functional decline or new comorbidities and his functional status improved, the clinical staff felt he no longer needed hospice services. The patient had an ECOG score of 2 and a KPS score of 70% at his hospice graduation.
Medical prognostication is one of the biggest challenges clinicians face. Clinicians are generally “over prognosticators,” and their thoughts tend to be based on the patient relationship, overall experiences in health care, and the desire to treat and cure patients.12 In hospice we are asked to define the usual, normal, or expected course of a disease, but what does that mean? Although metastatic malignancies usually have a predictable course compared with diagnoses such as dementia, chronic obstructive pulmonary disease, or congestive heart failure, the challenges of improving prognostic ability and predicting disease course continue.13-15 Focusing on functional status, goals of care, and comorbidities is key to improving prognosis. Given the challenge, we find the PPS, KPS, and ECOG scales important tools.
When prognosticating, we attempt to define quantity and quality of life (which our patients must define independently or through the voice of their surrogate) and their ability to perform daily activities. Quality of life in patients with glioblastoma is progressively and significantly impacted by the emergence of debilitating neurologic symptoms arising from infiltrative tumor growth into functionally intact brain tissue, which restricts and disrupts normal day-to-day activities. Functional status, however, plays a significant role in helping the hospice team improve the accuracy of its overall prognosis.
Conclusions
This case study illustrates the difficulty of prognostication even in a patient with severely morbid disease, a history of metastatic prostate cancer, and advanced age. Although a diagnosis may be concerning, documenting a patient’s status using functional scales prior to hospice admission and during the recertification process is helpful in prognostication. Doing so gives health care professionals an accepted medical standard to apply regardless of the patient’s diagnosis. The expression “the disease does not read the textbook” may serve as a helpful reminder in talking with patients and their families. This is important because most patients’ clinical courses differ, and using performance status scales may help improve prognostic skills.
1. Cleary TA. The Palliative Performance Scale (PPSv2) Version 2. In: Downing GM, ed. Medical Care of the Dying. 4th ed. Victoria Hospice Society, Learning Centre for Palliative Care; 2006:120.
2. Palliative Performance Scale. ePrognosis, University of California San Francisco. Accessed June 14, 2024. https://eprognosis.ucsf.edu/pps.php
3. Karnofsky DA, Burchenal JH. The Clinical Evaluation of Chemotherapeutic Agents in Cancer. In: MacLeod CM, ed. Evaluation of Chemotherapeutic Agents. Columbia University Press; 1949:191-205.
4. Khalid MA, Achakzai IK, Ahmed Khan S, et al. The use of Karnofsky Performance Status (KPS) as a predictor of 3 month post discharge mortality in cirrhotic patients. Gastroenterol Hepatol Bed Bench. 2018;11(4):301-305.
5. Karnofsky Performance Scale. US Dept of Veterans Affairs. Accessed June 14, 2024. https://www.hiv.va.gov/provider/tools/karnofsky-performance-scale.asp
6. Mischel A-M, Rosielle DA. Eastern Cooperative Oncology Group Performance Status. Palliative Care Network of Wisconsin. December 10, 2021. Accessed June 14, 2024. https://www.mypcnow.org/fast-fact/eastern-cooperative-oncology-group-performance-status/
7. Oken MM, Creech RH, Tormey DC, et al. Toxicity and response criteria of the Eastern Cooperative Oncology Group. Am J Clin Oncol. 1982;5(6):649-655.
8. Nizamutdinov D, Stock EM, Dandashi JA, et al. Prognostication of survival outcomes in patients diagnosed with glioblastoma. World Neurosurg. 2018;109:e67-e74. doi:10.1016/j.wneu.2017.09.104
9. Kita D, Ciernik IF, Vaccarella S, et al. Age as a predictive factor in glioblastomas: population-based study. Neuroepidemiology. 2009;33(1):17-22. doi:10.1159/000210017
10. Jordan JT, Gerstner ER, Batchelor TT, Cahill DP, Plotkin SR. Glioblastoma care in the elderly. Cancer. 2016;122(2):189-197. doi:10.1002/cncr.29742
11. Brown NF, Ottaviani D, Tazare J, et al. Survival outcomes and prognostic factors in glioblastoma. Cancers (Basel). 2022;14(13):3161. doi:10.3390/cancers14133161
12. Christakis NA. Death Foretold: Prophecy and Prognosis in Medical Care. University of Chicago Press; 2000.
13. Weissman DE. Determining Prognosis in Advanced Cancer. Palliative Care Network of Wisconsin. January 28, 2019. Accessed June 14, 2024. https://www.mypcnow.org/fast-fact/determining-prognosis-in-advanced-cancer/
14. Childers JW, Arnold R, Curtis JR. Prognosis in End-Stage COPD. Palliative Care Network of Wisconsin. February 11, 2019. Accessed June 14, 2024. https://www.mypcnow.org/fast-fact/prognosis-in-end-stage-copd/
15. Reisfield GM, Wilson GR. Prognostication in Heart Failure. Palliative Care Network of Wisconsin. February 11, 2019. Accessed June 14, 2024. https://www.mypcnow.org/fast-fact/prognostication-in-heart-failure/
Predicting life expectancy and providing an end-of-life diagnosis in hospice and palliative care is a challenge for most clinicians. Lack of training, limited communication skills, and relationships with patients are all contributing factors. These skills can improve with the use of functional scoring tools in conjunction with the patient’s comorbidities and physical/psychological symptoms. The Palliative Performance Scale (PPS), Karnofsky Performance Scale (KPS), and Eastern Cooperative Oncology Group Performance Status Scale (ECOG) are commonly used functional scoring tools.
The PPS measures 5 functional dimensions including ambulation, activity level, ability to administer self-care, oral intake, and level of consciousness.1 It has been shown to be valid for a broad range of palliative care patients, including those with advanced cancer or life-threatening noncancer diagnoses in hospitals or hospice care.2 The scale, measured in 10% increments, runs from 100% (completely functional) to 0% (dead). A PPS ≤ 70% helps meet hospice eligibility criteria.
The KPS evaluates functional impairment and helps with prognostication. Developed in 1948, it evaluates a patient’s functional ability to tolerate chemotherapy, specifically in lung cancer,and has since been validated to predict mortality across older adults and in chronic disease populations.3,4 The KPS is also measured in 10% increments ranging from 100% (completely functional without assistance) to 0% (dead). A KPS ≤ 70% assists with hospice eligibility criteria (Table 1).5
Developed in 1974, the ECOG has been identified as one of the most important functional status tools in adult cancer care.6 It describes a cancer patient’s functional ability, evaluating their ability to care for oneself and participate in daily activities.7 The ECOG is a 6-point scale; patients can receive scores ranging from 0 (fully active) to 5 (dead). An ECOG score of 4 (sometimes 3) is generally supportive of meeting hospice eligibility (Table 2).6
CASE Presentation
An 80-year-old patient was admitted to the hospice service at the Veterans Affairs Puget Sound Health Care System (VAPSHCS) community living center (CLC) in Tacoma, Washington, from a community-based acute care hospital. His medical history included prostate cancer with metastasis to his pelvis and type 2 diabetes mellitus, which was stable with treatment with oral medication. Six weeks earlier the patient reported a severe frontal headache that was not responding to over-the-counter analgesics. After 2 days with these symptoms, including a ground-level fall without injuries, he presented to the VAPSHCS emergency department (ED) where a complete neurological examination, including magnetic resonance imaging, revealed a left frontoparietal brain lesion that was 4.2 cm × 3.4 cm × 4.2 cm.
The patient experienced a seizure during his ED evaluation and was admitted for treatment. He underwent a craniotomy where most, but not all the lesions were successfully removed. Postoperatively, the patient exhibited right-sided neglect, gait instability, emotional lability, and cognitive communication disorder. The patient completed 15 of 20 planned radiation treatments but declined further radiation or chemotherapy. The patient decided to halt radiation treatments after being informed by the oncology service that the treatments would likely only add 1 to 2 months to his overall survival, which was < 6 months. The patient elected to focus his goals of care on comfort, dignity, and respect at the end of life and accepted recommendations to be placed into end-of-life hospice care. He was then transferred to the VAPSHCS CLC in Tacoma, Washington, for hospice care.
Upon admission, the patient weighed 94 kg, his vital signs were within reference range, and he reported no pain or headaches. His initial laboratory results revealed a 13.2 g/dL hemoglobin, 3.6 g/dL serum albumin, and a 5.5% hemoglobin A1c, all of which fall into a normal reference range. He had a reported ECOG score of 3 and a KPS score of 50% by the transferring medical team. The patient’s medications included scheduled dexamethasone, metformin, senna, levetiracetam, and as-needed midazolam nasal spray for breakthrough seizures. He also had as-needed acetaminophen for pain. He was alert, oriented ×3, and fully ambulatory but continuously used a 4-wheeled walker for safety and gait instability.
After the patient’s first night, the hospice team met with him to discuss his understanding of his health issues. The patient appeared to have low health literacy but told the team, “I know I am dying.” He had completed written advance directives and a Portable Order for Life-Sustaining Treatment indicating that life-sustaining treatments, including cardiopulmonary resuscitation, supplemental mechanical feeding, or intubation, were not to be used to keep him alive.
At his first 90-day recertification, the patient had gained 8 kg and laboratory results revealed a 14.6 g/dL hemoglobin, 3.8 g/dL serum albumin, and a 6.1% hemoglobin A1c. His ECOG score remained at 3, but his KPS score had increased to 60%. The patient exhibited no new neurologic symptoms or seizures and reported no headaches but had 2 ground-level falls without injury. On both occasions the patient chose not to use his walker to go to the bathroom because it was “too far from my bed.” Per VA policy, after discussions with the hospice team, he was recertified for 90 more days of hospice care. At the end of 6 months in CLC, the patient’s weight remained stable, as did his complete blood count and comprehensive medical panel. He had 1 additional noninjurious ground-level fall and again reported no pain and no use of as-needed acetaminophen. His only medical complication was testing positive for COVID-19, but he remained asymptomatic. The patient was graduated from hospice care and referred to a nearby non-VA adult family home in the community after 180 days. At that time his ECOG score was 2 and his KPS score had increased to 70%.
DISCUSSION
Primary brain tumors account for about 2% of all malignant neoplasms in adults. About half of them represent gliomas. Glioblastoma multiforme derived from neuroepithelial cells is the most frequent and deadly primary malignant central nervous system tumor in adults.8 About 50% of patients with glioblastomas are aged ≥ 65 years at diagnosis.9 A retrospective study of Centers for Medicare and Medicaid Services claims data paired with the Surveillance, Epidemiology, and End Results database indicated a median survival of 4 months for patients with glioblastoma multiforme aged > 65 years, including all treatment modalities.10 Surgical resection combined with radiation and chemotherapy offers the best prognosis for the preservation of neurologic function.11 However, comorbidities, adverse drug effects, and the potential for postoperative complications pose significant risks, especially for older patients. Ultimately, goals of care conversations and advance directives play a very important role in evaluating benefits vs risks with this malignancy.
Our patient was aged 80 years and had previously been diagnosed with metastatic prostate malignancy. His goals of care focused on spending time with his friends, leaving his room to eat in the facility dining area, and continuing his daily walks. He remained clear that he did not want his care team to institute life-sustaining treatments to be kept alive and felt the information regarding the risks vs benefits of accepting chemotherapy was not aligned with his goals of care. Over the 6 months that he received hospice care, he gained weight, improved his hemoglobin and serum albumin levels, and ambulated with the use of a 4-wheeled walker. As the patient exhibited no functional decline or new comorbidities and his functional status improved, the clinical staff felt he no longer needed hospice services. The patient had an ECOG score of 2 and a KPS score of 70% at his hospice graduation.
Medical prognostication is one of the biggest challenges clinicians face. Clinicians are generally “over prognosticators,” and their thoughts tend to be based on the patient relationship, overall experiences in health care, and desire to treat and cure patients.12 In hospice we are asked to define the usual, normal, or expected course of a disease, but what does that mean? Although metastatic malignancies usually have a predictable course in comparison to diagnoses such as dementia, chronic obstructive pulmonary disease, or congestive heart failure, the challenges to improve prognostic ability andpredict disease course continue.13-15 Focusing on functional status, goals of care, and comorbidities are keys to helping with prognosis. Given the challenge, we find the PPS, KPS, and ECOG scales important tools.
When prognosticating, we attempt to define quantity and quality of life (which our patients must define independently or from the voice of their surrogate) and their ability to perform daily activities. Quality of life in patients with glioblastoma is progressively and significantly impacted due to the emergence of debilitating neurologic symptoms arising from infiltrative tumor growth into functionally intact brain tissue that restricts and disrupts normal day-to-day activities. However, functional status plays a significant role in helping the hospice team improve its overall prognosis.
Conclusions
This case study illustrates the difficulty that comes with prognostication(s) despite a patient's severely morbid disease, history of metastatic prostate cancer, and advanced age. Although a diagnosis may be concerning, documenting a patient’s status using functional scales prior to hospice admission and during the recertification process is helpful in prognostication. Doing so will allow health care professionals to have an accepted medical standard to use regardless how distinct the patient's diagnosis. The expression, “as the disease does not read the textbook,” may serve as a helpful reminder in talking with patients and their families. This is important as most patient’s clinical disease courses are different and having the opportunity to use performance status scales may help improve prognostic skills.
Predicting life expectancy and providing an end-of-life diagnosis in hospice and palliative care is a challenge for most clinicians. Lack of training, limited communication skills, and relationships with patients are all contributing factors. These skills can improve with the use of functional scoring tools in conjunction with the patient’s comorbidities and physical/psychological symptoms. The Palliative Performance Scale (PPS), Karnofsky Performance Scale (KPS), and Eastern Cooperative Oncology Group Performance Status Scale (ECOG) are commonly used functional scoring tools.
The PPS measures 5 functional dimensions including ambulation, activity level, ability to administer self-care, oral intake, and level of consciousness.1 It has been shown to be valid for a broad range of palliative care patients, including those with advanced cancer or life-threatening noncancer diagnoses in hospitals or hospice care.2 The scale, measured in 10% increments, runs from 100% (completely functional) to 0% (dead). A PPS ≤ 70% helps meet hospice eligibility criteria.
The KPS evaluates functional impairment and helps with prognostication. Developed in 1948, it evaluates a patient’s functional ability to tolerate chemotherapy, specifically in lung cancer,and has since been validated to predict mortality across older adults and in chronic disease populations.3,4 The KPS is also measured in 10% increments ranging from 100% (completely functional without assistance) to 0% (dead). A KPS ≤ 70% assists with hospice eligibility criteria (Table 1).5
Developed in 1974, the ECOG has been identified as one of the most important functional status tools in adult cancer care.6 It describes a cancer patient’s functional ability, evaluating their ability to care for oneself and participate in daily activities.7 The ECOG is a 6-point scale; patients can receive scores ranging from 0 (fully active) to 5 (dead). An ECOG score of 4 (sometimes 3) is generally supportive of meeting hospice eligibility (Table 2).6
CASE Presentation
An 80-year-old patient was admitted to the hospice service at the Veterans Affairs Puget Sound Health Care System (VAPSHCS) community living center (CLC) in Tacoma, Washington, from a community-based acute care hospital. His medical history included prostate cancer with metastasis to his pelvis and type 2 diabetes mellitus, which was stable with treatment with oral medication. Six weeks earlier the patient reported a severe frontal headache that was not responding to over-the-counter analgesics. After 2 days with these symptoms, including a ground-level fall without injuries, he presented to the VAPSHCS emergency department (ED) where a complete neurological examination, including magnetic resonance imaging, revealed a left frontoparietal brain lesion that was 4.2 cm × 3.4 cm × 4.2 cm.
The patient experienced a seizure during his ED evaluation and was admitted for treatment. He underwent a craniotomy where most, but not all the lesions were successfully removed. Postoperatively, the patient exhibited right-sided neglect, gait instability, emotional lability, and cognitive communication disorder. The patient completed 15 of 20 planned radiation treatments but declined further radiation or chemotherapy. The patient decided to halt radiation treatments after being informed by the oncology service that the treatments would likely only add 1 to 2 months to his overall survival, which was < 6 months. The patient elected to focus his goals of care on comfort, dignity, and respect at the end of life and accepted recommendations to be placed into end-of-life hospice care. He was then transferred to the VAPSHCS CLC in Tacoma, Washington, for hospice care.
Upon admission, the patient weighed 94 kg, his vital signs were within reference range, and he reported no pain or headaches. His initial laboratory results revealed a 13.2 g/dL hemoglobin, 3.6 g/dL serum albumin, and a 5.5% hemoglobin A1c, all of which fall into a normal reference range. He had a reported ECOG score of 3 and a KPS score of 50% by the transferring medical team. The patient’s medications included scheduled dexamethasone, metformin, senna, levetiracetam, and as-needed midazolam nasal spray for breakthrough seizures. He also had as-needed acetaminophen for pain. He was alert, oriented ×3, and fully ambulatory but continuously used a 4-wheeled walker for safety and gait instability.
After the patient’s first night, the hospice team met with him to discuss his understanding of his health issues. The patient appeared to have low health literacy but told the team, “I know I am dying.” He had completed written advance directives and a Portable Order for Life-Sustaining Treatment indicating that life-sustaining treatments, including cardiopulmonary resuscitation, supplemental mechanical feeding, or intubation, were not to be used to keep him alive.
At his first 90-day recertification, the patient had gained 8 kg and laboratory results revealed a 14.6 g/dL hemoglobin, 3.8 g/dL serum albumin, and a 6.1% hemoglobin A1c. His ECOG score remained at 3, but his KPS score had increased to 60%. The patient exhibited no new neurologic symptoms or seizures and reported no headaches but had 2 ground-level falls without injury. On both occasions the patient chose not to use his walker to go to the bathroom because it was “too far from my bed.” Per VA policy, after discussions with the hospice team, he was recertified for 90 more days of hospice care. At the end of 6 months in CLC, the patient’s weight remained stable, as did his complete blood count and comprehensive medical panel. He had 1 additional noninjurious ground-level fall and again reported no pain and no use of as-needed acetaminophen. His only medical complication was testing positive for COVID-19, but he remained asymptomatic. The patient was graduated from hospice care and referred to a nearby non-VA adult family home in the community after 180 days. At that time his ECOG score was 2 and his KPS score had increased to 70%.
DISCUSSION
Primary brain tumors account for about 2% of all malignant neoplasms in adults. About half of them represent gliomas. Glioblastoma multiforme derived from neuroepithelial cells is the most frequent and deadly primary malignant central nervous system tumor in adults.8 About 50% of patients with glioblastomas are aged ≥ 65 years at diagnosis.9 A retrospective study of Centers for Medicare and Medicaid Services claims data paired with the Surveillance, Epidemiology, and End Results database indicated a median survival of 4 months for patients with glioblastoma multiforme aged > 65 years, including all treatment modalities.10 Surgical resection combined with radiation and chemotherapy offers the best prognosis for the preservation of neurologic function.11 However, comorbidities, adverse drug effects, and the potential for postoperative complications pose significant risks, especially for older patients. Ultimately, goals of care conversations and advance directives play a very important role in evaluating benefits vs risks with this malignancy.
Our patient was aged 80 years and had previously been diagnosed with metastatic prostate malignancy. His goals of care focused on spending time with his friends, leaving his room to eat in the facility dining area, and continuing his daily walks. He remained clear that he did not want his care team to institute life-sustaining treatments to be kept alive and felt the information regarding the risks vs benefits of accepting chemotherapy was not aligned with his goals of care. Over the 6 months that he received hospice care, he gained weight, improved his hemoglobin and serum albumin levels, and ambulated with the use of a 4-wheeled walker. As the patient exhibited no functional decline or new comorbidities and his functional status improved, the clinical staff felt he no longer needed hospice services. The patient had an ECOG score of 2 and a KPS score of 70% at his hospice graduation.
Medical prognostication is one of the biggest challenges clinicians face. Clinicians are generally “over prognosticators,” and their thoughts tend to be based on the patient relationship, overall experiences in health care, and desire to treat and cure patients.12 In hospice we are asked to define the usual, normal, or expected course of a disease, but what does that mean? Although metastatic malignancies usually have a predictable course in comparison to diagnoses such as dementia, chronic obstructive pulmonary disease, or congestive heart failure, the challenges to improve prognostic ability andpredict disease course continue.13-15 Focusing on functional status, goals of care, and comorbidities are keys to helping with prognosis. Given the challenge, we find the PPS, KPS, and ECOG scales important tools.
When prognosticating, we attempt to define quantity and quality of life (which our patients must define themselves or through the voice of a surrogate) and the ability to perform daily activities. Quality of life in patients with glioblastoma is progressively and significantly impacted by debilitating neurologic symptoms that arise as infiltrative tumor growth extends into functionally intact brain tissue, restricting and disrupting normal day-to-day activities. However, functional status plays a significant role in helping the hospice team improve its overall prognostication.
Conclusions
This case study illustrates the difficulty of prognostication, even in a patient with a severely morbid disease, a history of metastatic prostate cancer, and advanced age. Although a diagnosis may be concerning, documenting a patient’s status using functional scales prior to hospice admission and during the recertification process is helpful in prognostication. Doing so gives health care professionals an accepted medical standard to use regardless of how distinct the patient’s diagnosis may be. The expression “the disease does not read the textbook” may serve as a helpful reminder in talking with patients and their families. This is important because most patients’ clinical disease courses differ, and using performance status scales may help improve prognostic skills.
1. Cleary TA. The Palliative Performance Scale (PPSv2) Version 2. In: Downing GM, ed. Medical Care of the Dying. 4th ed. Victoria Hospice Society, Learning Centre for Palliative Care; 2006:120.
2. Palliative Performance Scale. ePrognosis, University of California San Francisco. Accessed June 14, 2024. https://eprognosis.ucsf.edu/pps.php
3. Karnofsky DA, Burchenal JH. The Clinical Evaluation of Chemotherapeutic Agents in Cancer. In: MacLeod CM, ed. Evaluation of Chemotherapeutic Agents. Columbia University Press; 1949:191-205.
4. Khalid MA, Achakzai IK, Ahmed Khan S, et al. The use of Karnofsky Performance Status (KPS) as a predictor of 3 month post discharge mortality in cirrhotic patients. Gastroenterol Hepatol Bed Bench. 2018;11(4):301-305.
5. Karnofsky Performance Scale. US Dept of Veterans Affairs. Accessed June 14, 2024. https://www.hiv.va.gov/provider/tools/karnofsky-performance-scale.asp
6. Mischel A-M, Rosielle DA. Eastern Cooperative Oncology Group Performance Status. Palliative Care Network of Wisconsin. December 10, 2021. Accessed June 14, 2024. https://www.mypcnow.org/fast-fact/eastern-cooperative-oncology-group-performance-status/
7. Oken MM, Creech RH, Tormey DC, et al. Toxicity and response criteria of the Eastern Cooperative Oncology Group. Am J Clin Oncol. 1982;5(6):649-655.
8. Nizamutdinov D, Stock EM, Dandashi JA, et al. Prognostication of survival outcomes in patients diagnosed with glioblastoma. World Neurosurg. 2018;109:e67-e74. doi:10.1016/j.wneu.2017.09.104
9. Kita D, Ciernik IF, Vaccarella S, et al. Age as a predictive factor in glioblastomas: population-based study. Neuroepidemiology. 2009;33(1):17-22. doi:10.1159/000210017
10. Jordan JT, Gerstner ER, Batchelor TT, Cahill DP, Plotkin SR. Glioblastoma care in the elderly. Cancer. 2016;122(2):189-197. doi:10.1002/cncr.29742
11. Brown NF, Ottaviani D, Tazare J, et al. Survival outcomes and prognostic factors in glioblastoma. Cancers (Basel). 2022;14(13):3161. doi:10.3390/cancers14133161
12. Christakis NA. Death Foretold: Prophecy and Prognosis in Medical Care. University of Chicago Press; 2000.
13. Weissman DE. Determining Prognosis in Advanced Cancer. Palliative Care Network of Wisconsin. January 28, 2019. Accessed June 14, 2024. https://www.mypcnow.org/fast-fact/determining-prognosis-in-advanced-cancer/
14. Childers JW, Arnold R, Curtis JR. Prognosis in End-Stage COPD. Palliative Care Network of Wisconsin. February 11, 2019. Accessed June 14, 2024. https://www.mypcnow.org/fast-fact/prognosis-in-end-stage-copd/
15. Reisfield GM, Wilson GR. Prognostication in Heart Failure. Palliative Care Network of Wisconsin. February 11, 2019. Accessed June 14, 2024. https://www.mypcnow.org/fast-fact/prognostication-in-heart-failure/