Deprescribe Low-Value Meds to Reduce Polypharmacy Harms
VANCOUVER, BRITISH COLUMBIA — While polypharmacy is inevitable for patients with multiple chronic diseases, not all medications improve patient-oriented outcomes, members of the Patients, Experience, Evidence, Research (PEER) team, a group of Canadian primary care professionals who develop evidence-based guidelines, told attendees at the Family Medicine Forum (FMF) 2024.
In a thought-provoking presentation called “Axe the Rx: Deprescribing Chronic Medications with PEER,” the panelists gave examples of medications that may be safely stopped or tapered, particularly for older adults “whose pill bag is heavier than their lunch bag.”
Curbing Cardiovascular Drugs
The 2021 Canadian Cardiovascular Society Guidelines for the Management of Dyslipidemia for the Prevention of Cardiovascular Disease in Adults call for reaching an LDL-C < 1.8 mmol/L in secondary cardiovascular prevention by potentially adding on medical therapies such as proprotein convertase subtilisin/kexin type 9 inhibitors or ezetimibe or both if that target is not reached with the maximal dosage of a statin.
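For readers who think in mg/dL rather than mmol/L, the guideline target converts via the standard factor of about 38.67 mg/dL per mmol/L. A minimal sketch (the function name is illustrative):

```python
# Convert an LDL-C value from mmol/L to mg/dL.
# The factor 38.67 follows from cholesterol's molar mass (~386.65 g/mol):
# 1 mmol/L ≈ 38.67 mg/dL.
def ldl_mmol_to_mgdl(mmol_per_l: float) -> float:
    return mmol_per_l * 38.67

# The secondary-prevention target of < 1.8 mmol/L corresponds
# to roughly < 70 mg/dL.
print(round(ldl_mmol_to_mgdl(1.8), 1))  # → 69.6
```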
But family physicians do not need to follow this guidance for their patients who have had a myocardial infarction, said Ontario family physician Jennifer Young, MD, a physician advisor in the Canadian College of Family Physicians’ Knowledge Experts and Tools Program.
Treating to below 1.8 mmol/L “means lab testing for the patients,” Young told this news organization. “It means increasing doses [of a statin] to try and get to that level.” If the patient is already on the highest dose of a statin, it means adding other medications that lower cholesterol.
“If that was translating into better outcomes like [preventing] death and another heart attack, then all of that extra effort would be worth it,” said Young. “But we don’t have evidence that it actually does have a benefit for outcomes like death and repeated heart attacks,” compared with putting them on a high dose of a potent statin.
Tapering Opioids
Before placing patients on an opioid taper, clinicians should first assess them for opioid use disorder (OUD), said Jessica Kirkwood, MD, assistant professor of family medicine at the University of Alberta in Edmonton, Canada. She suggested using the Prescription Opioid Misuse Index questionnaire to do so.
Clinicians should be much more careful in initiating a taper with patients with OUD, said Kirkwood. They must ensure that these patients are motivated to discontinue their opioids. “We’re losing 21 Canadians a day to the opioid crisis. We all know that cutting someone off their opioids and potentially having them seek opioids elsewhere through illicit means can be fatal.”
In addition, clinicians should spend more time counseling patients with OUD than those without, Kirkwood continued. They must explain to these patients how they are being tapered (eg, the intervals and doses) and highlight the benefits of a taper, such as reduced constipation. Opioid agonist therapy (such as methadone or buprenorphine) can be considered in these patients.
Some research has pointed to the importance of patient motivation as a factor in the success of opioid tapers, noted Kirkwood.
Deprescribing Benzodiazepines
Benzodiazepine receptor agonists, too, often can be deprescribed. These drugs should not be prescribed to promote sleep on a long-term basis. Yet clinicians commonly encounter patients who have been taking them for more than a year, said pharmacist Betsy Thomas, assistant adjunct professor of family medicine at the University of Alberta.
The medications “are usually fairly effective for the first couple of weeks to about a month, and then the benefits start to decrease, and we start to see more harms,” she said.
Some of the harms that have been associated with continued use of benzodiazepine receptor agonists include delayed reaction time and impaired cognition, which can affect the ability to drive, the risk for falls, and the risk for hip fractures, she noted. Some research suggests that these drugs should not be used to treat insomnia in patients aged 65 years or older.
Clinicians should encourage tapering the use of benzodiazepine receptor agonists to minimize dependence and transition patients to nonpharmacologic approaches such as cognitive behavioral therapy to manage insomnia, she said. A recent study demonstrated the efficacy of the intervention, and Thomas suggested that family physicians visit the mysleepwell.ca website for more information.
Young, Kirkwood, and Thomas reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM FMF 2024
As Populations Age, Occam’s Razor Loses Its Diagnostic Edge
The principle of parsimony, often referred to as “Occam’s razor,” favors a unifying explanation over multiple ones, as long as both explain the data equally well. This heuristic, widely used in medical practice, advocates for simpler explanations rather than complex theories. However, its application in modern medicine has sparked debate.
“Hickam’s dictum,” a counterargument to Occam’s razor, asserts that patients — especially as populations grow older and frailer — can simultaneously have multiple, unrelated diagnoses. These contrasting perspectives on clinical reasoning, balancing diagnostic simplicity and complexity, are both used in daily medical practice.
But are these two axioms truly in conflict, or is this a false dichotomy?
Occam’s Razor and Simple Diagnoses
Interpersonal variability in diagnostic approaches, shaped by the subjective nature of many judgments, complicates the formal evaluation of diagnostic parsimony (Occam’s razor). Indirect evidence suggests that prioritizing simplicity in diagnosis can result in under-detection of secondary conditions, particularly in patients with chronic illnesses.
For example, older patients with a known chronic illness were found to have a 30%-60% lower likelihood of being treated for an unrelated secondary diagnosis than matched peers without the chronic condition. Other studies indicate that a readily available, simple diagnosis can lead clinicians to prematurely close their diagnostic reasoning, overlooking other significant illnesses.
Beyond Hickam’s Dictum and Occam’s Razor
A recent study explored the phenomenon of multiple diagnoses by examining the supposed conflict between Hickam’s dictum and Occam’s razor, as well as the ambiguities in how they are interpreted and used by physicians in clinical reasoning.
Part 1: Researchers identified articles on PubMed related to Hickam’s dictum or conflicting with Occam’s razor, categorizing instances into four models of Hickam’s dictum:
1. Incidentaloma: An asymptomatic condition discovered accidentally.
2. Preexisting diagnosis: A known condition in the patient’s medical history.
3. Causally related disease: A complication, association, epiphenomenon, or underlying cause connected to the primary diagnosis.
4. Coincidental and independent disease: A symptomatic condition unrelated to the primary diagnosis.
Part 2: Researchers analyzed 220 case records from Massachusetts General Hospital, Boston, and clinical problem-solving reports published in The New England Journal of Medicine between 2017 and 2023. In every case, the final diagnosis was a unifying one.
Part 3: In an online survey of 265 physicians, 79% identified coincidental symptomatic conditions (category 4) as the least likely type of multiple diagnoses. Preexisting conditions (category 2) emerged as the most common, reflecting the tendency to add new diagnoses to a patient’s existing health profile. Almost one third of instances referencing Hickam’s dictum or violations of Occam’s razor fell into category 2.
Causally related diseases (category 3) were probabilistically dependent, meaning that the presence of one condition increased the likelihood of the other, based on the strength (often unknown) of the causal relationship.
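The probabilistic dependence described for category 3 can be made concrete with a small numerical illustration (all figures below are hypothetical, not taken from the study):

```python
# Illustrative probabilities for two conditions A and B, e.g., a primary
# disease and a causally related complication. All numbers are made up.
p_a = 0.10          # P(A): prevalence of condition A
p_b = 0.05          # P(B): prevalence of condition B
p_b_given_a = 0.30  # P(B|A): B is more likely once A is present

# Conditions are probabilistically dependent when P(B|A) != P(B);
# a causal link typically makes P(B|A) > P(B).
assert p_b_given_a > p_b

# Probability that A and B co-occur, under dependence vs. independence:
p_joint_dependent = p_a * p_b_given_a    # P(A) * P(B|A)
p_joint_independent = p_a * p_b          # P(A) * P(B)

# With these numbers, co-occurrence is 6 times more likely than
# independence alone would predict.
print(round(p_joint_dependent / p_joint_independent, 2))  # → 6.0
```

This is why causally linked diagnoses co-occurring is unsurprising: the causal relationship inflates the joint probability relative to chance.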
Practical Insights
The significant finding of this work was that multiple diagnoses occur in predictable patterns, informed by causal connections between conditions, symptom onset timing, and likelihood. The principle of common causation supports the search for a unifying diagnosis for coincidental symptoms. It is not surprising that causally related phenomena often co-occur, as reflected by the fact that 40% of multiple diagnoses in the study’s first part were causally linked.
Thus, understanding multiple diagnoses goes beyond Hickam’s dictum and Occam’s razor. It requires not only identifying diseases but also examining their causal relationships and the timing of symptom onset. A unifying diagnosis is not equivalent to a single diagnosis; rather, it represents a causal pathway linking underlying pathologic changes to acute presentations.
This story was translated from Univadis Italy using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Two Brain Stim Methods Better Than One for Depression?
TOPLINE:
METHODOLOGY:
- Researchers conducted a double-blind, sham-controlled randomized clinical trial from 2021 to 2023 at three hospitals in China with 240 participants with MDD (mean age, 32.5 years; 58% women).
- Participants received active tDCS + active rTMS, sham tDCS + active rTMS, active tDCS + sham rTMS, or sham tDCS + sham rTMS with treatments administered five times per week for 2 weeks.
- tDCS was administered in 20-minute sessions using a 2-mA direct current stimulator, whereas rTMS involved 1600 pulses of 10-Hz stimulation targeting the left dorsolateral prefrontal cortex. Sham treatments used a pseudostimulation coil and only emitted sound.
- The primary outcome was change in the 24-item Hamilton Depression Rating Scale (HDRS-24) total score from baseline to week 2.
- Secondary outcomes included HDRS-24 total score change at week 4, remission rate (HDRS-24 total score ≤ 9), response rate (≥ 50% reduction in HDRS-24 total score), and adverse events.
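The response and remission definitions above are simple arithmetic on HDRS-24 totals. A sketch using the trial's stated cutoffs (function names and example scores are illustrative):

```python
# Classify outcomes from baseline and follow-up HDRS-24 total scores,
# per the trial's definitions:
#   remission = total score <= 9
#   response  = >= 50% reduction from baseline
def is_remission(score: int) -> bool:
    return score <= 9

def is_response(baseline: int, follow_up: int) -> bool:
    return (baseline - follow_up) / baseline >= 0.5

# Hypothetical patient: baseline 30, week-2 score 12.
print(is_response(30, 12))  # → True (60% reduction)
print(is_remission(12))     # → False (still above the cutoff of 9)
```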
TAKEAWAY:
- The active tDCS + active rTMS group demonstrated the greatest reduction in mean HDRS-24 score (18.33 ± 5.39) at week 2 compared with sham tDCS + active rTMS, active tDCS + sham rTMS, and sham tDCS + sham rTMS (P < .001).
- Response rates at week 2 were notably higher in the active tDCS + active rTMS group (85%) than in the active tDCS + sham rTMS (30%) and sham tDCS + sham rTMS groups (32%).
- The remission rate at week 4 reached 83% in the active tDCS + active rTMS group, which was significantly higher than the remission rates with the other interventions (P < .001).
- The treatments were well tolerated, with no serious adverse events, seizures, or manic symptoms reported across all intervention groups.
IN PRACTICE:
This trial “was the first to evaluate the safety, feasibility, and efficacy of combining tDCS and rTMS in treating depression. Future studies should focus on investigating the mechanism of this synergistic effect and improving the stimulation parameters to optimize the therapeutic effect,” the investigators wrote.
SOURCE:
This study was led by Dongsheng Zhou, MD, Ningbo Kangning Hospital, Ningbo, China. It was published online in JAMA Network Open.
LIMITATIONS:
The brief treatment duration involving 10 sessions may have been insufficient for tDCS and rTMS to demonstrate their full antidepressant potential. The inability to regulate participants’ antidepressant medications throughout the study period presented another limitation. Additionally, the lack of stratified randomization and adjustment for center effects may have introduced variability in the results.
DISCLOSURES:
This study received support from multiple grants, including from the Natural Science Foundation of Zhejiang Province, Basic Public Welfare Research Project of Zhejiang Province, Ningbo Medical and Health Brand Discipline, Ningbo Clinical Medical Research Centre for Mental Health, Ningbo Top Medical and Health Research Program, and the Zhejiang Medical and Health Science and Technology Plan Project. The authors reported no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
TOPLINE:
METHODOLOGY:
- Researchers conducted a double-blind, sham-controlled randomized clinical trial from 2021 to 2023 at three hospitals in China with 240 participants with MDD (mean age, 32.5 years; 58% women).
- Participants received active tDCS + active rTMS, sham tDCS + active rTMS, active tDCS + sham rTMS, or sham tDCS + sham rTMS with treatments administered five times per week for 2 weeks.
- tDCS was administered in 20-minute sessions using a 2-mA direct current stimulator, whereas rTMS involved 1600 pulses of 10-Hz stimulation targeting the left dorsolateral prefrontal cortex. Sham treatments used a pseudostimulation coil and only emitted sound.
- The primary outcome was change in the 24-item Hamilton Depression Rating Scale (HDRS-24) total score from baseline to week 2.
- Secondary outcomes included HDRS-24 total score change at week 4, remission rate (HDRS-24 total score ≤ 9), response rate (≥ 50% reduction in HDRS-24 total score), and adverse events.
TAKEAWAY:
- The active tDCS + active rTMS group demonstrated the greatest reduction in mean HDRS-24 score (18.33 ± 5.39) at week 2 compared with sham tDCS + active rTMS, active tDCS + sham rTMS, and sham tDCS + sham rTMS (P < .001).
- Response rates at week 2 were notably higher in the active tDCS + active rTMS group (85%) than in the active tDCS + sham rTMS (30%) and sham tDCS + sham rTMS groups (32%).
- The remission rate at week 4 reached 83% in the active tDCS + active rTMS group, which was significantly higher than the remission rates with the other interventions (P < .001).
- The treatments were well tolerated, with no serious adverse events, seizures, or manic symptoms reported across all intervention groups.
IN PRACTICE:
This trial “was the first to evaluate the safety, feasibility, and efficacy of combining tDCS and rTMS in treating depression. Future studies should focus on investigating the mechanism of this synergistic effect and improving the stimulation parameters to optimize the therapeutic effect,” the investigators wrote.
SOURCE:
This study was led by Dongsheng Zhou, MD, Ningbo Kangning Hospital, Ningbo, China. It was published online in JAMA Network Open.
LIMITATIONS:
The brief treatment duration involving 10 sessions may have been insufficient for tDCS and rTMS to demonstrate their full antidepressant potential. The inability to regulate participants’ antidepressant medications throughout the study period presented another limitation. Additionally, the lack of stratified randomization and adjustment for center effects may have introduced variability in the results.
DISCLOSURES:
This study received support from multiple grants, including from the Natural Science Foundation of Zhejiang Province, Basic Public Welfare Research Project of Zhejiang Province, Ningbo Medical and Health Brand Discipline, Ningbo Clinical Medical Research Centre for Mental Health, Ningbo Top Medical and Health Research Program, and the Zhejiang Medical and Health Science and Technology Plan Project. The authors reported no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
A Single-Question Screening Tool Could Identify Untreated Hearing Loss
A simple, single-question hearing screening administered by medical assistants could effectively identify older adults with untreated hearing loss, according to a study presented at the Gerontological Society of America (GSA) 2024 Annual Scientific Meeting.
The study, conducted by researchers at the University of Massachusetts Amherst, involved 49 participants aged 56-90 years who attended a health clinic with a Program for All-Inclusive Care for the Elderly (PACE). Most PACE participants are dually eligible for Medicare and Medicaid.
Medical assistants were trained to incorporate the following single-question hearing screener during health clinic appointments: “Do you have any difficulty with your hearing (without hearing aids)?” Responses were recorded on a Likert scale.
“A single-question hearing screener requires no equipment,” said study author Sara Mamo, AuD, PhD, associate professor of Speech, Language, and Hearing Sciences at the University of Massachusetts Amherst. “It simply requires a systemic belief that addressing hearing loss matters.”
Following these screenings, the research team conducted on-site hearing threshold testing to evaluate the effectiveness of the method.
Mamo and her research team found that nearly three quarters of the participants had some degree of hearing loss, with 24 individuals showing mild hearing loss and 11 exhibiting moderate or worse hearing loss.
None of the participants were current users of hearing aids, which underscores the widespread issue of untreated hearing loss in older adults, according to Mamo.
“One benefit of screening by asking a question is that the patient who says ‘yes’ to having difficulty is more likely to accept support to address the difficulty,” said Mamo. “A medical provider asking about hearing loss is an important cue to action.”
The results showed a sensitivity of 71.4% and a specificity of 42.9%, suggesting that this simple screening can help identify individuals with untreated hearing loss during routine health visits.
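The article reports only the two percentages, but the arithmetic behind them is straightforward. The counts below are one illustrative reconstruction that reproduces the reported figures (they are assumptions, not data taken from the study):

```python
# Hypothetical confusion-matrix counts consistent with 35 participants with
# hearing loss, 14 without, sensitivity 71.4%, and specificity 42.9%.
true_pos, false_neg = 25, 10   # among those with confirmed hearing loss
true_neg, false_pos = 6, 8     # among those without hearing loss

# Sensitivity: share of people with hearing loss whom the question flags.
sensitivity = true_pos / (true_pos + false_neg)
# Specificity: share of people without hearing loss whom it correctly clears.
specificity = true_neg / (true_neg + false_pos)

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
# sensitivity 71.4%, specificity 42.9%
```

The low specificity is the expected trade-off for a screener this cheap: it over-flags, and the on-site threshold testing described above is what sorts out the false positives.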
Despite known links between age-related hearing loss and increased risks for dementia, depression, and loneliness, the US Preventive Services Task Force does not currently recommend routine hearing loss screening for adults.
“With minimal burden, we can identify individuals with untreated hearing loss during routine health appointments,” she said.
Carla Perissinotto, MD, MHS, professor in the Division of Geriatrics at the University of California, San Francisco, agreed.
“We do not screen enough for hearing loss,” said Perissinotto, who was not involved in the study.
The researchers also provide practical communication tips for healthcare providers working with patients with untreated hearing loss. These include speaking face-to-face, speaking slowly, and using personal sound amplifiers.
Perissinotto added that integrating an individual’s hearing status into their medical records could enhance overall care and any future communication strategies.
“Writing hearing status [into medical records] prominently could be very important, as I have had patients inappropriately labeled as having dementia when it was a hearing issue,” said Perissinotto.
Mamo and Perissinotto had no conflicts of interest.
A version of this article first appeared on Medscape.com.
FROM GSA 2024
Daytime Sleepiness May Flag Predementia Risk
TOPLINE:
Sleep-related daytime dysfunction is associated with a more than threefold increased risk for motoric cognitive risk (MCR) syndrome, a predementia condition, a new study shows.
METHODOLOGY:
- Researchers included 445 older adults without dementia (mean age, 76 years; 57% women).
- Sleep components were assessed, and participants were classified as poor or good sleepers using the Pittsburgh Sleep Quality Index questionnaire.
- The primary outcome was incidence of MCR syndrome.
- The mean follow-up duration was 2.9 years.
TAKEAWAY:
- During the study period, 36 participants developed MCR syndrome.
- Poor sleepers had a higher risk for incident MCR syndrome, compared with good sleepers, after adjustment for age, sex, and educational level (adjusted hazard ratio [aHR], 2.6; 95% CI, 1.3-5.0; P < .05). However, this association was no longer significant after further adjustment for depressive symptoms.
- Sleep-related daytime dysfunction, defined as excessive sleepiness and lower enthusiasm for activities, was the only sleep component linked to a significant risk for MCR syndrome in fully adjusted models (aHR, 3.3; 95% CI, 1.5-7.4; P < .05).
- Prevalent MCR syndrome was not significantly associated with poor sleep quality (odds ratio, 1.1), suggesting that the relationship is unidirectional.
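The significance calls in the bullets above follow the usual convention: an adjusted hazard ratio is read as statistically significant at the .05 level when its 95% CI excludes 1. A minimal sketch, using the intervals reported above:

```python
def ci_excludes_one(lower: float, upper: float) -> bool:
    # A 95% CI that does not contain 1 corresponds to P < .05
    # for a hazard ratio or odds ratio.
    return lower > 1 or upper < 1

# Daytime dysfunction, fully adjusted model: aHR 3.3 (95% CI, 1.5-7.4)
print(ci_excludes_one(1.5, 7.4))  # True -> significant
# Poor sleepers, partially adjusted model: aHR 2.6 (95% CI, 1.3-5.0)
print(ci_excludes_one(1.3, 5.0))  # True -> significant
```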
IN PRACTICE:
“Establishing the relationship between sleep dysfunction and MCR [syndrome] risk is important because early intervention may offer the best hope for preventing dementia,” the investigators wrote.
“Our findings emphasize the need for screening for sleep issues. There’s potential that people could get help with their sleep issues and prevent cognitive decline later in life,” lead author Victoire Leroy, MD, PhD, Albert Einstein College of Medicine, New York City, added in a press release.
SOURCE:
The study was published online in Neurology.
LIMITATIONS:
Study limitations included the lack of objective sleep measurements and potential recall bias in self-reported sleep complaints, particularly among participants with cognitive issues. In addition, the relatively short follow-up period may have resulted in a lower number of incident MCR syndrome cases. The sample population was also predominantly White (80%), which may have limited the generalizability of the findings to other populations.
DISCLOSURES:
The study was funded by the National Institute on Aging. No conflicts of interest were reported.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Quick Dementia Screening Test Shows Promise for Primary Care
SEATTLE — A novel, quick, and low-cost dementia screening test could significantly improve early detection of Alzheimer’s disease in primary care settings, according to research presented at the Gerontological Society of America (GSA) 2024 Annual Scientific Meeting.
The test, called qBEANS — short for Quick Behavioral Exam to Advance Neuropsychological Screening — involves patients spooning raw kidney beans into small plastic cups in a specific sequence to assess motor learning, visuospatial memory, and executive function. It requires no technology or wearable sensors, making it accessible and easy to implement.
Previous research has shown qBEANS to be sensitive and specific to Alzheimer’s disease pathology, as well as predictive of cognitive and functional decline, the researchers said.
However, the current version of the test takes around 7 minutes to administer, which is too long for use in primary care, according to study author Sydney Schaefer, PhD, associate professor in the School of Biological and Health Systems Engineering at Arizona State University, Tempe, Arizona.
“The purpose of this study was to identify the minimum number of trials needed for reliability relative to the original longer version,” said Schaefer.
The study involved 48 participants without dementia (77% women; average age, 75.4 years).
The researchers found that the shortened version of the qBEANS test takes only about 3.85 minutes on average — nearly 48% faster than the original version — while still maintaining high reliability (intraclass correlation of 0.85).
With its brevity and simplicity, the test could be easily administered by medical assistants during patient check-in, potentially increasing early dementia detection rates in primary care, said Schaefer.
While the shortened qBEANS test shows promise, further research is needed to assess its acceptability in primary care settings.
“The findings also warrant further development of the BEAN as a direct-to-consumer product, given its low cost and ease of administration,” said Schaefer.
However, Carla Perissinotto, MD, MHS, professor in the Division of Geriatrics at the University of California, San Francisco, cautioned that direct-to-consumer plans “could lead to participants not knowing what to do with the results out of context and without clinical input.”
“I’m not sure that we need to have a new evaluation tool, but instead, greater adoption of known and existing tools,” said Perissinotto, who was not involved in the study.
According to Perissinotto, existing cognitive screening tools such as the Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA) are more commonly used to evaluate cognition and are also relatively quick to administer.
“If [qBEANS] is not benchmarked to other standard tools like the MMSE or MoCA, clinicians may have trouble interpreting results,” said Perissinotto.
Study co-authors Schaefer and Jill Love are co-founders and managing members of Neurosessments LLC, which developed the qBEANS test.
A version of this article appeared on Medscape.com.
FROM GSA 2024
Managing Diabetes and Dementia in Long-Term Care
VANCOUVER, BRITISH COLUMBIA — Conditions like diabetes and dementia are common in patients who are admitted to long-term care facilities, but aggressive management of these conditions in long-term care residents is not recommended, according to a presentation given at the Family Medicine Forum (FMF) 2024.
Hospitalizations for hypoglycemia are risky for patients with diabetes who are residents of long-term care facilities, particularly those aged 75 years or older, said Adam Gurau, MD, a family physician in Toronto. Gurau completed a fellowship in care of the elderly at the University of Toronto, in Ontario, Canada.
“A lot of studies have shown diabetes-related hospitalizations,” said Gurau. He cited a 2014 study that found that hypoglycemia hospitalization rates were twice as high in older patients (age, 75 years or older) as in younger patients (age, 65-74 years).
“It is important to keep in mind that our residents in long-term care are at increasing risk for hypoglycemia, and we really should try to reduce [this risk] and not use dangerous medications or potentially dangerous [means of] diabetes management,” said Gurau.
A Canadian study that examined the composite risk for emergency department visits, hospitalizations, or death within 30 days of reaching intensive glycemic control with high-risk agents (such as insulin or sulfonylureas) suggested little benefit and possible harm in using these agents in adults aged 75 years or older.
In addition, current guidelines on diabetes management encourage a different approach. “Looking at some of the more recent North American guidelines, many of them actually now recommend relaxing glycemic targets to reduce overtreatment and prevent hypoglycemia,” said Gurau.
Deprescribing Medications
Medication reviews present opportunities for taking a global view of a patient’s treatments and determining whether any drug can be removed from the list. “What we want to do is optimize medications,” said Gurau. “We’re not talking about adding medications. We’re talking about removing medications, which is, I think, what we should be doing.”
Some research suggests that patients are open to deprescribing. One survey examined older adults (mean age, 79.1 years) with three or more chronic conditions who had been prescribed at least five medications. The researchers found that most participants (77%) were willing to deprescribe one or more medicines if a doctor advised that it was possible. “General practitioners may be able to increase deprescribing by building trust with their patients and communicating evidence about the risks of medication use,” the researchers wrote.
About 62% of seniors living in a residential care home have a diagnosis of Alzheimer’s disease or another dementia, according to the Alzheimer Society of Canada. Evidence suggests that nonpharmacologic approaches, such as massage and touch therapy and music, can manage neuropsychiatric symptoms, such as aggression and agitation, that are associated with dementia in older adults, noted Gurau.
“We want to focus on nonpharmacologic approaches for many of these [long-term care] residents,” said Gurau. “We have to do as much as we can to exhaust all the nonpharmacologic approaches.”
Preventing Hospitalizations
Another challenge to tackle in long-term care is the unnecessary transfer of residents to hospital emergency departments, according to Gurau. “In many situations, it’s worth trying as hard as we can to treat them in the nursing home, as opposed to having them go to hospital.”
Researchers estimated that 25% of the transfers from long-term care facilities in Canada to hospital emergency departments in 2014 were potentially preventable.
Urinary tract infections accounted for 30% of hospital emergency department visits for potentially preventable conditions by older patients who are residents in long-term care, according to 2013-2014 data from the Canadian Institute for Health Information.
“There are lots of downsides to going to the hospital [from long-term care],” Gurau told this news organization. “There are risks for infections, risks for increasing delirium and agitation [in patients with dementia], and risks for other behavior that can really impact somebody’s life.”
Gurau reported having no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM FMF 2024
The Use of Biomarkers for Alzheimer’s Disease in Primary Care
In our previous case-based review, I teased the opportunity to use biomarkers to increase the accuracy and expediency of the diagnosis of Alzheimer’s disease (AD). These tests are no longer confined to the research setting but are now available to specialists and primary care clinicians alike. Given that most cognitive disorders are first identified in primary care, however, I believe that their greatest impact will be in our clinical space.
The pathologic processes associated with AD can be detected approximately 2 decades before the advent of clinical symptoms, and the symptomatic period of cognitive impairment is estimated to occupy just the final third of the disease course of AD. Using imaging studies, primarily PET, as well as cerebrospinal fluid (CSF) and even blood biomarkers for beta amyloid and tau, the pathologic drivers of AD, clinicians can identify patients with AD pathology before any symptoms are present. Importantly for our present-day interventions, the application of biomarkers can also help to diagnose AD earlier.
Amyloid PET identifies one of the earliest markers of potential AD, but a barrier common to advanced diagnostic imaging has been cost. Medicare has now approved coverage for amyloid PET in cases of suspected cognitive impairment. In a large study of more than 16,000 older adults in the United States, PET scans were positive in 55.3% of cases with mild cognitive impairment (MCI). The PET positivity rate among adults with other dementia was 70.1%. The application of PET resulted in a change in care in more than 60% of patients with MCI and dementia. One quarter of participants had their diagnosis changed from AD to another form of dementia, and 10% were changed from a diagnosis of other dementia to AD.
Liquid biomarkers can involve either CSF or blood samples. To date, CSF testing has yielded more consistent results and has defined protocols for assessment. Still, collection of CSF is more challenging than collection of blood, and patients and their families may object to lumbar puncture. CSF assessment therefore remains generally in the province of specialists and research centers.
Primary care clinicians have been waiting for a reliable blood-based biomarker for AD, and that wait may be about to end. A study published in July 2024 included 1213 adults being evaluated for cognitive symptoms in Sweden. They completed a test measuring the ratio of phosphorylated tau 217 vs nonphosphorylated tau 217, with or without a test for serum amyloid ratios as well. These tests were compared with clinicians’ clinical diagnoses as well as CSF results, which were considered the gold standard.
Using only clinical tools, primary care clinicians’ and specialists’ diagnostic accuracy for MCI and dementia was just 61% and 73%, respectively. These values were substantially weaker than the performance of either the serum tau or amyloid ratios (both 90% accurate). The authors concluded that serum testing has the potential to improve clinical care of patients with cognitive impairment.
Where does that leave us today? Blood biomarkers are commercially available now, but they use different tests and cutoff values. They may be helpful but will probably be difficult for primary care clinicians to compare and interpret. In addition, insurance is less likely to cover these tests. Amyloid PET scans are a very reasonable option to augment clinician judgment of suspected cognitive impairment, but not all geographic areas will have ready access to this imaging study.
Still, it is an exciting time to have more objective tools at our disposal to identify MCI and AD. These tools can only be optimized by clinicians who recognize symptoms and perform the baseline testing necessary to determine pretest probability of MCI or dementia.
Charles P. Vega, Health Sciences Clinical Professor, Family Medicine, University of California, Irvine, has disclosed the following relevant financial relationships: Serve(d) as a director, officer, partner, employee, adviser, consultant, or trustee for McNeil Pharmaceuticals.
A version of this article first appeared on Medscape.com.
A New and Early Predictor of Dementia?
Frailty levels appear to rise sharply in the years before a dementia diagnosis, in new findings that may provide a potential opportunity to identify high-risk populations for targeted enrollment in clinical trials of dementia prevention and treatment.
Results of an international study assessing frailty trajectories showed frailty levels notably increased in the 4-9 years before dementia diagnosis. Even among study participants whose baseline frailty measurement was taken prior to that acceleration period, frailty was still positively associated with dementia risk, the investigators noted.
“We found that with every four to five additional health problems, there is on average a 40% higher risk of developing dementia, while the risk is lower for people who are more physically fit,” said study investigator David Ward, PhD, of the Centre for Health Services Research, The University of Queensland, Brisbane, Australia.
The findings were published online in JAMA Neurology.
A Promising Biomarker
An accessible biomarker for both biologic age and dementia risk is essential for advancing dementia prevention and treatment strategies, the investigators noted, adding that growing evidence suggests frailty may be a promising candidate for this role.
To learn more about the association between frailty and dementia, Ward and his team analyzed data on 29,849 participants aged 60 years or above (mean age, 71.6 years; 62% women) who participated in four cohort studies: the English Longitudinal Study of Ageing (ELSA; n = 6771), the Health and Retirement Study (HRS; n = 9045), the Rush Memory and Aging Project (MAP; n = 1451), and the National Alzheimer’s Coordinating Center (NACC; n = 12,582).
The primary outcome was all-cause dementia. Depending on the cohort, dementia diagnoses were determined through cognitive testing, self- or family report of physician diagnosis, or a diagnosis by the study physician. Participants were excluded if they had cognitive impairment at baseline.
Investigators retrospectively determined frailty index scores by gathering information on health and functional outcomes for participants from each cohort. Only participants with frailty data on at least 30 deficits were included.
Commonly included deficits included high blood pressure, cancer, and chronic pain, as well as functional problems such as hearing impairment, difficulty with mobility, and challenges managing finances.
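A deficit-accumulation frailty index of this kind is conventionally computed as the proportion of assessed deficits that are present, each deficit scored from 0 (absent) to 1 (fully present). The sketch below illustrates that arithmetic only; it is not the study's actual code, and the deficit names and the 30-deficit floor (mirroring the inclusion criterion above) are illustrative assumptions:

```python
def frailty_index(deficits: dict[str, float]) -> float:
    """Deficit-accumulation frailty index: the mean of deficit scores,
    each coded from 0 (absent) to 1 (fully present).

    Requires at least 30 assessed deficits, mirroring the study's
    inclusion criterion (hypothetical enforcement, for illustration).
    """
    if len(deficits) < 30:
        raise ValueError("need at least 30 assessed deficits")
    if any(not 0.0 <= v <= 1.0 for v in deficits.values()):
        raise ValueError("deficit scores must lie in [0, 1]")
    return sum(deficits.values()) / len(deficits)

# Hypothetical participant: 40 deficits assessed, 6 present
scores = {f"deficit_{i}": 1.0 for i in range(6)}
scores.update({f"deficit_{i}": 0.0 for i in range(6, 40)})
fi = frailty_index(scores)  # 6/40 = 0.15
```

On this scale, 4-5 additional deficits out of roughly 40-50 assessed corresponds to about a 0.1 rise in the index, which is how the "every four to five additional health problems" framing maps onto the per-0.1 hazard ratios reported below.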
Investigators conducted follow-up visits with participants until they developed dementia or until the study ended, with follow-up periods varying across cohorts.
After adjustment for potential confounders, frailty scores were modeled using backward time scales.
Among participants who developed incident dementia (n = 3154), covariate-adjusted expected frailty index scores were, on average, higher in women than in men by 18.5% in ELSA, 20.9% in HRS, and 16.2% in MAP. There were no differences in frailty scores between sexes in the NACC cohort.
When measured on a timeline, as compared with those who didn’t develop dementia, frailty scores were significantly and consistently higher in the dementia groups 8-20 years before dementia onset (20 years in HRS; 13 years in MAP; 12 years in ELSA; 8 years in NACC).
The rate of increase in frailty index scores began accelerating 4-9 years before dementia onset, depending on the cohort, investigators noted.
In all four cohorts, each 0.1 increase in frailty scores was positively associated with increased dementia risk.
Adjusted hazard ratios (aHRs) ranged from 1.18 in the HRS cohort to 1.73 in the NACC cohort, which showed the strongest association.
In participants whose baseline frailty measurement was conducted before the predementia acceleration period began, the association of frailty scores and dementia risk was positive. These aHRs ranged from 1.18 in the HRS cohort to 1.43 in the NACC cohort.
The ‘Four Pillars’ of Prevention
The good news, investigators said, is that the long trajectory of frailty symptoms preceding dementia onset provides plenty of opportunity for intervention.
To slow the development of frailty, Ward suggested adhering to the “four pillars of frailty prevention and management,” which include good nutrition with plenty of protein, exercise, optimizing medications for chronic conditions, and maintaining a strong social network.
Ward suggested neurologists track frailty in their patients and pointed to a recent article focused on helping neurologists use frailty measures to influence care planning.
Study limitations include the possibility of reverse causality and the fact that investigators could not adjust for genetic risk for dementia.
Unclear Pathway
Commenting on the findings, Lycia Neumann, PhD, senior director of Health Services Research at the Alzheimer’s Association, noted that many studies over the years have shown a link between frailty and dementia. However, she cautioned that a link does not imply causation.
The pathway from frailty to dementia is not 100% clear, and both are complex conditions, said Neumann, who was not part of the study.
“Adopting healthy lifestyle behaviors early and consistently can help decrease the risk of — or postpone the onset of — both frailty and cognitive decline,” she said. Neumann added that physical activity, a healthy diet, social engagement, and controlling diabetes and blood pressure can also reduce the risk for dementia as well as cardiovascular disease.
The study was funded in part by the Deep Dementia Phenotyping Network through the Frailty and Dementia Special Interest Group. Ward and Neumann reported no relevant financial relationships.
A version of this article appeared on Medscape.com.
, in new findings that may provide a potential opportunity to identify high-risk populations for targeted enrollment in clinical trials of dementia prevention and treatment.
Results of an international study assessing frailty trajectories showed frailty levels notably increased in the 4-9 years before dementia diagnosis. Even among study participants whose baseline frailty measurement was taken prior to that acceleration period, frailty was still positively associated with dementia risk, the investigators noted.
“We found that with every four to five additional health problems, there is on average a 40% higher risk of developing dementia, while the risk is lower for people who are more physically fit,” said study investigator David Ward, PhD, of the Centre for Health Services Research, The University of Queensland, Brisbane, Australia.
The findings were published online in JAMA Neurology.
A Promising Biomarker
An accessible biomarker for both biologic age and dementia risk is essential for advancing dementia prevention and treatment strategies, the investigators noted, adding that growing evidence suggests frailty may be a promising candidate for this role.
To learn more about the association between frailty and dementia, Ward and his team analyzed data on 29,849 participants aged 60 years or above (mean age, 71.6 years; 62% women) who participated in four cohort studies: the English Longitudinal Study of Ageing (ELSA; n = 6771), the Health and Retirement Study (HRS; n = 9045), the Rush Memory and Aging Project (MAP; n = 1451), and the National Alzheimer’s Coordinating Center (NACC; n = 12,582).
The primary outcome was all-cause dementia. Depending on the cohort, dementia diagnoses were determined through cognitive testing, self- or family report of physician diagnosis, or a diagnosis by the study physician. Participants were excluded if they had cognitive impairment at baseline.
Investigators retrospectively determined frailty index scores by gathering information on health and functional outcomes for participants from each cohort. Only participants with frailty data on at least 30 deficits were included.
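A deficit-accumulation frailty index of this kind is conventionally computed as the proportion of assessed deficits that are present. A minimal sketch of that calculation, assuming a simple 0/1 coding; the deficit names and the 30-item threshold check are illustrative, not the study's actual instrument:

```python
# Minimal sketch of a deficit-accumulation frailty index:
# score = (deficits present) / (deficits assessed).
# Deficit names and coding here are illustrative, not the study's items.

def frailty_index(deficits, min_items=30):
    """deficits: dict mapping deficit name -> 0 (absent) or 1 (present);
    graded deficits may take values between 0 and 1."""
    if len(deficits) < min_items:
        raise ValueError(f"need data on at least {min_items} deficits")
    return sum(deficits.values()) / len(deficits)

# Example: 30 assessed deficits, 6 present -> frailty index of 0.2
example = {f"deficit_{i}": (1 if i < 6 else 0) for i in range(30)}
print(round(frailty_index(example), 2))  # 0.2
```

Participants with fewer than 30 assessed deficits were excluded in the study, which the `min_items` guard mirrors.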
Commonly included deficits included high blood pressure, cancer, and chronic pain, as well as functional problems such as hearing impairment, difficulty with mobility, and challenges managing finances.
Investigators conducted follow-up visits with participants until they developed dementia or until the study ended, with follow-up periods varying across cohorts.
After adjustment for potential confounders, frailty scores were modeled using backward time scales.
Among participants who developed incident dementia (n = 3154), covariate-adjusted expected frailty index scores were, on average, higher in women than in men by 18.5% in ELSA, 20.9% in HRS, and 16.2% in MAP. There were no differences in frailty scores between sexes in the NACC cohort.
When measured on a timeline, as compared with those who didn’t develop dementia, frailty scores were significantly and consistently higher in the dementia groups 8-20 years before dementia onset (20 years in HRS; 13 in MAP; 12 in ELSA; 8 in NACC).
Frailty index scores began accelerating 4-9 years before dementia onset, with the timing varying across cohorts, investigators noted.
In all four cohorts, each 0.1 increase in frailty scores was positively associated with increased dementia risk.
Adjusted hazard ratios (aHRs) ranged from 1.18 in the HRS cohort to 1.73 in the NACC cohort, which showed the strongest association.
In participants whose baseline frailty measurement was conducted before the predementia acceleration period began, the association of frailty scores and dementia risk was positive. These aHRs ranged from 1.18 in the HRS cohort to 1.43 in the NACC cohort.
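Ward's earlier summary can be reconciled with these per-0.1 hazard ratios by simple arithmetic: on a frailty index of roughly 40-50 items, a 0.1 increase corresponds to about four to five deficits, and a mid-range aHR of about 1.4 corresponds to a 40% higher hazard. A back-of-the-envelope sketch, in which the 45-item index length is an assumption for illustration, not a figure from the study:

```python
# Hedged arithmetic linking the per-0.1-frailty-index hazard ratios to
# Ward's "four to five additional health problems ~ 40% higher risk" summary.
# The 45-item index length is an illustrative assumption, not a study figure.

items_assessed = 45                      # assumed index length
deficits_per_increment = 0.1 * items_assessed
print(deficits_per_increment)            # 4.5 deficits per 0.1 increase

hr_per_increment = 1.40                  # mid-range of reported aHRs (1.18-1.73)
extra_risk_pct = round((hr_per_increment - 1) * 100)
print(extra_risk_pct)                    # 40 -> "40% higher risk"

# Under proportional hazards, increments compound multiplicatively:
print(round(hr_per_increment ** 2, 2))   # 1.96 for a 0.2 increase, not 1.80
```

The compounding step is why risk rises faster than linearly as deficits accumulate, consistent with the acceleration the investigators observed.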
The ‘Four Pillars’ of Prevention
The good news, investigators said, is that the long trajectory of frailty symptoms preceding dementia onset provides plenty of opportunity for intervention.
To slow the development of frailty, Ward suggested adhering to the “four pillars of frailty prevention and management,” which include good nutrition with plenty of protein, exercise, optimizing medications for chronic conditions, and maintaining a strong social network.
Ward suggested neurologists track frailty in their patients and pointed to a recent article focused on helping neurologists use frailty measures to influence care planning.
Study limitations include the possibility of reverse causality and the fact that investigators could not adjust for genetic risk for dementia.
Unclear Pathway
Commenting on the findings, Lycia Neumann, PhD, senior director of Health Services Research at the Alzheimer’s Association, noted that many studies over the years have shown a link between frailty and dementia. However, she cautioned that a link does not imply causation.
The pathway from frailty to dementia is not 100% clear, and both are complex conditions, said Neumann, who was not part of the study.
“Adopting healthy lifestyle behaviors early and consistently can help decrease the risk of — or postpone the onset of — both frailty and cognitive decline,” she said. Neumann added that physical activity, a healthy diet, social engagement, and controlling diabetes and blood pressure can also reduce the risk for dementia as well as cardiovascular disease.
The study was funded in part by the Deep Dementia Phenotyping Network through the Frailty and Dementia Special Interest Group. Ward and Neumann reported no relevant financial relationships.
A version of this article appeared on Medscape.com.
Faster Brain Atrophy Linked to MCI
While some brain atrophy is expected in aging, high levels of atrophy in the white matter and high enlargement in the ventricles are associated with earlier progression from normal cognition to MCI, the study found. The researchers also identified diabetes and atypical levels of amyloid beta protein in the cerebrospinal fluid as risk factors for brain atrophy and MCI.
For their research, published online in JAMA Network Open, Yuto Uchida, MD, PhD, and his colleagues at the Johns Hopkins University School of Medicine in Baltimore, Maryland, looked at data for 185 individuals (mean age, 55.4 years; 63% women) who were cognitively normal at baseline and followed for a median of 20 years.
All had been enrolled in a longitudinal cohort study on biomarkers of cognitive decline conducted at Johns Hopkins. Each participant underwent a median of five structural MRI studies during the follow-up period as well as annual cognitive testing. Altogether 60 individuals developed MCI, with eight of them progressing to dementia.
“We hypothesized that annual rates of change of segmental brain volumes would be associated with vascular risk factors among middle-aged and older adults and that these trends would be associated with the progression from normal cognition to MCI,” Uchida and colleagues wrote.
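An annual rate of change in a segmental brain volume, as hypothesized here, is typically estimated per participant as the least-squares slope of volume against scan time across serial MRIs. A minimal sketch with synthetic numbers; the volumes below are invented for illustration, not data from the Johns Hopkins cohort:

```python
# Sketch: estimating one individual's annual rate of brain-volume change
# from serial MRI measurements via ordinary least squares.
# The volumes below are synthetic illustrations, not cohort data.

def annual_change_rate(years, volumes):
    """Least-squares slope of volume vs. time (units: volume per year)."""
    n = len(years)
    mean_t = sum(years) / n
    mean_v = sum(volumes) / n
    cov = sum((t - mean_t) * (v - mean_v) for t, v in zip(years, volumes))
    var = sum((t - mean_t) ** 2 for t in years)
    return cov / var

# Hypothetical white-matter volumes (mL) at five scans over 20 years
scan_years = [0, 5, 10, 15, 20]
volumes_ml = [500, 495, 490, 485, 480]
print(annual_change_rate(scan_years, volumes_ml))  # -1.0 mL/year
```

Fitting a slope per individual, rather than comparing group averages, is what lets a long-follow-up design separate interindividual from intraindividual variation, as the authors note below.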
Uniquely Long Follow-Up
Most longitudinal studies using structural MRI count a decade or less of follow-up, the study authors noted. This makes it difficult to discern whether the annual rates of change of brain volumes are affected by vascular risk factors or are useful in predicting MCI, they said. Individual differences in brain aging make population-based studies less informative.
This study’s long timeframe allowed for tracking of brain changes “on an individual basis, which facilitates the differentiation between interindividual and intraindividual variations and leads to more accurate estimations of rates of brain atrophy,” Uchida and colleagues wrote.
People with high levels of atrophy in the white matter and enlargement in the ventricles saw earlier progression to MCI (hazard ratio [HR], 1.86; 95% CI, 1.24-2.49; P = .001). Diabetes mellitus was associated with progression to MCI (HR, 1.41; 95% CI, 1.06-1.76; P = .04), as was a low CSF Abeta42:Abeta40 ratio (HR, 1.48; 95% CI, 1.09-1.88; P = .04).
People with both diabetes and an abnormal amyloid profile were even more vulnerable to developing MCI (HR, 1.55; 95% CI, 1.13-1.98; P = .03). This indicated “a synergic association of diabetes and amyloid pathology with MCI progression,” Uchida and colleagues wrote, noting that insulin resistance has been shown to promote the formation of amyloid plaques, a hallmark of Alzheimer’s disease.
The findings also underscore that “white matter volume changes are closely associated with cognitive function in aging, suggesting that white matter degeneration may play a crucial role in cognitive decline,” the authors noted.
Uchida and colleagues acknowledged the modest size and imbalanced sex ratio of their study cohort as potential weaknesses, as well as the fact that the imaging technologies had changed over the course of the study. Most of the participants were White with family histories of dementia.
Findings May Lead to Targeted Interventions
In an editorial comment accompanying Uchida and colleagues’ study, Shohei Fujita, MD, PhD, of Massachusetts General Hospital, Boston, said that, while a more diverse population sample would be desirable and should be sought for future studies, the results nonetheless highlight “the potential of long-term longitudinal brain MRI datasets in elucidating the interplay of risk factors underlying cognitive decline and the potential benefits of controlling diabetes to reduce the risk of progression” along the Alzheimer’s disease continuum.
The findings may prove informative, Fujita said, in developing “targeted interventions for those most susceptible to progressive brain changes, potentially combining lifestyle modifications and pharmacological treatments.”
Uchida and colleagues’ study was funded by the Alzheimer’s Association, the National Alzheimer’s Coordinating Center, and the National Institutes of Health. The study’s corresponding author, Kenichi Oishi, disclosed funding from the Richman Family Foundation, Richman, the Sharp Family Foundation, and others. Uchida and Fujita reported no relevant financial conflicts of interest.
A version of this article first appeared on Medscape.com.
FROM JAMA NETWORK OPEN