Can Plant-Based Diet Deliver Ample Protein for Older Adults?
TOPLINE:
In older adults, replacing animal-based protein sources with plant-based alternatives reduced both the quantity and quality of protein intake only when all animal-based foods were eliminated, as in a vegan scenario, a simulation study found. The findings suggest that a switch to 60% plant-based protein is safe for this population.
METHODOLOGY:
- For environmental and health reasons, the Dutch Health Council advises shifting the ratio of animal-based to plant-based protein to 40:60. However, older adults need adequate protein intake to prevent muscle loss and maintain health, and it is uncertain whether they can meet their protein requirements through a more sustainable diet.
- This simulation study evaluated the impact of more sustainable eating patterns on protein quantity and quality using data from 607 community-dwelling older adults aged 65-79 years in the Dutch National Food Consumption Survey 2019-2021.
- Data on food consumption were collected via two 24-hour dietary recalls per participant on nonconsecutive days and grouped into three main meals and four in-between eating moments each day.
- In the simulation, certain food products in the original diet were replaced with items from a list of similar plant-based alternatives, selected with a random number generator, to create five scenarios: two flexitarian diets (40% and 80% of meat and fish replaced), a pescetarian diet (meat replaced, but not fish or other animal-based products), a vegetarian diet (meat and fish replaced, but not other animal-based products), and a vegan diet (meat, fish, and all other animal-based products replaced).
- Protein intake was calculated in three ways for each meal moment, including total protein intake (quantity) and the proportion of indispensable amino acids that must be consumed together within a limited timeframe (quality); a simplified sketch of this per-meal scoring follows this list.
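To make the quality metric concrete, below is a minimal, hypothetical sketch of how usable protein per meal might be scored against a reference amino acid pattern. The function, data layout, and requirement values are illustrative assumptions, not the study's actual method or figures.

```python
# Illustrative sketch (not the authors' code): score usable protein per meal
# by the most limiting indispensable amino acid, chemical-score style.
# The requirement values below are placeholders, not the study's figures.

REQUIREMENT_MG_PER_G_PROTEIN = {  # hypothetical reference pattern
    "lysine": 45.0,
    "leucine": 59.0,
    "methionine+cysteine": 22.0,
}

def usable_protein(meal_items):
    """meal_items: list of dicts with 'protein_g' and per-amino-acid mg content."""
    total_protein = sum(item["protein_g"] for item in meal_items)
    if total_protein == 0:
        return 0.0
    scores = []
    for aa, requirement in REQUIREMENT_MG_PER_G_PROTEIN.items():
        supplied_mg = sum(item[aa + "_mg"] for item in meal_items)
        scores.append((supplied_mg / total_protein) / requirement)
    # Quality is capped by the scarcest (limiting) amino acid, and a meal
    # cannot score better than "complete" (ratio > 1).
    return total_protein * min(1.0, min(scores))

breakfast = [
    {"protein_g": 10, "lysine_mg": 300, "leucine_mg": 700, "methionine+cysteine_mg": 250},
    {"protein_g": 5,  "lysine_mg": 120, "leucine_mg": 320, "methionine+cysteine_mg": 100},
]
print(f"usable protein at breakfast: {usable_protein(breakfast):.1f} g")
```

In this framing, a meal heavy in grain-based products scores low because lysine becomes the limiting amino acid, which is consistent with the breakfast and lunch losses reported below.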
TAKEAWAY:
- In the reference diet, the total daily plant-based protein intake was 39.0% in men and 37.7% in women, while in the vegetarian scenario, it was 59.1% in men and 54.2% in women.
- In the flexitarian, pescetarian, and vegetarian scenarios, the usable protein intake was comparable; in the vegan scenario, both total and usable protein intake were lower, with nearly 50% less usable protein than in the original diet.
- In the original diet, 7.5% of men and 11.1% of women did not meet the estimated average requirements (EARs) for utilizable protein; in the vegan scenario, 83.3% of both sexes had a protein intake below the EAR.
- The loss in protein intake (quantity) in all scenarios was observed mainly at dinner; the loss in protein quality was greatest at breakfast and lunch, especially for lysine (which can be obtained from beans or soy milk).
IN PRACTICE:
“Changing protein intake to 60% plant-based protein seems to be safe for older adults in terms of protein intake. In contrast, a vegan pattern was associated with a substantial decline in protein availability, leading to a majority of older adults not reaching the recommended protein levels,” the authors wrote.
SOURCE:
The study was led by Jos W. Borkent, HAN University of Applied Sciences, Nijmegen, the Netherlands. It was published online in The Journal of Nutrition, Health and Aging.
LIMITATIONS:
Study limitations included the use of a simulation model, which may not fully reflect real-life dietary practices. The strict timeframe for assessing protein quality (optimal combinations of indispensable amino acids) within one meal moment may have led to an underestimation of protein availability, especially in the vegan scenario. Additionally, the processed meat replacements chosen for the vegan scenario may not have represented the highest-quality protein sources available. Higher protein quality per meal would be possible in the vegan scenario if smart combinations were made across multiple meal components.
DISCLOSURES:
The study was partly funded by a grant from the Taskforce for Applied Research SIA, which is part of the Netherlands Organisation for Scientific Research and is financed by the Dutch Ministry of Education, Culture and Science, and by a fund of the Dutch Dairy Association. The authors declared that they had no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Quick Dementia Screening Test Shows Promise for Primary Care
SEATTLE — A novel, quick, and low-cost dementia screening test could significantly improve early detection of Alzheimer’s disease in primary care settings, according to research presented at the Gerontological Society of America (GSA) 2024 Annual Scientific Meeting.
The test, called qBEANS — short for Quick Behavioral Exam to Advance Neuropsychological Screening — involves patients spooning raw kidney beans into small plastic cups in a specific sequence to assess motor learning, visuospatial memory, and executive function. It requires no technology or wearable sensors, making it accessible and easy to implement.
Previous research has shown qBEANS to be sensitive and specific to Alzheimer’s disease pathology, as well as predictive of cognitive and functional decline, the researchers said.
However, the current version of the test takes around 7 minutes to administer, which is too long for use in primary care, according to study author Sydney Schaefer, PhD, associate professor in the School of Biological and Health Systems Engineering at Arizona State University, Tempe, Arizona.
“The purpose of this study was to identify the minimum number of trials needed for reliability relative to the original longer version,” said Schaefer.
The study involved 48 participants without dementia; 77% were women, and the average age was 75.4 years.
The researchers found that the shortened version of the qBEANS test takes only about 3.85 minutes on average — nearly 48% faster than the original version — while still maintaining high reliability (intraclass correlation of 0.85).
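The report does not specify which form of the intraclass correlation (ICC) was used; as a rough illustration only, the sketch below estimates a one-way random-effects ICC from simulated repeated trial scores to show how one might check whether fewer trials preserve reliability. All data are simulated; the participant count is borrowed from the study, and the trial counts are assumptions.

```python
import numpy as np

def icc_oneway(scores):
    """One-way random-effects ICC(1,1).
    scores: (n_participants, k_trials) array of test scores."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand_mean = scores.mean()
    subject_means = scores.mean(axis=1)
    # Between-subjects and within-subjects mean squares (one-way ANOVA)
    ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((scores - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(0)
true_ability = rng.normal(60, 10, size=48)                 # 48 participants
trials = true_ability[:, None] + rng.normal(0, 4, size=(48, 5))  # 5 trials each

print(f"ICC across 5 trials:       {icc_oneway(trials):.2f}")
print(f"ICC across first 3 trials: {icc_oneway(trials[:, :3]):.2f}")
```

If the shortened battery's ICC stays near the longer version's (the study reports 0.85), dropping trials costs little reliability.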
With its brevity and simplicity, the test could be easily administered by medical assistants during patient check-in, potentially increasing early dementia detection rates in primary care, said Schaefer.
While the shortened qBEANS test shows promise, further research is needed to assess its acceptability in primary care settings.
“The findings also warrant further development of the BEAN as a direct-to-consumer product, given its low cost and ease of administration,” said Schaefer.
However, Carla Perissinotto, MD, MHS, professor in the Division of Geriatrics at the University of California, San Francisco, cautioned that direct-to-consumer plans “could lead to participants not knowing what to do with the results out of context and without clinical input.”
“I’m not sure that we need to have a new evaluation tool, but instead, greater adoption of known and existing tools,” said Perissinotto, who was not involved in the study.
According to Perissinotto, existing cognitive screening tools such as the Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA) are more commonly used to evaluate cognition and are also relatively quick to administer.
“If [qBEANS] is not benchmarked to other standard tools like the MMSE or MoCA, clinicians may have trouble interpreting results,” said Perissinotto.
Study co-authors Schaefer and Jill Love are co-founders and managing members of Neurosessments LLC, which developed the qBEANS test.
A version of this article appeared on Medscape.com.
FROM GSA 2024
Managing Diabetes and Dementia in Long-Term Care
VANCOUVER, BRITISH COLUMBIA — Conditions like diabetes and dementia are common in patients who are admitted to long-term care facilities, but aggressive management of these conditions in long-term care residents is not recommended, according to a presentation given at the Family Medicine Forum (FMF) 2024.
Patients with diabetes who reside in long-term care facilities, particularly those aged 75 years or older, are at heightened risk for hospitalization for hypoglycemia, said Adam Gurau, MD, a family physician in Toronto. Gurau completed a fellowship in care of the elderly at the University of Toronto, in Ontario, Canada.
“A lot of studies have shown diabetes-related hospitalizations,” said Gurau. He cited a 2014 study that found that hypoglycemia hospitalization rates were twice as high in older patients (age, 75 years or older) as in younger patients (age, 65-74 years).
“It is important to keep in mind that our residents in long-term care are at increasing risk for hypoglycemia, and we really should try to reduce [this risk] and not use dangerous medications or potentially dangerous [means of] diabetes management,” said Gurau.
A Canadian study that examined the composite risk for emergency department visits, hospitalizations, or death within 30 days of reaching intensive glycemic control with high-risk agents (such as insulin or sulfonylureas) suggested little benefit and possible harm in using these agents in adults aged 75 years or older.
In addition, current guidelines on diabetes management encourage a different approach. “Looking at some of the more recent North American guidelines, many of them actually now recommend relaxing glycemic targets to reduce overtreatment and prevent hypoglycemia,” said Gurau.
Deprescribing Medications
Medication reviews present opportunities for taking a global view of a patient’s treatments and determining whether any drug can be removed from the list. “What we want to do is optimize medications,” said Gurau. “We’re not talking about adding medications. We’re talking about removing medications, which is, I think, what we should be doing.”
Some research suggests that patients are open to deprescribing. One survey examined older adults (mean age, 79.1 years) with three or more chronic conditions who had been prescribed at least five medications. The researchers found that most participants (77%) were willing to deprescribe one or more medicines if a doctor advised that it was possible. “General practitioners may be able to increase deprescribing by building trust with their patients and communicating evidence about the risks of medication use,” the researchers wrote.
About 62% of seniors living in a residential care home have a diagnosis of Alzheimer’s disease or another dementia, according to the Alzheimer Society of Canada. Evidence suggests that nonpharmacologic approaches, such as massage and touch therapy and music, can manage neuropsychiatric symptoms, such as aggression and agitation, that are associated with dementia in older adults, noted Gurau.
“We want to focus on nonpharmacologic approaches for many of these [long-term care] residents,” said Gurau. “We have to do as much as we can to exhaust all the nonpharmacologic approaches.”
Preventing Hospitalizations
Another challenge to tackle in long-term care is the unnecessary transfer of residents to hospital emergency departments, according to Gurau. “In many situations, it’s worth trying as hard as we can to treat them in the nursing home, as opposed to having them go to hospital.”
Researchers estimated that 25% of the transfers from long-term care facilities in Canada to hospital emergency departments in 2014 were potentially preventable.
Urinary tract infections accounted for 30% of hospital emergency department visits for potentially preventable conditions by older patients who are residents in long-term care, according to 2013-2014 data from the Canadian Institute for Health Information.
“There are lots of downsides to going to the hospital [from long-term care],” Gurau told this news organization. “There are risks for infections, risks for increasing delirium and agitation [in patients with dementia], and risks for other behavior that can really impact somebody’s life.”
Gurau reported having no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM FMF 2024
Vitamin D May Lower Blood Pressure in Seniors With Overweight
TOPLINE:
Supplementation with vitamin D and calcium can reduce systolic and diastolic blood pressure in older individuals with overweight, particularly in those with a body mass index (BMI) > 30 and those diagnosed with hypertension.
METHODOLOGY:
- Large cohort data have provided epidemiologic evidence linking vitamin D deficiency to a higher risk for cardiovascular disorders, including hypertension; however, evidence on the beneficial effects of vitamin D supplementation on blood pressure outcomes remains inconclusive.
- A post hoc analysis of a randomized controlled trial was conducted to investigate the effect of two doses of cholecalciferol (vitamin D3) on blood pressure in individuals aged 65 years or older with a BMI > 25 and serum vitamin D levels of 10-30 ng/mL.
- A total of 221 participants were recruited through outpatient departments, clinics, and advertisements in the greater Beirut area and received calcium supplementation in combination with either a low dose (600 IU/d, as recommended by the Institute of Medicine [IOM]) or a high dose (3750 IU/d) of vitamin D3.
- Blood pressure measurements were taken at baseline, 6 months, and 12 months using a SureSigns VS3 monitor.
- Participants were also stratified by BMI and hypertension status to assess the effects of vitamin D and calcium on blood pressure.
TAKEAWAY:
- Systolic and diastolic blood pressures were significantly reduced with vitamin D supplementation in the overall cohort (mean difference, 3.5 and 2.8 mm Hg, respectively; P = .005 and P = .002, respectively), with the effect more pronounced in the high-dose vitamin D group (a simplified sketch of this kind of dose comparison follows this list).
- Participants with a BMI > 30 experienced reductions in both systolic and diastolic blood pressures in the overall cohort (P < .0001 and P = .01, respectively); although the systolic blood pressure was significantly reduced with both high- and low-dose vitamin D, the diastolic blood pressure decreased in the high-dose group only.
- Patients with hypertension benefited from both doses of vitamin D, regardless of BMI.
- Systolic blood pressure at 6 and 12 months was significantly predicted by BMI and baseline systolic blood pressure measurements, although not by the dose of vitamin D received.
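As a hedged illustration of the kind of dose comparison summarized above, the sketch below simulates 12-month systolic blood pressure changes in two arms and applies Welch's t-test. Every number is invented, and the trial's actual statistical models may well differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated 12-month change in systolic BP (mm Hg); means, SDs, and arm
# sizes are invented for illustration and are not the trial's data.
low_dose = rng.normal(-2.0, 8.0, size=110)    # 600 IU/d arm
high_dose = rng.normal(-4.5, 8.0, size=111)   # 3750 IU/d arm

t, p = stats.ttest_ind(high_dose, low_dose, equal_var=False)  # Welch's t-test
print(f"mean change: low {low_dose.mean():.1f}, high {high_dose.mean():.1f} mm Hg")
print(f"Welch t = {t:.2f}, p = {p:.3f}")
```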
IN PRACTICE:
“Our study found vitamin D supplementation may decrease blood pressure in specific subgroups such as older people, people with obesity, and possibly those with low vitamin D levels,” study author Ghada El-Hajj Fuleihan, MD, MPH, of the American University of Beirut Medical Center in Beirut, Lebanon, said in a news release. “High vitamin D doses compared to the IOM’s recommended daily dose did not provide additional health benefits.”
SOURCE:
This study was led by Maya Rahme, Department of Internal Medicine, Division of Endocrinology, Calcium Metabolism and Osteoporosis Program, World Health Organization Collaborating Center for Metabolic Bone Disorders, American University of Beirut Medical Center in Beirut, Lebanon. It was published online in Journal of the Endocrine Society.
LIMITATIONS:
This study’s limitations included the exploratory nature of the analyses and the low power of the subgroup analyses. Additionally, the study focused on older individuals who were sedentary and had overweight, many of whom had prediabetes — conditions known to influence blood pressure. The possible effect of calcium alone on blood pressure reduction was also unclear.
DISCLOSURES:
This study was supported by grants from the American University of Beirut, St Joseph University, and the Lebanese Council for National Scientific Research. No relevant conflicts of interest were disclosed by the authors.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
A version of this article appeared on Medscape.com.
The Use of Biomarkers for Alzheimer’s Disease in Primary Care
In our previous case-based review, I teased the opportunity to use biomarkers to increase the accuracy and expediency of the diagnosis of Alzheimer’s disease (AD). These tests are no longer confined to the research setting but are now available to specialists and primary care clinicians alike. Given that most cognitive disorders are first identified in primary care, however, I believe that their greatest impact will be in our clinical space.
The pathologic processes associated with AD can be detected approximately 2 decades before the advent of clinical symptoms, and the symptomatic period of cognitive impairment is estimated to occupy just the final third of the disease course of AD. Using imaging studies, primarily PET, as well as cerebrospinal fluid (CSF) and even blood biomarkers for beta amyloid and tau, the pathologic drivers of AD, clinicians can identify patients with AD pathology before any symptoms are present. Importantly for our present-day interventions, the application of biomarkers can also help to diagnose AD earlier.
Amyloid PET identifies one of the earliest markers of potential AD, but a barrier common to advanced diagnostic imaging has been cost. Medicare has now approved coverage for amyloid PET in cases of suspected cognitive impairment. In a large study of more than 16,000 older adults in the United States, PET scans were positive in 55.3% of cases with mild cognitive impairment (MCI). The PET positivity rate among adults with other dementia was 70.1%. The application of PET resulted in a change in care in more than 60% of patients with MCI and dementia. One quarter of participants had their diagnosis changed from AD to another form of dementia, and 10% were changed from a diagnosis of other dementia to AD.
Liquid biomarkers can involve either CSF or blood samples. To date, CSF testing has yielded more consistent results and has defined protocols for assessment. Still, collection of CSF is more challenging than collection of blood, and patients and their families may object to lumbar puncture. CSF assessment therefore remains generally in the province of specialists and research centers.
Primary care clinicians have been waiting for a reliable blood-based biomarker for AD, and that wait may be about to end. A study published in July 2024 included 1213 adults being evaluated for cognitive symptoms in Sweden. They completed a test measuring the ratio of phosphorylated tau 217 vs nonphosphorylated tau 217, with or without a test for serum amyloid ratios as well. These tests were compared with clinicians’ clinical diagnoses as well as CSF results, which were considered the gold standard.
Using only clinical tools, primary care clinicians’ and specialists’ diagnostic accuracy for MCI and dementia was just 61% and 73%, respectively. These values were substantially lower than the performance of either the serum tau or amyloid ratios (both 90% accurate). The authors concluded that serum testing has the potential to improve clinical care of patients with cognitive impairment.
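To clarify how diagnostic accuracy figures like these are computed against a gold standard, here is a small self-contained sketch; the cutoff, data, and variable names are invented for illustration and do not reflect the Swedish study's assay or thresholds.

```python
import numpy as np

def diagnostic_metrics(predicted, gold):
    """Accuracy, sensitivity, specificity of binary calls vs a gold standard."""
    predicted, gold = np.asarray(predicted, bool), np.asarray(gold, bool)
    tp = np.sum(predicted & gold)    # true positives
    tn = np.sum(~predicted & ~gold)  # true negatives
    return (tp + tn) / gold.size, tp / np.sum(gold), tn / np.sum(~gold)

# Hypothetical example: plasma p-tau217 ratio above a cutoff vs CSF result
csf_positive = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0], bool)
ptau_ratio = np.array([0.9, 0.8, 0.7, 0.2, 0.4, 0.3, 0.5, 0.6, 0.85, 0.1])
blood_positive = ptau_ratio > 0.55   # illustrative cutoff, not a validated one

acc, sens, spec = diagnostic_metrics(blood_positive, csf_positive)
print(f"accuracy {acc:.0%}, sensitivity {sens:.0%}, specificity {spec:.0%}")
```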
Where does that leave us today? Blood biomarkers are now commercially available, but they use different tests and cutoff values, which may make them difficult for primary care clinicians to compare and interpret. In addition, insurance is less likely to cover these tests. Amyloid PET is a very reasonable option to augment clinician judgment in suspected cognitive impairment, but not all geographic areas have ready access to this imaging study.
Still, it is an exciting time to have more objective tools at our disposal to identify MCI and AD. These tools can only be optimized by clinicians who recognize symptoms and perform the baseline testing necessary to determine pretest probability of MCI or dementia.
Charles P. Vega, Health Sciences Clinical Professor, Family Medicine, University of California, Irvine, has disclosed the following relevant financial relationships: Serve(d) as a director, officer, partner, employee, adviser, consultant, or trustee for McNeil Pharmaceuticals.
A version of this article first appeared on Medscape.com.
In our previous case-based review, I teased the opportunity to use biomarkers to increase the accuracy and expediency of the diagnosis of Alzheimer’s disease (AD). These tests are no longer confined to the research setting but are now available to specialists and primary care clinicians alike. Given that most cognitive disorders are first identified in primary care, however, I believe that their greatest impact will be in our clinical space.
The pathologic processes associated with AD can be detected approximately 2 decades before the advent of clinical symptoms, and the symptomatic period of cognitive impairment is estimated to occupy just the final third of the disease course of AD. Using imaging studies, primarily PET, as well as cerebrospinal fluid (CSF) and even blood biomarkers for beta amyloid and tau, the pathologic drivers of AD, clinicians can identify patients with AD pathology before any symptoms are present. Importantly for our present-day interventions, the application of biomarkers can also help to diagnose AD earlier.
Amyloid PET identifies one of the earliest markers of potential AD, but a barrier common to advanced diagnostic imaging has been cost. Medicare has now approved coverage for amyloid PET in cases of suspected cognitive impairment. In a large study of more than 16,000 older adults in the United States, PET scans were positive in 55.3% of cases with mild cognitive impairment (MCI). The PET positivity rate among adults with other dementia was 70.1%. The application of PET resulted in a change in care in more than 60% of patients with MCI and dementia. One quarter of participants had their diagnosis changed from AD to another form of dementia, and 10% were changed from a diagnosis of other dementia to AD.
Liquid biomarkers can involve either CSF or blood samples. To date, CSF testing has yielded more consistent results and has defined protocols for assessment. Still, collection of CSF is more challenging than collection of blood, and patients and their families may object to lumbar puncture. CSF assessment therefore remains generally in the province of specialists and research centers.
Primary care clinicians have been waiting for a reliable blood-based biomarker for AD, and that wait may be about to end. A study published in July 2024 included 1213 adults being evaluated for cognitive symptoms in Sweden. They completed a test measuring the ratio of phosphorylated tau 217 vs nonphosphorylated tau 217, with or without a test for serum amyloid ratios as well. These tests were compared with clinicians’ clinical diagnoses as well as CSF results, which were considered the gold standard.
Using only clinical tools, primary care clinicians’ and specialists’ diagnostic accuracy for MCI and dementia were just 61% and 73%, respectively. These values were substantially weaker vs the performance of either the serum tau or amyloid ratios (both 90% accurate). The authors concluded that serum testing has the potential to improve clinical care of patients with cognitive impairment.
Where does that leave us today? Commercially available blood biomarkers are available now which use different tests and cutoff values. These may be helpful but will probably be difficult to compare and interpret for primary care clinicians. In addition, insurance is less likely to cover these tests. Amyloid PET scans are a very reasonable option to augment clinician judgment of suspected cognitive impairment, but not all geographic areas will have ready access to this imaging study.
Still, it is an exciting time to have more objective tools at our disposal to identify MCI and AD. These tools can only be optimized by clinicians who recognize symptoms and perform the baseline testing necessary to determine pretest probability of MCI or dementia.
Charles P. Vega, Health Sciences Clinical Professor, Family Medicine, University of California, Irvine, has disclosed the following relevant financial relationships: Serve(d) as a director, officer, partner, employee, adviser, consultant, or trustee for McNeil Pharmaceuticals.
A version of this article first appeared on Medscape.com.
In our previous case-based review, I teased the opportunity to use biomarkers to increase the accuracy and expediency of the diagnosis of Alzheimer’s disease (AD). These tests are no longer confined to the research setting but are now available to specialists and primary care clinicians alike. Given that most cognitive disorders are first identified in primary care, however, I believe that their greatest impact will be in our clinical space.
The pathologic processes associated with AD can be detected approximately 2 decades before the advent of clinical symptoms, and the symptomatic period of cognitive impairment is estimated to occupy just the final third of the disease course of AD. Using imaging studies, primarily PET, as well as cerebrospinal fluid (CSF) and even blood biomarkers for beta amyloid and tau, the pathologic drivers of AD, clinicians can identify patients with AD pathology before any symptoms are present. Importantly for our present-day interventions, the application of biomarkers can also help to diagnose AD earlier.
Amyloid PET identifies one of the earliest markers of potential AD, but a barrier common to advanced diagnostic imaging has been cost. Medicare has now approved coverage for amyloid PET in cases of suspected cognitive impairment. In a large study of more than 16,000 older adults in the United States, PET scans were positive in 55.3% of cases with mild cognitive impairment (MCI). The PET positivity rate among adults with other dementia was 70.1%. The application of PET resulted in a change in care in more than 60% of patients with MCI and dementia. One quarter of participants had their diagnosis changed from AD to another form of dementia, and 10% were changed from a diagnosis of other dementia to AD.
Liquid biomarkers can involve either CSF or blood samples. To date, CSF testing has yielded more consistent results and has defined protocols for assessment. Still, collection of CSF is more challenging than collection of blood, and patients and their families may object to lumbar puncture. CSF assessment therefore remains generally in the province of specialists and research centers.
Primary care clinicians have been waiting for a reliable blood-based biomarker for AD, and that wait may be about to end. A study published in July 2024 included 1213 adults being evaluated for cognitive symptoms in Sweden. They completed a test measuring the ratio of phosphorylated tau 217 vs nonphosphorylated tau 217, with or without a test for serum amyloid ratios as well. These tests were compared with clinicians’ clinical diagnoses as well as CSF results, which were considered the gold standard.
Using only clinical tools, the diagnostic accuracy for MCI and dementia was just 61% among primary care clinicians and 73% among specialists. These values were substantially weaker than the performance of either the serum tau or amyloid ratios (both 90% accurate). The authors concluded that serum testing has the potential to improve clinical care of patients with cognitive impairment.
Where does that leave us today? Blood biomarkers are commercially available now, but they use different tests and cutoff values. They may be helpful, but that lack of standardization will probably make them difficult for primary care clinicians to compare and interpret. In addition, insurance is less likely to cover these tests. Amyloid PET scans are a very reasonable option to augment clinician judgment of suspected cognitive impairment, but not all geographic areas will have ready access to this imaging study.
Still, it is an exciting time to have more objective tools at our disposal to identify MCI and AD. These tools can only be optimized by clinicians who recognize symptoms and perform the baseline testing necessary to determine pretest probability of MCI or dementia.
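Pretest probability matters because it anchors how much a positive biomarker result should shift clinical suspicion. As a rough illustration of that arithmetic, the sketch below applies Bayes' theorem; the sensitivity and specificity figures are hypothetical placeholders, not values reported in the Swedish study.

```python
# Illustrative Bayes arithmetic: how pretest probability and test
# characteristics combine into a posttest probability after a positive
# result. The sensitivity/specificity values are hypothetical
# placeholders, not figures reported in the Swedish study.

def posttest_probability(pretest: float, sensitivity: float,
                         specificity: float) -> float:
    """Probability of disease after a positive test result."""
    pretest_odds = pretest / (1 - pretest)
    lr_positive = sensitivity / (1 - specificity)  # positive likelihood ratio
    posttest_odds = pretest_odds * lr_positive
    return posttest_odds / (1 + posttest_odds)

# A patient judged to have a 40% pretest probability of AD pathology,
# tested with an assay assumed to be 90% sensitive and 90% specific:
print(round(posttest_probability(0.40, 0.90, 0.90), 2))  # 0.86
```

The worked number makes the clinical point concrete: the same positive result moves a 40% pretest probability to roughly 86%, but it would move a very low pretest probability far less, which is why baseline clinical assessment remains essential.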
Charles P. Vega, Health Sciences Clinical Professor, Family Medicine, University of California, Irvine, has disclosed the following relevant financial relationships: Serve(d) as a director, officer, partner, employee, adviser, consultant, or trustee for McNeil Pharmaceuticals.
A version of this article first appeared on Medscape.com.
A New and Early Predictor of Dementia?
Frailty levels appear to rise sharply in the years before a dementia diagnosis, in new findings that may provide a potential opportunity to identify high-risk populations for targeted enrollment in clinical trials of dementia prevention and treatment.
Results of an international study assessing frailty trajectories showed frailty levels notably increased in the 4-9 years before dementia diagnosis. Even among study participants whose baseline frailty measurement was taken prior to that acceleration period, frailty was still positively associated with dementia risk, the investigators noted.
“We found that with every four to five additional health problems, there is on average a 40% higher risk of developing dementia, while the risk is lower for people who are more physically fit,” said study investigator David Ward, PhD, of the Centre for Health Services Research, The University of Queensland, Brisbane, Australia.
The findings were published online in JAMA Neurology.
A Promising Biomarker
An accessible biomarker for both biologic age and dementia risk is essential for advancing dementia prevention and treatment strategies, the investigators noted, adding that growing evidence suggests frailty may be a promising candidate for this role.
To learn more about the association between frailty and dementia, Ward and his team analyzed data on 29,849 participants aged 60 years or above (mean age, 71.6 years; 62% women) who participated in four cohort studies: the English Longitudinal Study of Ageing (ELSA; n = 6771), the Health and Retirement Study (HRS; n = 9045), the Rush Memory and Aging Project (MAP; n = 1451), and the National Alzheimer’s Coordinating Center (NACC; n = 12,582).
The primary outcome was all-cause dementia. Depending on the cohort, dementia diagnoses were determined through cognitive testing, self- or family report of physician diagnosis, or a diagnosis by the study physician. Participants were excluded if they had cognitive impairment at baseline.
Investigators retrospectively determined frailty index scores by gathering information on health and functional outcomes for participants from each cohort. Only participants with frailty data on at least 30 deficits were included.
Commonly included deficits included high blood pressure, cancer, and chronic pain, as well as functional problems such as hearing impairment, difficulty with mobility, and challenges managing finances.
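For readers unfamiliar with deficit-accumulation frailty indices, such an index is conventionally calculated as the proportion of assessed deficits that are present. The sketch below illustrates that convention; the deficit names are examples, not the study's actual item lists.

```python
# Generic deficit-accumulation frailty index: the fraction of assessed
# deficits that are present (0 = none, 1 = all). Deficit names here are
# illustrative; each cohort in the study used at least 30 items.

def frailty_index(deficits: dict[str, bool]) -> float:
    """Return the proportion of assessed deficits that are present."""
    if not deficits:
        raise ValueError("At least one assessed deficit is required")
    return sum(deficits.values()) / len(deficits)

patient = {
    "hypertension": True,
    "cancer": False,
    "chronic_pain": True,
    "hearing_impairment": False,
    "mobility_difficulty": True,
    "difficulty_managing_finances": False,
}
print(round(frailty_index(patient), 2))  # 0.5 with these example items
```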
Investigators conducted follow-up visits with participants until they developed dementia or until the study ended, with follow-up periods varying across cohorts.
After adjustment for potential confounders, frailty scores were modeled using backward time scales.
Among participants who developed incident dementia (n = 3154), covariate-adjusted expected frailty index scores were, on average, higher in women than in men by 18.5% in ELSA, 20.9% in HRS, and 16.2% in MAP. There were no differences in frailty scores between sexes in the NACC cohort.
When measured on a timeline, as compared with those who didn't develop dementia, frailty scores were significantly and consistently higher in the dementia groups 8-20 years before dementia onset (20 years in HRS; 13 in MAP; 12 in ELSA; 8 in NACC).
Increases in the rates of frailty index scores began accelerating 4-9 years before dementia onset for the various cohorts, investigators noted.
In all four cohorts, each 0.1 increase in frailty scores was positively associated with increased dementia risk.
Adjusted hazard ratios (aHRs) ranged from 1.18 in the HRS cohort to 1.73 in the NACC cohort, which showed the strongest association.
In participants whose baseline frailty measurement was conducted before the predementia acceleration period began, the association of frailty scores and dementia risk was positive. These aHRs ranged from 1.18 in the HRS cohort to 1.43 in the NACC cohort.
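If one assumes the reported per-0.1 estimates come from a standard log-linear (Cox-type) model, the implied hazard ratio for larger frailty increases scales multiplicatively, as in the sketch below; this is an interpretive aid, not a reanalysis of the study data.

```python
# Interpretive sketch, not a reanalysis: if the reported aHRs are per
# 0.1-unit increase in the frailty index under a log-linear (Cox-type)
# model, the implied aHR for other increases scales multiplicatively.

def implied_ahr(ahr_per_0_1: float, frailty_increase: float) -> float:
    """Scale a per-0.1 adjusted hazard ratio to an arbitrary increase."""
    return ahr_per_0_1 ** (frailty_increase / 0.1)

# A 0.2-unit increase under the HRS estimate (aHR 1.18 per 0.1 unit):
print(round(implied_ahr(1.18, 0.2), 2))  # ~1.39
```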
The ‘Four Pillars’ of Prevention
The good news, investigators said, is that the long trajectory of frailty symptoms preceding dementia onset provides plenty of opportunity for intervention.
To slow the development of frailty, Ward suggested adhering to the “four pillars of frailty prevention and management,” which include good nutrition with plenty of protein, exercise, optimizing medications for chronic conditions, and maintaining a strong social network.
Ward suggested neurologists track frailty in their patients and pointed to a recent article focused on helping neurologists use frailty measures to influence care planning.
Study limitations include the possibility of reverse causality and the fact that investigators could not adjust for genetic risk for dementia.
Unclear Pathway
Commenting on the findings, Lycia Neumann, PhD, senior director of Health Services Research at the Alzheimer’s Association, noted that many studies over the years have shown a link between frailty and dementia. However, she cautioned that a link does not imply causation.
The pathway from frailty to dementia is not 100% clear, and both are complex conditions, said Neumann, who was not part of the study.
“Adopting healthy lifestyle behaviors early and consistently can help decrease the risk of — or postpone the onset of — both frailty and cognitive decline,” she said. Neumann added that physical activity, a healthy diet, social engagement, and controlling diabetes and blood pressure can also reduce the risk for dementia as well as cardiovascular disease.
The study was funded in part by the Deep Dementia Phenotyping Network through the Frailty and Dementia Special Interest Group. Ward and Neumann reported no relevant financial relationships.
A version of this article appeared on Medscape.com.
Goodbye CHADSVASc: Sex Complicates Stroke Risk Scoring in AF
The European Society of Cardiology (ESC) caused a stir when they recommended in their latest atrial fibrillation (AF) management guideline that gender no longer be included in the decision to initiate oral anticoagulation therapy.
The move aims to level the playing field between men and women and follows a more nuanced understanding of stroke risk in patients with AF, said experts. It also acknowledges the lack of evidence in people receiving cross-sex hormone therapy.
In any case, the guidelines, developed in collaboration with the European Association for Cardio-Thoracic Surgery and published by the European Heart Journal on August 30, simply follow 2023’s US recommendations, they added.
One Size Does Not Fit All
So, what do the ESC guidelines actually say?
They underline that, if left untreated, the risk for ischemic stroke is increased fivefold in patients with AF, and the “default approach should therefore be to provide oral anticoagulation to all eligible AF patients, except those at low risk for incident stroke or thromboembolism.”
However, the authors note that there is a lack of strong evidence on how to apply the current risk scores to help inform that decision in real-world patients.
Dipak Kotecha, MBChB, PhD, Professor of Cardiology at the University of Birmingham and University Hospitals Birmingham NHS Foundation Trust, Birmingham, England, and senior author of the ESC guidelines, said in an interview that “the available scores have a relatively poor ability to accurately predict which patients will have a stroke or thromboembolic event.”
Instead, he said “a much better approach is for healthcare professionals to look at each patient’s individual risk factors, using the risk scores to identify those patients that might not benefit from oral anticoagulant therapy.”
For these guidelines, the authors therefore wanted to “move away from a one-size-fits-all” approach, Kotecha said, and instead ensure that more patients can benefit from the new range of direct oral anticoagulants (DOACs) that are easier to take and with much lower chance of side effects or major bleeding.
To achieve this, they separated their clinical recommendations from any particular risk score, and instead focused on the practicalities of implementation.
Risk Modifier Vs Risk Factor
To explain their decision, the authors highlight that “the most popular risk score” is the CHA2DS2–VASc, which gives a point for female sex, alongside factors such as congestive heart failure, hypertension, and diabetes mellitus, and a sliding scale of points for increasing age.
Kotecha pointed out the score was developed before the DOACs were available and may not account for how risk factors have changed in recent decades.
The result is that CHA2DS2–VASc gives the same number of points to an individual with heart failure or prior transient ischemic attack as to a woman aged less than 65 years, “but the magnitude of increased risk is not the same,” Usha Beth Tedrow, MD, Associate Professor of Medicine, Brigham and Women’s Hospital, Boston, Massachusetts, said in an interview.
As far back as 2018, it was known that “female sex is a risk modifier, rather than a risk factor for stroke in atrial fibrillation,” Jose Joglar, MD, lead author of the 2023 ACC/AHA/ACCP/HRS Guideline for the Diagnosis and Management of Atrial Fibrillation, said in an interview.
A Danish national registry study involving 239,671 AF patients treated between 1997 and 2015, nearly half of whom were women, showed that, at a CHA2DS2–VASc score of 0, the “risk of stroke between men and women is absolutely the same,” he said.
“It is not until after a CHA2DS2–VASc score of 2 that the curves start to separate,” Joglar, Program Director, Clinical Cardiac Electrophysiology Fellowship Program, The University of Texas Southwestern Medical Center, Dallas, continued, “but by then you have already made the decision to anticoagulate.”
More recently, Kotecha and colleagues conducted a population cohort study of the electronic healthcare records of UK primary care patients treated between 2005 and 2020, and identified 78,852 with AF; more than a third were women.
Their analysis, published on September 1, showed that women had a lower adjusted rate of the primary composite outcome of all-cause mortality, ischemic stroke, or arterial thromboembolism, driven by a reduced mortality rate.
“Removal of gender from clinical risk scoring could simplify the approach to which patients with AF should be offered oral anticoagulation,” Kotecha and colleagues concluded.
Joglar clarified that women overall are at greater risk for stroke than men, but by the time that risk “becomes manifest, other risk factors have come into play, and they have already met the criteria for anticoagulation.”
The authors of the latest ESC guideline therefore concluded that the “inclusion of gender complicates clinical practice both for healthcare professionals and patients.” Their solution was to remove the question of gender for decisions over initiating oral anticoagulant therapy in clinical practice altogether.
This includes individuals who identify as transgender or are undergoing sex hormone therapy, as all the experts interviewed by Medscape Medical News agreed that there is currently insufficient evidence to know if that affects stroke risk.
Instead, guidelines state that the drugs are “recommended in those with a CHA2DS2-VA score of 2 or more and should be considered in those with a CHA2DS2-VA score of 1, following a patient-centered and shared care approach.”
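For orientation, here is a minimal sketch of how a CHA2DS2-VA tally might be computed, assuming the sex-category point is simply dropped and the remaining standard CHA2DS2-VASc point assignments are unchanged; consult the ESC guideline itself for the authoritative definition.

```python
# Minimal sketch of a CHA2DS2-VA tally: CHA2DS2-VASc with the sex-category
# point dropped, assuming the remaining standard point assignments are
# unchanged (heart failure, hypertension, diabetes, vascular disease:
# 1 point each; prior stroke/TIA/thromboembolism: 2 points; age 65-74:
# 1 point; age >= 75: 2 points). Not a substitute for the guideline text.

def cha2ds2_va(age: int, heart_failure: bool, hypertension: bool,
               diabetes: bool, prior_stroke_tia: bool,
               vascular_disease: bool) -> int:
    score = int(heart_failure) + int(hypertension) + int(diabetes)
    score += int(vascular_disease)
    score += 2 if prior_stroke_tia else 0
    if age >= 75:
        score += 2
    elif age >= 65:
        score += 1
    return score

# A 72-year-old with hypertension scores 2, the threshold at which the
# guideline recommends oral anticoagulation:
print(cha2ds2_va(72, False, True, False, False, False))  # 2
```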
“Dropping the gender part of the risk score is not really a substantial change” from previous ESC or other guidelines, as different points were required in the past to recommend anticoagulants for women and men, Kotecha said, adding that “making the approach easier for clinicians may avoid penalizing women as well as nonbinary and transgender patients.”
Anne B. Curtis, MD, SUNY Distinguished Professor, Department of Medicine, Jacobs School of Medicine & Biomedical Sciences, University at Buffalo in New York, agreed.
Putting aside the question of female sex, she said that there are not a lot of people under the age of 65 years with “absolutely no risk factors,” and so, “if the only reason you would anticoagulate” someone of that age is because they are a woman, that “doesn’t make a lot of sense to me.”
The ESC guidelines are “trying to say, ‘look at the other risk factors, and if anything is there, go ahead and anticoagulate,’” Curtis said in an interview.
“It’s actually a very thoughtful decision,” Tedrow said, and not “intended to discount risk in women.” Rather, it’s a statement that acknowledges the problem of recommending anticoagulation therapy in women “for whom it is not appropriate.”
Joglar pointed out that that recommendation, although not characterized in the same way, was in fact included in the 2023 US guidelines.
“We wanted to use a more nuanced approach,” he said, and move away from using CHA2DS2–VASc as the prime determinant of whether to start oral anticoagulation and towards a magnitude risk assessment, in which female sex is seen as a risk modifier.
“The Europeans and the Americans are looking at the same data, so we often reach the same conclusions,” Joglar said, although “we sometimes use different wordings.”
Overall, Kotecha expressed the hope that the move “will lead to better implementation of guidelines, at the end of the day.”
“That’s all we can hope for: Patients will be offered a more individualized approach, leading to more appropriate use of treatment in the right patients.”
The newer direct oral anticoagulation is “a much simpler therapy,” he added. “There is very little monitoring, a similar risk of bleeding as aspirin, and yet the ability to largely prevent the high rate of stroke and thromboembolism associated with atrial fibrillation.”
“So, it’s a big ticket item for our communities and public health, particularly as atrial fibrillation is expected to double in prevalence in the next few decades and evidence is building that it can lead to vascular dementia in the long-term.”
No funding was declared. Kotecha declared relationships with Bayer, Protherics Medicines Development, Boston Scientific, Daiichi Sankyo, Boehringer Ingelheim, BMS-Pfizer Alliance, Amomed, and MyoKardia. Curtis declared relationships with Janssen Pharmaceuticals, Medtronic, and Abbott. Joglar declared no relevant relationships. Tedrow declared no relevant relationships.
A version of this article appeared on Medscape.com.
Vitamin K Supplementation Reduces Nocturnal Leg Cramps in Older Adults
TOPLINE:
Vitamin K supplementation significantly reduced the frequency, intensity, and duration of nocturnal leg cramps in older adults. No adverse events related to vitamin K were identified.
METHODOLOGY:
- Researchers conducted a multicenter, double-blind, placebo-controlled randomized clinical trial in China from September 2022 to December 2023.
- A total of 199 participants aged ≥ 65 years with at least two documented episodes of nocturnal leg cramps during a 2-week screening period were included.
- Participants were randomized in a 1:1 ratio to receive either 180 μg of vitamin K (menaquinone 7) or a placebo daily for 8 weeks.
- The primary outcome was the mean number of nocturnal leg cramps per week, while secondary outcomes were the duration and severity of muscle cramps.
- The ethics committees of Third People’s Hospital of Chengdu and Affiliated Hospital of North Sichuan Medical College approved the study, and all participants provided written informed consent.
TAKEAWAY:
- The vitamin K group experienced a significant reduction in the mean weekly frequency of cramps, from 2.60 [SD, 0.81] to 0.96 [SD, 1.41], compared with the placebo group, which maintained a mean weekly frequency of 3.63 (SD, 2.20) (P < .001).
- The severity of nocturnal leg cramps decreased more in the vitamin K group (mean difference, −2.55 [SD, 2.12] points) than in the placebo group (mean difference, −1.24 [SD, 1.16] points).
- The duration of nocturnal leg cramps also decreased more in the vitamin K group (mean difference, −0.90 [SD, 0.88] minutes) than in the placebo group (mean difference, −0.32 [SD, 0.78] minutes).
- No adverse events related to vitamin K use were identified, indicating a good safety profile for the supplementation.
IN PRACTICE:
“Given the generally benign characteristics of NLCs, treatment modality must be both effective and safe, thus minimizing the risk of iatrogenic harm,” the study authors wrote.
SOURCE:
This study was led by Jing Tan, MD, the Third People’s Hospital of Chengdu in Chengdu, China. It was published online on October 28 in JAMA Internal Medicine.
LIMITATIONS:
This study did not investigate the quality of life or sleep, which could have provided additional insights into the impact of vitamin K on nocturnal leg cramps. The relatively mild nature of nocturnal leg cramps experienced by the participants may limit the generalizability of the findings to populations with more severe symptoms.
DISCLOSURES:
This study was supported by grants from China Health Promotion Foundation and the Third People’s Hospital of Chengdu Scientific Research Project. Tan disclosed receiving personal fees from BeiGene, AbbVie, Pfizer, Xian Janssen Pharmaceutical, and Takeda Pharmaceutical outside the submitted work.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Heat Waves Pose Significant Health Risks for Dually Eligible Older Individuals
TOPLINE:
Heat waves are associated with an increase in heat-related emergency department visits, hospitalizations, and deaths among dually eligible individuals older than 65 years.
METHODOLOGY:
- The researchers conducted a retrospective time-series study using national Medicare and Medicaid data from 2016 to 2019 to assess the link between heat waves during warm months and adverse health events.
- A total of 5,448,499 dually eligible individuals (66% women; 20% aged ≥ 85 years) were included from 28,404 zip code areas across 50 states and Washington, DC.
- Heat waves were defined as three or more consecutive days of extreme heat with a maximum temperature of at least 90 °F and within the 97th percentile of daily maximum temperatures for each zip code (operationalized in the sketch after this list).
- Primary outcomes were daily counts of heat-related emergency department visits and hospitalizations.
- Secondary outcomes were all-cause and heat-specific emergency department visits, all-cause and heat-specific hospitalizations, deaths, and long-term nursing facility placements within 3 months after a heat wave.
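As a rough illustration of how that heat wave definition could be operationalized, the sketch below flags qualifying runs of days; the 97th-percentile threshold is assumed to be precomputed from each zip code's historical daily maxima, which simplifies the study's actual method.

```python
# Rough operationalization of the stated definition: flag runs of three or
# more consecutive days whose daily maximum is both >= 90 F and at or above
# the zip code's 97th percentile of daily maxima. The percentile threshold
# (p97_f) is assumed precomputed from a longer historical baseline.

def heat_wave_days(daily_max_f: list[float], p97_f: float) -> list[bool]:
    """Flag days that fall inside a qualifying heat wave run."""
    extreme = [t >= 90 and t >= p97_f for t in daily_max_f]
    flags = [False] * len(daily_max_f)
    run_start = 0
    for i in range(len(extreme) + 1):
        if i == len(extreme) or not extreme[i]:
            if i - run_start >= 3:  # runs of 3+ extreme days qualify
                for j in range(run_start, i):
                    flags[j] = True
            run_start = i + 1
    return flags

# One week of daily maxima (F) with an assumed local 97th percentile of 90.5:
print(heat_wave_days([88, 91, 95, 96, 97, 89, 92], p97_f=90.5))
# [False, True, True, True, True, False, False]  (the lone 92 F day is no wave)
```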
TAKEAWAY:
- Heat waves were associated with a 10% increase in heat-related emergency department visits (incidence rate ratio [IRR], 1.10; 95% CI, 1.08-1.12) and a 7% increase in heat-related hospitalizations (IRR, 1.07; 95% CI, 1.04-1.09).
- Mortality rates were 4% higher during heat wave days than during non–heat wave days (IRR, 1.04; 95% CI, 1.01-1.07).
- No significant difference was found in rates of long-term nursing facility placements or heat-related emergency department visits for nursing facility residents.
- All racial and ethnic groups showed higher incidence rates of heat-related emergency department visits during heat waves, especially among beneficiaries identified as Asian (IRR, 1.21; 95% CI, 1.12-1.29). Rates were higher among individuals residing in the Northwest, Ohio Valley, and the West.
IN PRACTICE:
“In healthcare settings, clinicians should incorporate routine heat wave risk assessments into clinical practice, especially in regions more susceptible to extreme heat, for all dual-eligible beneficiaries and other at-risk patients,” wrote Jose F. Figueroa, MD, MPH, of the Harvard T.H. Chan School of Public Health in Boston, in an invited commentary. “Beyond offering preventive advice, clinicians can adjust medications that may increase their patients’ susceptibility during heat waves, or they can refer patients to social workers and social service organizations to ensure that they are protected at home.”
SOURCE:
This study was led by Hyunjee Kim, PhD, of the Center for Health Systems Effectiveness at Oregon Health & Science University, Portland. It was published online in JAMA Health Forum.
LIMITATIONS:
This study relied on a claims database to identify adverse events, which may have led to omissions in coding, particularly for heat-related conditions if the diagnostic codes for heat-related symptoms had not been adopted. This study did not adjust for variations in air quality or green space, which could have confounded the association of interest. Indoor heat exposures or adaptive behaviors, such as air conditioning use, were not considered. The analysis could not compare the association of heat waves with adverse events between those with dual eligibility and those without dual eligibility.
DISCLOSURES:
This study was supported by the National Institute on Aging. One author reported receiving grants from the National Institutes of Health outside the submitted work. No other disclosures were reported.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
On Second Thought: Aspirin for Primary Prevention — What We Really Know
This transcript has been edited for clarity.
Our recommendations vis-à-vis aspirin have evolved at a dizzying pace. The young’uns watching us right now don’t know what things were like in the 1980s. The Reagan era was a wild, heady time when nuclear war was imminent and we didn’t prescribe aspirin to patients.
That only started in 1988, which was a banner year in human history. Not because a number of doves were incinerated by the lighting of the Olympic torch at the Seoul Olympics — look it up if you don’t know what I’m talking about — but because 1988 saw the publication of the ISIS-2 trial, which first showed a mortality benefit to prescribing aspirin post–myocardial infarction (MI).
Giving patients aspirin during or after a heart attack is not controversial. It’s one of the few things in this business that isn’t, but that’s secondary prevention — treating somebody after they develop a disease. Primary prevention, treating them before they have their incident event, is a very different ballgame. Here, things are messy.
For one thing, the doses used have been very inconsistent. We should point out that the reason for 81 mg of aspirin is very arbitrary and is rooted in the old apothecary system of weights and measures. A standard dose of aspirin was 5 grains, where 20 grains made 1 scruple, 3 scruples made 1 dram, 8 drams made 1 oz, and 12 oz made 1 lb, because screw you, metric system. Therefore, 5 grains was 325 mg of aspirin, and one quarter of the standard dose became 81 mg once you rounded off the decimal.
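As a quick check on that arithmetic (using the usual pharmacy rounding of 1 grain ≈ 65 mg; the exact equivalence is 64.8 mg):

$$
5~\text{grains} \times 65~\frac{\text{mg}}{\text{grain}} = 325~\text{mg},
\qquad
\frac{325~\text{mg}}{4} = 81.25~\text{mg} \approx 81~\text{mg}.
$$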
People have tried all kinds of dosing structures with aspirin prophylaxis. The Physicians’ Health Study used full-dose aspirin, 325 mg every 2 days, while the Hypertension Optimal Treatment (HOT) trial tested 75 mg daily and the Women’s Health Study tested 100 mg, but every other day.
Ironically, almost no one has studied 81 mg every day, which is weird if you think about it. The bigger problem here is not the variability of doses used, but the discrepancy when you look at older vs newer studies.
Older studies, like the Physicians’ Health Study, did show a benefit, at least in the subgroup of patients over age 50 years, which is probably where the “everybody over 50 should be taking an aspirin” idea comes from, at least as near as I can tell.
More recent studies, like the Women’s Health Study, ASPREE, or ARRIVE, didn’t show a benefit. I know what you’re thinking: Newer stuff is always better. That’s why you should never trust anybody over age 40 years. The context of primary prevention studies has changed. In the ‘80s and ‘90s, people smoked more and we didn’t have the same medications that we have today. We talked about all this in the beta-blocker video to explain why beta-blockers don’t seem to have a benefit post MI.
We have a similar issue here. The magnitude of the benefit with aspirin primary prevention has decreased because we’re all just healthier overall. So, yay! Progress! Here’s where the numbers matter. No one is saying that aspirin doesn’t help. It does.
If we look at the 2019 meta-analysis published in JAMA, there is a cardiovascular benefit. The numbers bear that out. I know you’re all here for the math, so here we go. Aspirin reduced the composite cardiovascular endpoint from 65.2 to 60.2 events per 10,000 patient-years, or, to put it more meaningfully in absolute risk reduction terms because that’s my jam, an absolute risk reduction of 0.41%, which means a number needed to treat of 241. That’s okay-ish. It’s not super-great, but it may be justifiable for something that costs next to nothing.
The tradeoff is bleeding. Major bleeding increased from 16.4 to 23.1 bleeds per 10,000 patient-years, or an absolute risk increase of 0.47%, which is a number needed to harm of 210. That’s the problem. Aspirin does prevent heart disease. The benefit is small, for sure, but the real problem is that it’s outweighed by the risk of bleeding, so you’re not really coming out ahead.
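For anyone who wants to reproduce those numbers, the number needed to treat (NNT) and number needed to harm (NNH) are simply the reciprocals of the absolute risk difference; the quoted 241 and 210 come from the unrounded risk differences, so reciprocals of the rounded percentages land slightly off:

$$
\text{NNT} = \frac{1}{\text{ARR}} \approx \frac{1}{0.0041} \approx 244,
\qquad
\text{NNH} = \frac{1}{\text{ARI}} \approx \frac{1}{0.0047} \approx 213.
$$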
The real tragedy here is that the public is locked into this idea that everyone over age 50 years should be taking an aspirin. Even today, even though guidelines have recommended against aspirin for primary prevention for some time, data from the National Health Interview Survey found that nearly one in three older adults takes aspirin for primary prevention when they shouldn’t be. That’s a large number of people. That’s millions of Americans — and Canadians, but nobody cares about us. It’s fine.
That’s the point. We’re not debunking aspirin. It does work. The benefits are just really small in a primary prevention population and offset by the admittedly also really small risks of bleeding. It’s a tradeoff that doesn’t really work in your favor.
But that’s aspirin for cardiovascular disease. When it comes to cancer or DVT prophylaxis, that’s another really interesting story. We might have to save that for another time. Do I know how to tease a sequel or what?
Labos, a cardiologist at Kirkland Medical Center, Montreal, Quebec, Canada, has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.