Deprescribe Low-Value Meds to Reduce Polypharmacy Harms
VANCOUVER, BRITISH COLUMBIA — While polypharmacy is inevitable for patients with multiple chronic diseases, not all medications improve patient-oriented outcomes, members of the Patients, Experience, Evidence, Research (PEER) team, a group of Canadian primary care professionals who develop evidence-based guidelines, told attendees at the Family Medicine Forum (FMF) 2024.
In a thought-provoking presentation called “Axe the Rx: Deprescribing Chronic Medications with PEER,” the panelists gave examples of medications that may be safely stopped or tapered, particularly for older adults “whose pill bag is heavier than their lunch bag.”
Curbing Cardiovascular Drugs
The 2021 Canadian Cardiovascular Society Guidelines for the Management of Dyslipidemia for the Prevention of Cardiovascular Disease in Adults call for an LDL-C target of < 1.8 mmol/L in secondary cardiovascular prevention, with add-on therapies such as proprotein convertase subtilisin/kexin type 9 inhibitors, ezetimibe, or both if that target is not reached on the maximal dosage of a statin.
But family physicians do not need to follow this guidance for their patients who have had a myocardial infarction, said Ontario family physician Jennifer Young, MD, a physician advisor in the College of Family Physicians of Canada’s Knowledge Experts and Tools Program.
Treating to below 1.8 mmol/L “means lab testing for the patients,” Young told this news organization. “It means increasing doses [of a statin] to try and get to that level.” If the patient is already on the highest dose of a statin, it means adding other medications that lower cholesterol.
“If that was translating into better outcomes like [preventing] death and another heart attack, then all of that extra effort would be worth it,” said Young. “But we don’t have evidence that it actually does have a benefit for outcomes like death and repeated heart attacks,” compared with putting them on a high dose of a potent statin.
Tapering Opioids
Before placing patients on an opioid taper, clinicians should first assess them for opioid use disorder (OUD), said Jessica Kirkwood, MD, assistant professor of family medicine at the University of Alberta in Edmonton, Canada. She suggested using the Prescription Opioid Misuse Index questionnaire to do so.
Clinicians should be much more careful in initiating a taper with patients with OUD, said Kirkwood. They must ensure that these patients are motivated to discontinue their opioids. “We’re losing 21 Canadians a day to the opioid crisis. We all know that cutting someone off their opioids and potentially having them seek opioids elsewhere through illicit means can be fatal.”
In addition, clinicians should spend more time counseling patients with OUD than those without, Kirkwood continued. They must explain to these patients how they are being tapered (eg, the intervals and doses) and highlight the benefits of a taper, such as reduced constipation. Opioid agonist therapy (such as methadone or buprenorphine) can be considered in these patients.
Some research has pointed to the importance of patient motivation as a factor in the success of opioid tapers, noted Kirkwood.
Deprescribing Benzodiazepines
Benzodiazepine receptor agonists, too, often can be deprescribed. These drugs should not be prescribed to promote sleep on a long-term basis. Yet clinicians commonly encounter patients who have been taking them for more than a year, said pharmacist Betsy Thomas, assistant adjunct professor of family medicine at the University of Alberta.
The medications “are usually fairly effective for the first couple of weeks to about a month, and then the benefits start to decrease, and we start to see more harms,” she said.
Some of the harms associated with continued use of benzodiazepine receptor agonists include delayed reaction time and impaired cognition, which can affect the ability to drive and increase the risk for falls and hip fractures, she noted. Some research suggests that these drugs should not be used to treat insomnia in patients aged 65 years or older.
Clinicians should encourage tapering the use of benzodiazepine receptor agonists to minimize dependence and transition patients to nonpharmacologic approaches such as cognitive behavioral therapy to manage insomnia, she said. A recent study demonstrated the efficacy of this approach, and Thomas suggested that family physicians visit the mysleepwell.ca website for more information.
Young, Kirkwood, and Thomas reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM FMF 2024
As Populations Age, Occam’s Razor Loses Its Diagnostic Edge
The principle of parsimony, often referred to as “Occam’s razor,” favors a unifying explanation over multiple ones, as long as both explain the data equally well. This heuristic, widely used in medical practice, advocates for simpler explanations rather than complex theories. However, its application in modern medicine has sparked debate.
“Hickam’s dictum,” a counterargument to Occam’s razor, asserts that patients — especially as populations grow older and frailer — can simultaneously have multiple, unrelated diagnoses. These contrasting perspectives on clinical reasoning, balancing diagnostic simplicity and complexity, are both used in daily medical practice.
But are these two axioms truly in conflict, or is this a false dichotomy?
Occam’s Razor and Simple Diagnoses
Interpersonal variability in diagnostic approaches, shaped by the subjective nature of many judgments, complicates the formal evaluation of diagnostic parsimony (Occam’s razor). Indirect evidence suggests that prioritizing simplicity in diagnosis can result in under-detection of secondary conditions, particularly in patients with chronic illnesses.
For example, older patients with a known chronic illness were found to have a 30%-60% lower likelihood of being treated for an unrelated secondary diagnosis than matched peers without the chronic condition. Other studies indicate that a readily available, simple diagnosis can lead clinicians to prematurely close their diagnostic reasoning, overlooking other significant illnesses.
Beyond Hickam’s Dictum and Occam’s Razor
A recent study explored the phenomenon of multiple diagnoses by examining the supposed conflict between Hickam’s dictum and Occam’s razor, as well as the ambiguities in how they are interpreted and used by physicians in clinical reasoning.
Part 1: Researchers identified articles on PubMed related to Hickam’s dictum or conflicting with Occam’s razor, categorizing instances into four models of Hickam’s dictum:
1. Incidentaloma: An asymptomatic condition discovered accidentally.
2. Preexisting diagnosis: A known condition in the patient’s medical history.
3. Causally related disease: A complication, association, epiphenomenon, or underlying cause connected to the primary diagnosis.
4. Coincidental and independent disease: A symptomatic condition unrelated to the primary diagnosis.
Part 2: Researchers analyzed 220 case records from Massachusetts General Hospital, Boston, and clinical problem-solving reports published in The New England Journal of Medicine between 2017 and 2023. In every case, the final diagnosis was a unifying one.
Part 3: In an online survey of 265 physicians, 79% identified coincidental symptomatic conditions (category 4) as the least likely type of multiple diagnoses. Preexisting conditions (category 2) emerged as the most common, reflecting the tendency to add new diagnoses to a patient’s existing health profile. Almost one third of instances referencing Hickam’s dictum or violations of Occam’s razor fell into category 2.
Causally related diseases (category 3) were probabilistically dependent, meaning that the presence of one condition increased the likelihood of the other, based on the strength (often unknown) of the causal relationship.
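To make that dependence concrete, here is a minimal sketch with invented probabilities (none of these numbers come from the study): a causal link raises the chance that two conditions co-occur well above what independence alone would predict.

```python
# Illustrative sketch of probabilistic dependence between two conditions.
# All probabilities below are invented for illustration only.

p_a = 0.10            # P(condition A), e.g., the primary diagnosis
p_b = 0.05            # P(condition B) in the general population
p_b_given_a = 0.20    # P(B | A): a causal link raises B's likelihood when A is present

# If A and B were independent, they would co-occur with probability:
p_both_independent = p_a * p_b          # 0.005
# With the causal dependence, co-occurrence is four times more likely:
p_both_dependent = p_a * p_b_given_a    # 0.02

print(p_both_independent, p_both_dependent)
```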
Practical Insights
The significant finding of this work was that multiple diagnoses occur in predictable patterns, informed by causal connections between conditions, symptom onset timing, and likelihood. The principle of common causation supports the search for a unifying diagnosis for coincidental symptoms. It is not surprising that causally related phenomena often co-occur, as reflected by the fact that 40% of multiple diagnoses in the study’s first part were causally linked.
Thus, understanding multiple diagnoses goes beyond Hickam’s dictum and Occam’s razor. It requires not only identifying diseases but also examining their causal relationships and the timing of symptom onset. A unifying diagnosis is not equivalent to a single diagnosis; rather, it represents a causal pathway linking underlying pathologic changes to acute presentations.
This story was translated from Univadis Italy using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Aliens, Ian McShane, and Heart Disease Risk
This transcript has been edited for clarity.
I was really struggling to think of a good analogy to explain the glaring problem of polygenic risk scores (PRS) this week. But I think I have it now. Go with me on this.
An alien spaceship parks itself, Independence Day style, above a local office building.
But unlike the aliens that gave such a hard time to Will Smith and Brent Spiner, these are benevolent, technologically superior guys. They shine a mysterious green light down on the building and then announce, maybe via telepathy, that 6% of the people in that building will have a heart attack in the next year.
They move on to the next building. “Five percent will have a heart attack in the next year.” And the next, 7%. And the next, 2%.
Let’s assume the aliens are entirely accurate. What do you do with this information?
Most of us would suggest that you find out who was in the buildings with the higher percentages. You check their cholesterol levels, get them to exercise more, do some stress tests, and so on.
But that said, you’d still be spending a lot of money on a bunch of people who were not going to have heart attacks. So, a crack team of spies — in my mind, this is definitely led by a grizzled Ian McShane — infiltrates the alien ship, steals this predictive ray gun, and starts pointing it, not at buildings but at people.
In this scenario, one person could have a 10% chance of having a heart attack in the next year. Another person has a 50% chance. The aliens, seeing this, leave us one final message before flying into the great beyond: “No, you guys are doing it wrong.”
This week: The people and companies using an advanced predictive technology, PRS, wrong — and a study that shows just how problematic this is.
We all know that genes play a significant role in our health outcomes. Some diseases (Huntington disease, cystic fibrosis, sickle cell disease, hemochromatosis, and Duchenne muscular dystrophy, for example) are entirely driven by genetic mutations.
The vast majority of chronic diseases we face are not driven by genetics, but they may be enhanced by genetics. Coronary heart disease (CHD) is a prime example. There are clearly environmental risk factors, like smoking, that dramatically increase risk. But there are also genetic underpinnings; about half the risk for CHD comes from genetic variation, according to one study.
But in the case of those common diseases, it’s not one gene that leads to increased risk; it’s the aggregate effect of multiple risk genes, each contributing a small amount of risk to the final total.
The promise of PRS was based on this fact. Take the genome of an individual, identify all the risk genes, and integrate them into some final number that represents your genetic risk of developing CHD.
The way you derive a PRS is to take a big group of people and sequence their genomes. Then, you see who develops the disease of interest — in this case, CHD. If the people who develop CHD are more likely to have a particular mutation, that mutation goes into the risk score. Risk scores can integrate tens, hundreds, even thousands of individual mutations to create that final score.
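If you want to see the mechanics, here is a minimal sketch of that aggregation step in code. Everything in it is hypothetical: the variant names and weights are placeholders for illustration, not values from any published CHD score.

```python
# Toy polygenic risk score: a weighted sum of risk-allele counts.
# Variant IDs and weights are hypothetical, for illustration only.

# Per-variant effect weights (in a real score, e.g., log-odds from a GWAS)
weights = {"variant_A": 0.12, "variant_B": 0.08, "variant_C": -0.05}

# One individual's genotype: count of risk alleles (0, 1, or 2) per variant
genotype = {"variant_A": 2, "variant_B": 0, "variant_C": 1}

def polygenic_risk_score(genotype: dict, weights: dict) -> float:
    """Sum allele count times effect weight across all scored variants."""
    return sum(weights[v] * genotype.get(v, 0) for v in weights)

print(polygenic_risk_score(genotype, weights))  # 0.12*2 + 0.08*0 - 0.05*1 = 0.19
```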
There are literally dozens of PRS for CHD. And there are companies that will calculate yours right now for a reasonable fee.
The accuracy of these scores is assessed at the population level. It’s the alien ray gun thing. Researchers apply the PRS to a big group of people and say 20% of them should develop CHD. If indeed 20% develop CHD, they say the score is accurate. And that’s true.
But what happens next is the problem. Companies and even doctors have been marketing PRS to individuals. And honestly, it sounds amazing. “We’ll use sophisticated techniques to analyze your genetic code and integrate the information to give you your personal risk for CHD.” Or dementia. Or other diseases. A lot of people would want to know this information.
It turns out, though, that this is where the system breaks down. And it is nicely illustrated by this study, appearing November 16 in JAMA.
The authors wanted to see how PRS, which are developed to predict disease in a group of people, work when applied to an individual.
They identified 48 previously published PRS for CHD. They applied those scores to more than 170,000 individuals across multiple genetic databases. And, by and large, the scores worked as advertised, at least across the entire group. The weighted accuracy of all 48 scores was around 78%. They aren’t perfect, of course. We wouldn’t expect them to be, since CHD is not entirely driven by genetics. But 78% accurate isn’t too bad.
But that accuracy is at the population level. At the level of the office building. At the individual level, it was a vastly different story.
This is best illustrated by this plot, which shows the score from 48 different PRS for CHD within the same person. A note here: It is arranged by the publication date of the risk score, but these were all assessed on a single blood sample at a single point in time in this study participant.
The individual scores are all over the map. Using one risk score gives an individual a risk that is near the 99th percentile — a ticking time bomb of CHD. Another score indicates a level of risk at the very bottom of the spectrum — highly reassuring. A bunch of scores fall somewhere in between. In other words, as a doctor, the risk I will discuss with this patient is more strongly determined by which PRS I happen to choose than by his actual genetic risk, whatever that is.
This may seem counterintuitive. All these risk scores were similarly accurate within a population; how can they all give different results to an individual? The answer is simpler than you may think. As long as a given score makes one extra good prediction for each extra bad prediction, its accuracy is not changed.
Let’s imagine we have a population of 40 people.
Risk score model 1 correctly classified 30 of them for 75% accuracy. Great.
Risk score model 2 also correctly classified 30 of our 40 individuals, for 75% accuracy. It’s just a different 30.
Risk score model 3 also correctly classified 30 of 40, but another different 30.
I’ve colored this to show you all the different overlaps. What you can see is that although each score has similar accuracy, the individual people have a bunch of different colors, indicating that some scores worked for them and some didn’t. That’s a real problem.
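To put numbers behind those colors, here is a minimal sketch with fabricated sets of “correct” predictions: three models share the same 75% population-level accuracy while agreeing on far fewer individuals.

```python
# Three toy risk models, each correct on 30 of the same 40 people (75%),
# but correct on different people. All sets are fabricated for illustration.

population = list(range(40))

correct_1 = set(range(0, 30))                        # people 0-29
correct_2 = set(range(10, 40))                       # people 10-39
correct_3 = set(range(0, 10)) | set(range(20, 40))   # people 0-9 and 20-39

for name, correct in [("model 1", correct_1),
                      ("model 2", correct_2),
                      ("model 3", correct_3)]:
    print(name, "accuracy:", len(correct) / len(population))  # 0.75 each time

# Yet only 10 of the 40 people are classified correctly by all three models.
print("correct under all three:", len(correct_1 & correct_2 & correct_3))  # 10
```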
This has not stopped companies from advertising PRS for all sorts of diseases. Companies are even using PRS to decide which fetuses to implant during IVF therapy, which is a particularly egregious misuse of this technology that I have written about before.
How do you fix this? Our aliens tried to warn us. This is not how you are supposed to use this ray gun. You are supposed to use it to identify groups of people at higher risk to direct more resources to that group. That’s really all you can do.
It’s also possible that we need to match the risk score to the individual in a better way. This is likely driven by the fact that risk scores tend to work best in the populations in which they were developed, and many of them were developed in people of largely European ancestry.
It is worth noting that if a PRS had perfect accuracy at the population level, it would also necessarily have perfect accuracy at the individual level. But there aren’t any scores like that. It’s possible that combining various scores may increase the individual accuracy, but that hasn’t been demonstrated yet either.
Look, genetics is and will continue to play a major role in healthcare. At the same time, sequencing entire genomes is a technology that is ripe for hype and thus misuse. Or even abuse. Fundamentally, this JAMA study reminds us that accuracy in a population and accuracy in an individual are not the same. But more deeply, it reminds us that just because a technology is new or cool or expensive doesn’t mean it will work in the clinic.
Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Connecticut. He has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Sitting for More Than 10 Hours Daily Ups Heart Disease Risk
TOPLINE:
Sedentary time exceeding 10.6 h/d is linked to an increased risk for atrial fibrillation, heart failure, myocardial infarction, and cardiovascular (CV) mortality, researchers found. The risk persists even in individuals who meet recommended physical activity levels.
METHODOLOGY:
- Researchers used a validated machine learning approach to investigate the relationships between sedentary behavior and the future risks for CV illness and mortality in 89,530 middle-aged and older adults (mean age, 62 years; 56% women) from the UK Biobank.
- Participants provided data from a wrist-worn triaxial accelerometer that recorded their movements over a period of 7 days.
- Machine learning algorithms classified accelerometer signals into four classes of activity: sleep, sedentary behavior, light physical activity, and moderate to vigorous physical activity.
- Participants were followed up for a median of 8 years through linkage to national health-related datasets in England, Scotland, and Wales.
- The median sedentary time was 9.4 h/d.
TAKEAWAY:
- During the follow-up period, 3638 individuals (4.9%) experienced incident atrial fibrillation, 1854 (2.09%) developed incident heart failure, 1610 (1.84%) experienced incident myocardial infarction, and 846 (0.94%) died from cardiovascular causes.
- The risks for atrial fibrillation and myocardial infarction increased steadily with an increase in sedentary time, with sedentary time greater than 10.6 h/d showing a modest increase in risk for atrial fibrillation (hazard ratio [HR], 1.11; 95% CI, 1.01-1.21).
- The risks for heart failure and CV mortality were low until sedentary time surpassed approximately 10.6 h/d, after which they rose by 45% (HR, 1.45; 95% CI, 1.28-1.65) and 62% (HR, 1.62; 95% CI, 1.34-1.96), respectively.
- The associations were attenuated but remained significant for CV mortality (HR, 1.33; 95% CI, 1.07-1.64) in individuals who met the recommended levels for physical activity yet were sedentary for more than 10.6 h/d. Reallocating 30 minutes of sedentary time to other activities reduced the risk for heart failure (HR, 0.93; 95% CI, 0.90-0.96) among those who were sedentary more than 10.6 h/d.
IN PRACTICE:
The study “highlights a complex interplay between sedentary behavior and physical activity, ultimately suggesting that sedentary behavior remains relevant for CV disease risk even among individuals meeting sufficient” levels of activity, the researchers reported.
“Individuals should move more and be less sedentary to reduce CV risk. ... Being a ‘weekend warrior’ and meeting guideline levels of [moderate to vigorous physical activity] of 150 minutes/week will not completely abolish the deleterious effects of extended sedentary time of > 10.6 hours per day,” Charles B. Eaton, MD, MS, of the Warren Alpert Medical School of Brown University in Providence, Rhode Island, wrote in an editorial accompanying the journal article.
SOURCE:
The study was led by Ezimamaka Ajufo, MD, of Brigham and Women’s Hospital in Boston. It was published online on November 15, 2024, in the Journal of the American College of Cardiology.
LIMITATIONS:
Wrist-based accelerometers cannot assess specific contexts for sedentary behavior and may misclassify standing time as sedentary time, and these limitations may have affected the findings. Physical activity was measured for 1 week only, which might not have fully represented habitual activity patterns. The sample included predominantly White participants and was enriched for health and socioeconomic status, which may have limited the generalizability of the findings.
DISCLOSURES:
The authors disclosed receiving research support, grants, and research fellowships and collaborations from various institutions and pharmaceutical companies, as well as serving on their advisory boards.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
‘No Hint of Benefit’ in Large Colchicine Trial
WASHINGTON — The CLEAR SYNERGY (OASIS 9) study, called “the largest trial ever of colchicine in acute MI,” showed no hint of benefit in the adverse event curves for colchicine relative to placebo over 5 years, which suggests that the role of this drug after myocardial infarction (MI) “is uncertain,” Sanjit Jolly, MD, an interventional cardiologist at Hamilton Health Sciences and a professor of medicine at McMaster University in Hamilton, Ontario, Canada, reported at Transcatheter Cardiovascular Therapeutics (TCT) 2024.
For the primary composite outcome — cardiovascular death, MI, stroke, and ischemia-driven revascularization — the event curves in the colchicine and placebo groups remained essentially superimposed over 5 years of follow-up, with only a slight separation after 4 years. The primary endpoint showed a nominal 1% difference in favor of colchicine (hazard ratio [HR], 0.99; P = .93).
There were no meaningful differences in any of the individual endpoint components; all 95% CIs straddled the line of unity. Rates of cardiovascular death (3.3% vs 3.2%) and stroke (1.4% vs 1.2%) were numerically higher in the colchicine group than in the placebo group. Rates of MI (2.9% vs 3.1%) and ischemia-driven revascularization (4.6% vs 4.7%) were numerically lower in the colchicine group.
No Difference
None of the adverse outcomes, including all-cause death (4.6% vs 5.1%), approached significance, with the exception of noncardiovascular death (1.3% vs 1.9%). For this outcome, the 95% CI stopped just short of the line of unity (HR, 0.68; 95% CI, 0.46-0.99).
Rates of adverse events (31.9% vs 31.7%; P = .86), serious adverse events (6.7% vs 7.4%; P = .22), and serious infections (2.5% vs 2.9%; P = .85) were similar in the colchicine and placebo groups, but diarrhea, a known side effect of colchicine, was higher in the colchicine group (10.2% vs 6.6%; P < .001).
Given these results, a panelist questioned the use of the word “uncertain” to describe the findings during the late-breaker session in which these results were presented.
“I think you are selling yourself short,” said J. Dawn Abbott, MD, director of the Interventional Cardiology Fellowship Training Program at the Lifespan Cardiovascular Institute, Brown University in Providence, Rhode Island. Based on the size and conduct of this trial, she called the results “definitive” and suggested that the guidelines should be adjusted.
The OASIS 9 Trial
In OASIS 9, 3528 patients were randomized to colchicine, and 3534 were randomized to placebo. A second randomization in both groups was to spironolactone or placebo; these results will be presented at the upcoming American Heart Association (AHA) 2024 meeting. Both analyses will be published in The New England Journal of Medicine at that time, Jolly reported.
The study involved 104 sites in Australia, Egypt, Europe, Nepal, and North America. Follow-up in both groups exceeded 99%. Most patients had an ST-elevation MI (STEMI), but about 5% of those enrolled had a non-STEMI. Less than 10% of patients had experienced a previous MI.
Fewer than 5% of patients were discharged on sodium-glucose cotransporter 2 inhibitor therapy, and more than 95% were discharged on aspirin and a statin. Nearly 80% were discharged on an angiotensin-converting enzyme inhibitor, and most patients received an anticoagulant. More than 95% of patients were implanted with a drug-eluting stent.
At month 3, C-reactive protein levels were significantly lower in the colchicine group than in the placebo group. C-reactive protein is a biomarker of the inflammation that colchicine is thought to suppress, and this anti-inflammatory effect is considered the drug’s primary mechanism of action. It has been cited as the probable explanation for the positive results of the COLCOT and LODOCO2 trials, published in 2019 and 2020, respectively.
In COLCOT, which randomized 4745 patients who experienced an acute MI in the previous 30 days, colchicine was associated with a 23% reduction in a composite major cardiovascular adverse events endpoint relative to placebo (HR, 0.77; P = .02). In LODOCO2, which randomized 5522 patients with chronic coronary disease, colchicine was associated with a 31% reduction in an adverse event composite endpoint (HR, 0.68; P < .0001).
However, two more recent trials — CONVINCE and CHANCE-3 — showed no difference between colchicine and placebo for the endpoint of recurrent stroke at 90 days. CONVINCE, with approximately 3000 patients, was relatively small, whereas CHANCE-3 randomized more than 8000 patients and showed no effect on the risk for stroke (HR, 0.98; 95% CI, 0.83-1.16).
New Data Challenge Guidelines
Of these trials, COLCOT was the most similar to OASIS 9, according to Jolly. Among the differences, OASIS 9 initiated treatment earlier after the index MI and was larger than the other trials, so it had more power to address the study question.
Given the absence of benefit, Jolly indicated that OASIS 9 might disrupt both the joint American College of Cardiology and AHA guidelines, which gave colchicine a class 2b recommendation in 2023, and the European Society of Cardiology guidelines, which gave colchicine a 2a recommendation.
“This is a big deal for me,” said Ajay J. Kirtane, director of the Interventional Cardiovascular Care program at Columbia University in New York City, who noted that he had been using colchicine routinely and that these data have changed his opinion.
The previous data supporting the use of colchicine “were just so-so,” he explained. “Now I have a good rationale” for forgoing the routine use of this therapy.
Jolly said that he had put his own father on colchicine after an acute MI on the basis of the guidelines, but immediately took him off this therapy when the data from OASIS 9 were unblinded.
“The only signal from this trial was an increased risk of diarrhea,” Jolly said. The results, at the very least, suggest that colchicine “is not for everyone” after an acute MI, although he emphasized that these results do not rule out the potential for benefit from anti-inflammatory therapy. Ongoing trials, including one targeting interleukin 6, a cytokine associated with inflammation, remain of interest, he added.
A version of this article first appeared on Medscape.com.
FROM TCT 2024
Heat Waves Pose Significant Health Risks for Dually Eligible Older Individuals
TOPLINE:
Heat waves are associated with an increase in heat-related emergency department visits, hospitalizations, and deaths among dually eligible individuals older than 65 years.
METHODOLOGY:
- The researchers conducted a retrospective time-series study using national Medicare and Medicaid data from 2016 to 2019 to assess the link between heat waves during warm months and adverse health events.
- A total of 5,448,499 dually eligible individuals (66% women; 20% aged ≥ 85 years) were included from 28,404 zip code areas across 50 states and Washington, DC.
- Heat waves were defined as three or more consecutive days of extreme heat with a maximum temperature of at least 90 °F and at or above the 97th percentile of daily maximum temperatures for each zip code (see the sketch after this list).
- Primary outcomes were daily counts of heat-related emergency department visits and hospitalizations.
- Secondary outcomes were all-cause and heat-specific emergency department visits, all-cause and heat-specific hospitalizations, deaths, and long-term nursing facility placements within 3 months after a heat wave.
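To make that heat wave definition concrete, here is a minimal sketch of how qualifying days might be flagged in a daily series of maximum temperatures. The function name, the run-detection logic, and the use of a single series-wide percentile are our simplifications; the study derived thresholds per zip code from each area's own record:

```python
import numpy as np

def flag_heat_wave_days(tmax_f: np.ndarray, min_temp: float = 90.0,
                        percentile: float = 97.0, min_run: int = 3) -> np.ndarray:
    """Mark days in runs of >= min_run consecutive days that are both
    at least min_temp (deg F) and at or above the given percentile of the series."""
    threshold = np.percentile(tmax_f, percentile)
    hot = (tmax_f >= min_temp) & (tmax_f >= threshold)
    flags = np.zeros(len(tmax_f), dtype=bool)
    run_start = None
    for i, is_hot in enumerate(np.append(hot, False)):  # sentinel closes a trailing run
        if is_hot and run_start is None:
            run_start = i
        elif not is_hot and run_start is not None:
            if i - run_start >= min_run:
                flags[run_start:i] = True
            run_start = None
    return flags
```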
TAKEAWAY:
- Heat waves were associated with a 10% increase in heat-related emergency department visits (incidence rate ratio [IRR], 1.10; 95% CI, 1.08-1.12) and a 7% increase in heat-related hospitalizations (IRR, 1.07; 95% CI, 1.04-1.09).
- Mortality rates were 4% higher during heat wave days than during non–heat wave days (IRR, 1.04; 95% CI, 1.01-1.07).
- No significant difference was found in rates of long-term nursing facility placements or heat-related emergency department visits for nursing facility residents.
- All racial and ethnic groups showed higher incidence rates of heat-related emergency department visits during heat waves, especially among beneficiaries identified as Asian (IRR, 1.21; 95% CI, 1.12-1.29). Rates were higher among individuals residing in the Northwest, Ohio Valley, and the West.
IN PRACTICE:
“In healthcare settings, clinicians should incorporate routine heat wave risk assessments into clinical practice, especially in regions more susceptible to extreme heat, for all dual-eligible beneficiaries and other at-risk patients,” wrote Jose F. Figueroa, MD, MPH, of the Harvard T.H. Chan School of Public Health in Boston, in an invited commentary. “Beyond offering preventive advice, clinicians can adjust medications that may increase their patients’ susceptibility during heat waves, or they can refer patients to social workers and social service organizations to ensure that they are protected at home.”
SOURCE:
This study was led by Hyunjee Kim, PhD, of the Center for Health Systems Effectiveness at Oregon Health & Science University, Portland. It was published online in JAMA Health Forum.
LIMITATIONS:
This study relied on a claims database to identify adverse events, which may have led to omissions in coding, particularly for heat-related conditions if the diagnostic codes for heat-related symptoms had not been adopted. This study did not adjust for variations in air quality or green space, which could have confounded the association of interest. Indoor heat exposures or adaptive behaviors, such as air conditioning use, were not considered. The analysis could not compare the association of heat waves with adverse events between those with dual eligibility and those without dual eligibility.
DISCLOSURES:
This study was supported by the National Institute on Aging. One author reported receiving grants from the National Institutes of Health outside the submitted work. No other disclosures were reported.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
How Extreme Rainfall Amplifies Health Risks
Climate change is intensifying the variability of precipitation, driving more extreme daily and cumulative rainfall events. Awareness of the effects of these events is crucial for understanding the complex health consequences of climate change. Physicians have long advised patients to move to a better climate, and although such recommendations were rarely based on precise scientific knowledge, the benefits of a change of environment were often evident enough to be indisputable.
Today, advanced models, satellite imagery, and biological approaches such as environmental epigenetics are enhancing our understanding of health risks related to climate change.
Extreme Rainfall and Health
The increase in precipitation variability is linked to climate warming, which leads to higher atmospheric humidity and extreme rainfall events. These manifestations can cause rapid weather changes, increasing interactions with harmful aerosols and raising the risk for various cardiovascular and respiratory conditions. However, a full understanding of the association between rain and health has been hindered by conflicting results and methodological issues (limited geographical locations and short observation durations) in studies.
The association between rainfall intensity and health effects is likely nonlinear. Moderate precipitation can mitigate summer heat and help reduce air pollution, an effect that may lower some environmental health risks. Conversely, intense, low-frequency, short-duration rainfall events can have particularly harmful effects on health, as such events can trigger rapid weather changes, increased proliferation of pathogens, and a rise in the risk of various pollutants, potentially exacerbating health conditions.
Rain and Mortality
A study published in October 2024 used an intensity-duration-frequency model to characterize rainfall events along three indices (high intensity, low frequency, and short duration) and combined these with mortality data from 34 countries or regions. Researchers estimated associations between mortality (all cause, cardiovascular, and respiratory) and rainfall events with different return periods (the average time expected before an extreme event of a certain magnitude occurs again), accounting for crucial effect modifiers, including climatic, socioeconomic, and urban environmental conditions.
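To illustrate what a "return period" means in practice, here is a rough sketch of how a 5-year return level could be estimated empirically from a daily rainfall record. This is our illustration only; the study fit an intensity-duration-frequency model rather than taking a raw quantile:

```python
import numpy as np

def empirical_return_level(daily_rain_mm: np.ndarray, record_years: float,
                           return_period_years: float = 5.0) -> float:
    """Daily rainfall amount exceeded, on average, once every
    return_period_years years in the observed record."""
    expected_exceedances = record_years / return_period_years
    p = 1.0 - expected_exceedances / len(daily_rain_mm)  # matching empirical quantile
    return float(np.quantile(daily_rain_mm, p))

# Example with 40 years of synthetic daily rainfall:
rng = np.random.default_rng(0)
rain = rng.gamma(shape=0.3, scale=8.0, size=40 * 365)
print(empirical_return_level(rain, record_years=40.0))  # approximate 5-year daily extreme
```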
The analysis included 109,954,744 deaths from all causes; 31,164,161 cardiovascular deaths; and 11,817,278 respiratory deaths. During the study period, from 1980 to 2020, a total of 50,913 rainfall events with a 1-year return period, 8362 events with a 2-year return period, and 3301 events with a 5-year return period were identified.
The most significant finding was a global positive association between all-cause mortality and extreme rainfall events with a 5-year return period. One day of extreme rainfall with a 5-year return period was associated with a cumulative relative risk (RRc) of 1.08 (95% CI, 1.05-1.11) for daily mortality from all causes. Rainfall events with a 2-year return period were associated with increased daily respiratory mortality (RRc, 1.14), while no significant effect was observed for cardiovascular mortality during the same period. Rainfall events with a 5-year return period were associated with an increased risk for both cardiovascular mortality (RRc, 1.05) and respiratory mortality (RRc, 1.29), with the association markedly stronger for respiratory mortality.
Points of Concern
According to the authors, moderate to high rainfall can exert protective effects through two main mechanisms: Improving air quality (rainfall can reduce the atmospheric concentration of particulate matter 2.5 µm or less in diameter) and changing people’s behavior (more time spent in enclosed environments, reducing direct exposure to outdoor air pollution and nonoptimal temperatures). As rainfall intensity increases, the initial protective effects may be overshadowed by a cascade of negative impacts including:
- Critical resource disruptions: Intense rainfall can cause severe disruptions to access to healthcare, infrastructure damage including power outages, and compromised water and food quality.
- Physiological effects: Increased humidity levels facilitate the growth of airborne pathogens, potentially triggering allergic reactions and respiratory issues, particularly in vulnerable individuals. Rapid shifts in atmospheric pressure and temperature fluctuations can lead to cardiovascular and respiratory complications.
- Indirect effects: Extreme rainfall can have profound effects on mental health, inducing stress and anxiety that may exacerbate pre-existing mental health conditions and indirectly contribute to increased overall mortality from nonexternal causes.
The intensity-response curves for the health effects of heavy rainfall showed a nonlinear trend, transitioning from a protective effect at moderate levels of rainfall to a risk for severe harm when rainfall intensity became extreme. Additionally, the significant effects of extreme events were modified by various types of climate and were more pronounced in areas characterized by low variability in precipitation or sparse vegetation cover.
The study demonstrated that various local factors, such as climatic conditions, climate type, and vegetation cover, can potentially influence cardiovascular and respiratory mortality and all-cause mortality related to precipitation. The findings may help physicians convey to their patients the impact of climate change on their health.
This story was translated from Univadis Italy using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
On Second Thought: Aspirin for Primary Prevention — What We Really Know
This transcript has been edited for clarity.
Our recommendations vis-à-vis aspirin have evolved at a dizzying pace. The young’uns watching us right now don’t know what things were like in the 1980s. The Reagan era was a wild, heady time when nuclear war was imminent and we didn’t prescribe aspirin to patients.
That only started in 1988, which was a banner year in human history. Not because a number of doves were incinerated by the lighting of the Olympic torch at the Seoul Olympics — look it up if you don’t know what I’m talking about — but because 1988 saw the publication of the ISIS-2 trial, which first showed a mortality benefit to prescribing aspirin post–myocardial infarction (MI).
Giving patients aspirin during or after a heart attack is not controversial. It’s one of the few things in this business that isn’t, but that’s secondary prevention — treating somebody after they develop a disease. Primary prevention, treating them before they have their incident event, is a very different ballgame. Here, things are messy.
For one thing, the doses used have been very inconsistent. We should point out that the reason for 81 mg of aspirin is very arbitrary and is rooted in the old apothecary system of weights and measures. A standard dose of aspirin was 5 grains, where 20 grains made 1 scruple, 3 scruples made 1 dram, 8 drams made 1 oz, and 12 oz made 1 lb (because screw you, metric system). Therefore, 5 grains came out to 325 mg of aspirin, and one quarter of the standard dose became 81 mg once you rounded off the decimal.
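For anyone who wants to see that rounding spelled out, here is the conversion, using the standard apothecary grain of roughly 64.8 mg (a quick sketch; the constant is a textbook value, not something from the transcript):

```python
GRAIN_MG = 64.79891            # one apothecary grain in milligrams

five_grains = 5 * GRAIN_MG     # ~324.0 mg, marketed as the familiar 325-mg tablet
quarter_dose = 325 / 4         # 81.25 mg, rounded to the 81-mg "baby" aspirin

print(f"5 grains = {five_grains:.1f} mg (sold as 325 mg)")
print(f"one quarter of 325 mg = {quarter_dose} mg -> 81 mg")
```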
People have tried all kinds of dosing structures with aspirin prophylaxis. The Physicians’ Health Study used full-dose aspirin, 325 mg every 2 days, while the Hypertension Optimal Treatment (HOT) trial tested 75 mg daily and the Women’s Health Study tested 100 mg, but every other day.
Ironically, almost no one has studied 81 mg every day, which is weird if you think about it. The bigger problem here is not the variability of doses used, but the discrepancy when you look at older vs newer studies.
Older studies, like the Physicians’ Health Study, did show a benefit, at least in the subgroup of patients over age 50 years, which is probably where the “everybody over 50 should be taking an aspirin” idea comes from, at least as near as I can tell.
More recent studies, like the Women’s Health Study, ASPREE, or ASPIRE, didn’t show a benefit. I know what you’re thinking: Newer stuff is always better. That’s why you should never trust anybody over age 40 years. The context of primary prevention studies has changed. In the ‘80s and ‘90s, people smoked more and we didn’t have the same medications that we have today. We talked about all this in the beta-blocker video to explain why beta-blockers don’t seem to have a benefit post MI.
We have a similar issue here. The magnitude of the benefit with aspirin primary prevention has decreased because we’re all just healthier overall. So, yay! Progress! Here’s where the numbers matter. No one is saying that aspirin doesn’t help. It does.
If we look at the 2019 meta-analysis published in JAMA, there is a cardiovascular benefit. The numbers bear that out. I know you’re all here for the math, so here we go. Aspirin reduced the composite cardiovascular endpoint from 65.2 to 60.2 events per 10,000 patient-years; or to put it more meaningfully in absolute risk reduction terms, because that’s my jam, an absolute risk reduction of 0.41%, which means a number needed to treat of 241, which is okay-ish. It’s not super-great, but it may be justifiable for something that costs next to nothing.
The tradeoff is bleeding. Major bleeding increased from 16.4 to 23.1 bleeds per 10,000 patient-years, or an absolute risk increase of 0.47%, which is a number needed to harm of 210. That’s the problem. Aspirin does prevent heart disease. The benefit is small, for sure, but the real problem is that it’s outweighed by the risk of bleeding, so you’re not really coming out ahead.
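For anyone who wants to verify that arithmetic, here is a minimal sketch using the rates and risk differences exactly as quoted above. The NNT and NNH come out slightly different from the quoted 241 and 210, presumably because the published figures were computed from unrounded inputs:

```python
# Number needed to treat/harm from the absolute risk differences
# quoted above (2019 JAMA meta-analysis figures as given in the
# transcript). NNT = 1 / absolute risk difference.

def nn(risk_diff_percent: float) -> float:
    """NNT or NNH from an absolute risk difference given in percent."""
    return 1 / (risk_diff_percent / 100)

# Cardiovascular events: 65.2 -> 60.2 per 10,000 patient-years
cv_rate_diff = (65.2 - 60.2) / 10_000
print(f"CV rate difference: {cv_rate_diff:.2%} per patient-year")  # 0.05%

print(f"NNT from 0.41% ARR: {nn(0.41):.0f}")  # ~244 (quoted: 241)
print(f"NNH from 0.47% ARI: {nn(0.47):.0f}")  # ~213 (quoted: 210)
```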
The real tragedy here is that the public is locked into the idea that everyone over age 50 years should be taking an aspirin. Even today, even though guidelines have recommended against aspirin for primary prevention for some time, data from the National Health Interview Survey found that nearly one in three older adults take aspirin for primary prevention when they shouldn’t be. That’s a large number of people. That’s millions of Americans — and Canadians, but nobody cares about us. It’s fine.
That’s the point. We’re not debunking aspirin. It does work. The benefits are just really small in a primary prevention population and offset by the admittedly also really small risks of bleeding. It’s a tradeoff that doesn’t really work in your favor.
But that’s aspirin for cardiovascular disease. When it comes to cancer or DVT prophylaxis, that’s another really interesting story. We might have to save that for another time. Do I know how to tease a sequel or what?
Labos, a cardiologist at Kirkland Medical Center, Montreal, Quebec, Canada, has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Fewer Recurrent Cardiovascular Events Seen With TNF Inhibitor Use in Axial Spondyloarthritis
TOPLINE:
Tumor necrosis factor (TNF) inhibitors are associated with a reduced risk for recurrent cardiovascular events in patients with radiographic axial spondyloarthritis (axSpA) and a history of cardiovascular events.
METHODOLOGY:
- The researchers conducted a nationwide cohort study using data from the Korean National Claims Database, including 413 patients diagnosed with cardiovascular events following a radiographic axSpA diagnosis.
- Of all patients, 75 received TNF inhibitors (mean age, 51.9 years; 92% men) and 338 did not receive TNF inhibitors (mean age, 60.7 years; 74.9% men).
- Patients were followed from the date of the first cardiovascular event to the date of recurrence, the last date with claims data, or up to December 2021.
- The study outcome was recurrent cardiovascular events, namely myocardial infarction and stroke, occurring more than 28 days after the first event.
- The effect of TNF inhibitor exposure on the risk for recurrent cardiovascular events was assessed using an inverse probability weighted Cox regression analysis.
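For readers unfamiliar with that analysis type, here is a minimal, hypothetical sketch of an inverse probability weighted Cox model in Python. The file, column names, and covariates are illustrative stand-ins; the study’s actual dataset and model specification are not public in this summary.

```python
# Hypothetical sketch of an inverse-probability-weighted Cox model,
# the analysis type named above. All column names and the input file
# are illustrative assumptions, not the study's actual data.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

df = pd.read_csv("axspa_cohort.csv")  # hypothetical cohort file

# 1) Propensity of receiving a TNF inhibitor, given baseline covariates
covars = ["age", "sex", "hypertension", "diabetes"]  # illustrative
ps_model = LogisticRegression(max_iter=1000).fit(df[covars], df["tnfi"])
ps = ps_model.predict_proba(df[covars])[:, 1]

# 2) Stabilized inverse-probability weights
p_treat = df["tnfi"].mean()
df["ipw"] = df["tnfi"] * p_treat / ps + (1 - df["tnfi"]) * (1 - p_treat) / (1 - ps)

# 3) Weighted Cox regression with robust variance
cph = CoxPHFitter()
cph.fit(df[["time_to_event", "recurrence", "tnfi", "ipw"]],
        duration_col="time_to_event", event_col="recurrence",
        weights_col="ipw", robust=True)
cph.print_summary()  # hazard ratio for TNF inhibitor exposure
```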
TAKEAWAY:
- The incidence of recurrent cardiovascular events in patients with radiographic axSpA was 32 per 1000 person-years.
- The incidence was 19 per 1000 person-years in the patients exposed to TNF inhibitors, whereas it was 36 per 1000 person-years in those not exposed to TNF inhibitors.
- Exposure to TNF inhibitors was associated with a 67% lower risk for recurrent cardiovascular events than non-exposure (P = .038).
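As a rough illustration of why the weighting matters, compare the crude rate ratio implied by those incidences with the reported adjusted estimate. This is a back-of-the-envelope sketch; person-year denominators are not given in this summary:

```python
# Crude incidence contrast implied by the rates quoted above.
# Person-year denominators are not reported here, so the rates are
# taken as given.
exposed = 19 / 1000     # recurrent events per person-year, TNF inhibitor group
unexposed = 36 / 1000   # recurrent events per person-year, unexposed group

crude_ratio = exposed / unexposed
print(f"Crude rate ratio: {crude_ratio:.2f}")  # ~0.53 (47% lower, unadjusted)
# The reported 67% lower risk (hazard ratio ~0.33) comes from the
# inverse probability weighted model, which adjusts for the groups'
# different baseline characteristics (e.g., age and sex mix).
```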
IN PRACTICE:
“Our data add to previous knowledge by providing more direct evidence that TNFi [tumor necrosis factor inhibitors] could reduce the risk of recurrent cardiovascular events,” the authors wrote.
SOURCE:
The study was led by Oh Chan Kwon, MD, PhD, and Hye Sun Lee, PhD, Yonsei University College of Medicine, Seoul, South Korea. It was published online on October 4, 2024, in Arthritis Research & Therapy.
LIMITATIONS:
The lack of data on certain cardiovascular risk factors such as obesity, smoking, and lifestyle may have led to residual confounding. The patient count in the TNF inhibitor exposure group was not adequate to analyze each TNF inhibitor medication separately. The study included only Korean patients, limiting the generalizability to other ethnic populations. The number of recurrent stroke events was relatively small, making it infeasible to analyze myocardial infarction and stroke separately.
DISCLOSURES:
The study was funded by Yuhan Corporation as part of its “2023 Investigator Initiated Translation Research Program.” The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
New Evidence That Plaque Buildup Shouldn’t Be Ignored
Subclinical disease detected on imaging predicts death, report investigators who found that plaque burden on 3D vascular ultrasound and coronary artery calcium on CT predicted death better than traditional risk factors did.
The work not only highlights the importance of early detection, but it also has clinical implications, said Valentin Fuster, MD, president of the Mount Sinai Fuster Heart Hospital in New York. “It’s going to change things,” he said. “What I believe is going to happen is that we will begin to evaluate people with risk factors at age 30 using imaging. Today, we evaluate people at age 50 using clinical practice guidelines.”
Fuster’s team developed 3D vascular ultrasound to assess plaque burden and applied it in a prospective cohort study known as BioImage. The researchers assessed 6102 patients in Chicago, Illinois, and Fort Lauderdale, Florida, using 3D vascular ultrasound of the carotid artery and another well-established modality — coronary artery calcium, determined by CT.
Participants had no cardiovascular symptoms, yet their plaque burden and calcium scores at the beginning of the study were significantly associated with death during the 15 years of follow-up, even after taking risk factors and medication into account. The results are published in the Journal of the American College of Cardiology.
“Now, there is no question that subclinical disease on imaging predicts mortality,” said Fuster.
David J. Maron, MD, a preventive cardiologist at the Stanford University School of Medicine in California, calls the finding “very important.”
“The presence of atherosclerosis is powerful knowledge to guide the intensity of therapy and to motivate patients and clinicians to treat it,” said Maron, who is the co-author of an accompanying editorial and was not involved in the study.
Predicting Risk Early
The research also showed that the risk for death increases if the burden of plaque in the carotid artery increases over time. Both plaque burden shown on 3D vascular ultrasound and coronary artery calcium on CT were better predictors of death than traditional risk factors.
Maron says recent studies of younger populations, such as Progression of Early Subclinical Atherosclerosis (PESA) and Coronary Artery Risk Development in Young Adults (CARDIA), show that “risk factors at a young age have much more impact on arterial disease than when we measure risk factors at older age.” The CARDIA study showed signs of atherosclerosis in patients as young as in their twenties. This paradigm shift to early detection will now be possible thanks to technological advances like 3D vascular ultrasound.
Maron said he agrees with screening earlier in life. “The risk of having an event is related to the plaque burden and the number of years that a patient has been exposed to that burden. The earlier in life we can identify the burden to slow, arrest, or even reverse the plaque, the better.”
Maron points out that the study looked at an older population and did not include information on cause of death. While a study of younger people and data on cardiac causes of death would be useful, he says the study’s conclusions remain significant.
3D Vascular Ultrasound vs Coronary Artery Calcium
While both imaging methods in the study predicted death better than cardiovascular risk factors alone, each option has advantages.
For coronary artery calcium, “there’s a huge amount of literature demonstrating the association with cardiovascular events, there’s a standardized scoring system, there are widespread facilities for computed tomography, and there is not a lot of variability in the measurement — it’s not dependent on the operator,” said Maron.
But there is one drawback. The scoring system, the Agatston score, can paradoxically go up following aggressive lowering of low-density lipoprotein cholesterol. “Once coronary calcium is present, it is challenging to interpret a repeat scan because we don’t know if the increase in score is due to progression or increasing density of the calcium, which is a sign of healing,” said Maron.
Vascular ultrasound avoids this problem and can also identify early noncalcified plaques and monitor their progression before they would appear on CT. Furthermore, the imaging does not add to lifetime radiation dose, as CT does, Fuster said.
3D ultrasound technology will soon be available in an inexpensive, automated, and easy-to-use format, he explains. Fuster envisions a scenario in which a nurse in a low-income country, using a cell phone app, will be able to assess atherosclerosis in a patient’s femoral artery. “In less than 1 hour, we can predict disease much more rigorously than with risk factors alone,” he said. “I think this is very exciting.”
Progression Increases Risk
Finding any atherosclerosis means an increased risk for death, but a greater burden or amount of atherosclerosis increases that risk, said Fuster. Progression of atherosclerosis increases risk even further.
The study looked at changes in atherosclerosis burden on vascular ultrasound in a subset of 732 patients a median of 8.9 years after their first test. Those with progression had a higher risk for death than those with regression or no atherosclerosis. “Progression is much more significant in predicting mortality than atherosclerosis findings alone,” Fuster said.
Maron said this finding points to “two great values from noninvasive imaging of atherosclerosis.” Not only does imaging detect atherosclerosis, but it can also characterize the burden and any calcification. Further, it allows doctors to monitor the response to interventions such as lifestyle changes and medical therapy. “Serial imaging of plaque burden will really enhance the management of atherosclerosis,” said Maron. “If we discover that someone is progressing rapidly, we can intensify therapy.”
He says imaging results also provide needed motivation for both clinicians and patients to take action that would prevent the deaths that result from atherosclerosis.
A version of this article appeared on Medscape.com.